entry_id: http://arxiv.org/abs/2409.02506v1
published: 2024-09-04 08:06:11
title: The exact lower bound of CNOT-complexity for fault-tolerant quantum Fourier transform
authors: Qiqing Xia, Huiqin Xie, Li Yang
primary_category: quant-ph
categories: quant-ph
§ ABSTRACT
The quantum Fourier transform (QFT) is a crucial subroutine in many quantum algorithms. In this paper, we study the exact lower bound problem of CNOT gate complexity for the fault-tolerant QFT. First, we consider approximating the ancilla-free controlled-R_k in the QFT logical circuit with a standard set of universal gates, aiming to minimize the number of T gates. Since different decompositions of the controlled-R_k generate various single-qubit gates in addition to CNOT gates, we propose an algorithm that combines numerical and analytical methods to exactly compute the minimum T gate count for approximating any single-qubit gate to any given accuracy. Afterwards, we prove that the exact lower bound problem of T gate complexity for the QFT is NP-complete. Furthermore, we provide the transversal implementation of universal quantum gates, prove that it has the minimum number of CNOT gates, and analyze the minimum CNOT count for transversally implementing the T gate. We then compute the exact lower bound of CNOT gate complexity for the fault-tolerant QFT at the current maximum fault-tolerant accuracy 10^−2. Our work can provide a reference for designing algorithms based on active defense in a quantum setting.
§ INTRODUCTION
A lower bound exists on the execution time of quantum gates due to physical limitations, making it necessary to study the optimal implementation of quantum circuits. In particular, the quantum Fourier transform (QFT) <cit.> is a key subroutine in many quantum algorithms, such as Shor's algorithm for solving integer factorization and discrete logarithm problems <cit.>, quantum amplitude estimation <cit.>, phase estimation <cit.>, solving systems of linear equations <cit.>, quantum arithmetic <cit.>, and various fast quantum algorithms for solving hidden subgroup problems <cit.>. The efficient execution of the QFT on quantum computers is crucial for the success of these algorithms. One of its significant applications is breaking cryptosystems such as RSA <cit.> and ElGamal <cit.>. One of the most feasible candidates for such large-scale quantum computation is fault-tolerant quantum computation (FTQC) <cit.>. FTQC relies on the Clifford gates and the T gate, and any single-qubit gate can be approximated to arbitrary accuracy using a circuit composed of these gates. However, the T gate requires ancillary states and CNOT gates for transversal implementation, which is relatively expensive <cit.>. The number of T gates can therefore serve as a first-order approximation of the resources for physically implementing a quantum circuit.
Currently, there are two main methods for optimizing the number of T gates in QFT circuits. One method is to define an approximate version of the QFT (AQFT) <cit.> by removing all rotation gates with angles below a certain threshold, thereby optimizing the number of T gates for the AQFT <cit.>. The other is to decompose the controlled-R_k gates in the QFT circuit and use gate synthesis or state distillation methods to approximate the single-qubit gates. The work in <cit.> compares the resources for gate synthesis and state distillation, concluding that gate synthesis is a better method for implementing the fault-tolerant QFT. The optimal method for gate synthesis is provided by the Solovay-Kitaev algorithm <cit.>, and the number of T gates for approximating any single-qubit gate is 𝒪(log^3.97(1/ϵ)). Based on this, <cit.> uses a numerical method to find the fault-tolerant approximation properties of the phase rotation gates in exponential time. The optimization methods for the number of T gates required to approximate any single-qubit gate include probabilistic algorithms with feedback <cit.>, efficient random algorithms <cit.>, using Repeat-Until-Success circuits <cit.>, using auxiliary qubits <cit.>, efficient approximation algorithms achieving accuracy 10^-15 <cit.>, rounding off a unitary to a unitary over the ring ℤ[i,1/√(2)] <cit.>, and probabilistic approximation for Z-rotation gates based on the gridsynth algorithm <cit.>.
These related works focus on optimizing the number of logical T gates associated with the QFT using approximation algorithms. However, the specific implementation of a quantum computer is influenced by physical factors, as it is fundamentally a physical system. In FTQC, the CNOT gate is regarded as the main cause of quantum errors <cit.> and has a longer execution time than single-qubit gates. The computational complexity of quantum algorithms can be reduced to the number of CNOT gates <cit.>, and minimizing the number of CNOT gates is typically regarded as a secondary optimization objective, such as optimizing the number of CNOT gates in the quantum circuit for Shor's algorithm <cit.>. However, the problem of CNOT-optimal quantum circuit synthesis over gate sets consisting of CNOT and phase gates is NP-complete <cit.>. In some cases, optimizations aimed at reducing the number of T gates can result in a significant increase in the number of CNOT gates. Fortunately, we have found a method in which the savings from optimizing T gates compensate for the explosion in the number of CNOT gates. Proving minimality of the CNOT-count is challenging for ancilla-assisted gate synthesis, but it is provable for ancilla-free gate synthesis. Thus, in this way, we can obtain the exact lower bound of CNOT gate complexity for the fault-tolerant QFT, which differs from previous works. To our knowledge, there has not been such an analysis before. This analysis is of great significance and can provide a reference for active defense in a quantum setting.
Our contributions
In this paper, we consider the logical resources and fault-tolerance resources of the QFT. The logical resources are measured by the number of T gates, which is called the T-count. Since the fault-tolerant implementation of the T gate relies on CNOT gates, the fault-tolerance resources are measured by the number of CNOT gates, which is called the CNOT-count. In more detail, our main contributions are as follows:
* First, we find an ancilla-free gate synthesis method for the controlled-R_k gate with the minimum T-count and provide the corresponding proof. Interestingly, this method also achieves the minimum fault-tolerant CNOT-count. When the controlled-R_k is decomposed in different ways, it generates different single-qubit gates in addition to CNOT gates. Since current quantum computers cannot implement all single-qubit gates, as a generalization we propose an algorithm to exactly compute the minimum T-count for approximating any single-qubit gate to any given accuracy using Hadamard gates and T gates. This algorithm uses analytical methods to avoid traversing all quantum states in the normalized space and numerical methods to determine whether the error conditions are satisfied. Afterwards, we prove that the exact lower bound problem of the T-count for the QFT is at least as hard as the K-SAT problem and is NP-complete.
* Furthermore, we provide a transversal implementation of Z-rotation gates satisfying certain conditions with the minimum CNOT-count and analyze the minimum CNOT-count for transversally implementing the T gate. We then compute the exact lower bound of CNOT gate complexity for fault-tolerant QFT with different input lengths at the current maximum fault-tolerant accuracy 10^-2 <cit.>. In particular, we estimate the lower bound of the effective execution time of the QFT based on Steane code on ion trap computers, which can provide a reference for quantum computation based on the QFT. Finally, we discuss that determining the best possible value of c in 𝒪(log^c(1/ϵ)) implied by the Solovay–Kitaev theorem is at least NP-hard, and the circuit optimization problem is at least QMA-hard.
§ PRELIMINARIES
§.§ Common quantum gates
We briefly recall Clifford and T quantum gates, including universal gates and Pauli-X (Y, Z) gates. Their circuit symbols and matrix representations are shown in Figure <ref>.
Here, {H, S, T, CNOT} is called the standard set of universal gates. When the Pauli matrices are exponentiated, they will produce three types of useful unitary matrices, namely, the rotation operators around the x̂, ŷ and ẑ axes, defined as follows:
R_x(θ)≡ e^-iθ X/2=cos θ/2I-isin θ/2X=[ cos θ/2 -isin θ/2; -isin θ/2 cos θ/2 ],
R_y(θ)≡ e^-iθ Y/2=cos θ/2I-isin θ/2Y=[ cos θ/2 -sin θ/2; sin θ/2 cos θ/2 ],
R_z(θ)≡ e^-iθ Z/2=cos θ/2I-isin θ/2Z=[ e^-iθ/2 0; 0 e^iθ/2 ],
if rotating by an angle θ around the axis n̂=(n_x,n_y,n_z), then the rotation operator can be denoted as
R_n̂(θ)=cosθ/2I-isinθ/2(n_xX+n_yY+n_zZ),
these rotation operators play a crucial role in decomposing controlled unitary operators.
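To make these definitions concrete, here is a minimal numerical sketch (our own illustration, not from the original text) that builds R_x, R_y, R_z and the general R_n̂(θ) directly from the Pauli matrices:

```python
import numpy as np

# Pauli matrices and the identity
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rot(axis, theta):
    """R_n(theta) = cos(theta/2) I - i sin(theta/2) (n_x X + n_y Y + n_z Z)."""
    nx, ny, nz = np.asarray(axis, dtype=float) / np.linalg.norm(axis)
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * (nx * X + ny * Y + nz * Z)

# sanity check: R_z(theta) = diag(e^{-i theta/2}, e^{i theta/2})
theta = np.pi / 4
assert np.allclose(rot([0, 0, 1], theta),
                   np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)]))
```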
§.§ Approximating controlled single-qubit unitary operators
We first introduce a lemma on the decomposition of a single-qubit unitary operator and a theorem on the decomposition of its controlled operator, as proven in <cit.>:
Suppose U is a unitary operator on a single qubit, then there exist real numbers α,β,γ and δ∈ [0,2π), such that
U=e^iαR_z(β)R_y(γ)R_z(δ) =[ e^i(α-β/2-δ/2)cosγ/2 -e^i(α-β/2+δ/2)sinγ/2; e^i(α+β/2-δ/2)sinγ/2 e^i(α+β/2+δ/2)cosγ/2 ].
It can be extended to a more general case: suppose m̂ and n̂ are non-parallel real unit vectors in the three-dimensional space, then U can be written as
U=e^iαR_n̂(β)R_m̂(γ)R_n̂(δ),
for appropriate choices of α,β,γ and δ.
For any controlled single-qubit unitary operator controlled-U, up to a global phase, it can be decomposed into two CNOT gates and three single-qubit unitary operators, the theorem is as follows:
Suppose U is a unitary gate on a single qubit, then there exist single-qubit unitary operators P≡ e^iα/2R_z(α), A≡ R_z(β)R_y(γ/2),B≡ R_y(-γ/2)R_z(-(δ+β)/2),C≡ R_z((δ-β)/2), such that ABC=I and U=e^iαAXBXC, where α is a global phase factor. Then, the controlled-U operation is C(U)=(P⊗ A) · CNOT · (I⊗ B) · CNOT · (I⊗ C). The circuit implementation is shown in Figure <ref>.
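As a sanity check of this theorem (a numerical sketch we add here; the angles are arbitrary test values), the following code builds A, B, C, and P from α, β, γ, δ and verifies both ABC=I and C(U)=(P⊗ A)·CNOT·(I⊗ B)·CNOT·(I⊗ C), with the control qubit as the first tensor factor:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Ry = lambda t: np.cos(t / 2) * I2 - 1j * np.sin(t / 2) * Y
Rz = lambda t: np.cos(t / 2) * I2 - 1j * np.sin(t / 2) * Z
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

alpha, beta, gamma, delta = 0.3, 1.1, 0.7, -0.4        # arbitrary test angles
U = np.exp(1j * alpha) * (Rz(beta) @ Ry(gamma) @ Rz(delta))

P = np.exp(1j * alpha / 2) * Rz(alpha)                 # = diag(1, e^{i alpha})
A = Rz(beta) @ Ry(gamma / 2)
B = Ry(-gamma / 2) @ Rz(-(delta + beta) / 2)
C = Rz((delta - beta) / 2)

CU = np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), U]])  # controlled-U
RHS = np.kron(P, A) @ CNOT @ np.kron(I2, B) @ CNOT @ np.kron(I2, C)

assert np.allclose(A @ B @ C, I2)
assert np.allclose(CU, RHS)
```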
The set of universal gates {H, S, T, CNOT} is discrete, while the set of unitary operations is continuous. Approximating any unitary operation using this discrete set will inevitably introduce errors, as defined in <cit.>:
Let U and V be two unitary operators on the same state space, where U is the desired target unitary operator and V is the unitary operator actually implemented. Define the error when V is implemented instead of U as
E(U,V)=max_|ψ⟩||(U-V)|ψ⟩||
where the maximum is taken over all normalized quantum states |ψ⟩ in the state space.
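Although the text does not spell it out, for a finite-dimensional operator this maximum has a closed form: E(U,V) equals the largest singular value (the spectral norm) of U-V, so no search over |ψ⟩ is needed. A minimal sketch:

```python
import numpy as np

def approx_error(U, V):
    """E(U, V) = max over normalized |psi> of ||(U - V)|psi>||,
    i.e. the largest singular value of U - V."""
    return np.linalg.svd(U - V, compute_uv=False)[0]

# example: the distance between R_z(pi/8) and the identity is 2 sin(pi/32)
Rz = lambda t: np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])
assert np.isclose(approx_error(Rz(np.pi / 8), np.eye(2)), 2 * np.sin(np.pi / 32))
```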
The Solovay–Kitaev theorem <cit.> is one of the most important fundamental results in the theory of quantum computation. It shows that for any single-qubit gate U and given any ϵ>0, U can be approximated to an accuracy ϵ with 𝒪(log^c(1/ϵ)) finite gates.
Let SU(2) be the set of all single-qubit unitary matrices with determinant 1, and 𝒢 be a finite set of elements in SU(2), closed under inverses, used to simulate other single-qubit gates. Let g_1⋯ g_l (g_i∈𝒢, i=1,⋯,l) be a word of length l from 𝒢, 𝒢_l be the set of all words of length at most l, and ⟨𝒢⟩ be the set of all words with finite length. If ⟨𝒢⟩ is dense in SU(2), then 𝒢_l is an ϵ-net in SU(2) for l=𝒪(log^c(1/ϵ)), where c ≈ 2 when using the measure E(·,·), or c ≈ 4 when using the trace distance D(·,·), as D(·,·)=2E(·,·). In our work, we use the measure E(·,·) as shown in Definition <ref>.
§.§ The quantum Fourier transform
The QFT is a linear operator on an orthonormal basis |0⟩,⋯,|N-1⟩, and its action on the basis states is
|j⟩→1/√(N)∑_k=0^N-1e^2π ijk/N|k⟩.
Equivalently, the action on an arbitrary state can be written
∑_j=0^N-1x_j|j⟩→1/√(N)∑_k=0^N-1y_k|k⟩,
where y_k is the discrete Fourier transform of the amplitudes x_j, y_k=∑_j=0^N-1x_je^2π ijk/N.
The QFT is a unitary transformation, and thus it can be implemented as a dynamic process on a quantum computer as follows:
Useful product representation of the QFT
* First, suppose N=2^n, and represent j in binary as j_1j_2⋯ j_n, then j=j_12^n-1+j_22^n-2+⋯+j_n.
* After performing the QFT on the basis state |j⟩, then we can obtain the useful product representation:
|j_1,⋯,j_n⟩→1/2^n/2(|0⟩+e^2π i0.j_n|1⟩)(|0⟩+e^2π i0.j_n-1j_n|1⟩)⋯(|0⟩+e^2π i0.j_1j_2⋯ j_n|1⟩),
where 0.j_ℓj_ℓ+1⋯ j_m=j_ℓ/2+j_ℓ+1/2^2+⋯+j_m/2^m-ℓ+1 is a binary fraction.
* Each tensor product component can be implemented by a controlled single-qubit unitary operation (controlled-R_k). Here, the single-qubit gate R_k denotes the unitary transformation
R_k=[ 1 0; 0 e^2π i/2^k ].
According to the useful product representation of the QFT in Eq. (<ref>), we introduce the construction of effective quantum circuits for the QFT <cit.>, as shown in Figure <ref>.
As can be seen from Figure <ref>, the efficient quantum circuit for the QFT requires n Hadamard gates, n(n-1)/2 controlled-R_k gates, and ⌊n/2⌋ swap gates.
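The product representation in Eq. (<ref>) is easy to check numerically. The sketch below (our own verification, not from the original) compares, for every basis state |j⟩ of a small register, the DFT amplitudes against the tensor product of the single-qubit factors, using 0.j_{n-ℓ+1}⋯j_n = (j mod 2^ℓ)/2^ℓ for the ℓ-th (most-significant-first) factor:

```python
import numpy as np

n = 3                     # number of qubits
N = 2 ** n

for j in range(N):
    # direct definition: QFT|j> = (1/sqrt(N)) sum_k e^{2 pi i jk/N} |k>
    qft_j = np.exp(2j * np.pi * j * np.arange(N) / N) / np.sqrt(N)

    # product representation, first factor = (|0> + e^{2 pi i 0.j_n}|1>)/sqrt(2)
    prod = np.array([1.0 + 0j])
    for l in range(1, n + 1):
        frac = (j % 2 ** l) / 2 ** l            # binary fraction 0.j_{n-l+1}...j_n
        factor = np.array([1.0, np.exp(2j * np.pi * frac)]) / np.sqrt(2)
        prod = np.kron(prod, factor)

    assert np.allclose(qft_j, prod)
```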
§ THE EXACT LOWER BOUND PROBLEM OF T-COUNT FOR THE QFT
Although the standard QFT circuit is typically implemented in a specific way, as shown in Figure <ref>, the QFT circuit is not unique and can achieve the same function through equivalent circuit transformations. Therefore, all QFT circuits must be equivalent to the circuit shown in Figure <ref>, and thus we can aim to analyze the exact lower bound of the T-count for the QFT based on the circuit in Figure <ref>.
§.§ Algorithm for approximating the ancilla-free controlled-R_k with the minimum T-count
Since current quantum computers cannot implement all single-qubit gates, the ancilla-free gate synthesis approximates single-qubit gates with the Hadamard gate and T gate. To approximate the controlled-R_k operation, the ancilla-free controlled-R_k can be decomposed into single-qubit gates in addition to CNOT gates. However, when different decomposition methods are used, the single-qubit gates decomposed from the controlled-R_k are also different. To approximate different single-qubit gates, we introduce a theorem from <cit.>, for which we have revised the relevant content. The theorem states that any single-qubit unitary operation can be approximated to arbitrary accuracy using the Hadamard gate and T gate, as follows:
Up to an unimportant global phase, T has the same effect as R_z(π/4), and HTH has the same effect as R_x(π/4). Combining these two operations, according to Eq. (<ref>), we obtain
T(HTH) =e^iπ/4R_z(π/4)R_x(π/4)
=e^iπ/4(cosπ/8I-isinπ/8Z)(cosπ/8I-isinπ/8X)
=e^iπ/4((cosπ/8)^2I-isinπ/8(cosπ/8(X+Z)+sinπ/8Y))
=e^iπ/4[ 1+e^-iπ/4/2 e^-iπ/4-1/2; 1-e^iπ/4/2 1+e^iπ/4/2 ].
Using only the Hadamard gate and T gate, the rotation operator R_n̂(θ)=R_z(π/4)R_x(π/4) can be constructed to approximate any rotation operator around the n̂ axis, where cosθ/2=(cosπ/8)^2, n̂=sinπ/8/sinθ/2(cosπ/8,sinπ/8,cosπ/8). Let R_m̂(θ)=HR_n̂(θ)H, m̂=sinπ/8/sinθ/2(cosπ/8,-sinπ/8,cosπ/8); n̂ and m̂ are real unit vectors that are not parallel in the three-dimensional space. According to Eq. (<ref>), there exist appropriate positive integers n_1,n_2,n_3,
E(U,R_n̂(θ)^n_1HR_n̂(θ)^n_2HR_n̂(θ)^n_3)<ϵ,
where ϵ is the desired accuracy. For any given single-qubit operator U and any ϵ>0, a quantum circuit consisting only of Hadamard gates and T gates can approximate U within ϵ.
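The construction can be verified directly. The short sketch below (our addition) builds T·H·T·H, strips the global phase e^{iπ/4}, and checks cos(θ/2)=(cos π/8)^2 through the identity tr R_n̂(θ)=2cos(θ/2):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])

THTH = T @ H @ T @ H
R = np.exp(-1j * np.pi / 4) * THTH          # remove the global phase e^{i pi/4}

cos_half_theta = np.trace(R).real / 2       # tr R_n(theta) = 2 cos(theta/2)
assert np.isclose(cos_half_theta, np.cos(np.pi / 8) ** 2)

theta = 2 * np.arccos(cos_half_theta)       # theta is an irrational multiple of pi,
print(theta / np.pi)                        # so powers of R_n(theta) fill the circle
```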
Based on Theorem <ref>, we propose an algorithm for exactly computing the minimum resources required to approximate any single-qubit unitary operator U with a given accuracy using V=R_n̂(θ)^n_1HR_n̂(θ)^n_2H R_n̂(θ)^n_3. The algorithm is as follows:
In Algorithm <ref>, the minimum resources are measured by the minimum of n_1+n_2+n_3. The minimum T-count is twice min n_sum, since R_n̂(θ)=THTH and R_m̂(θ)=HTHT.
Since a computer is a discrete computing device, it is impossible to traverse all |ψ⟩ in the normalized space. To avoid this problem and ensure that our algorithm is exact, we combine numerical and analytical methods to perform a mathematical transformation in step 9 of Algorithm <ref>, turning this traversal problem into an extremum determination problem. The analytical method avoids traversing all quantum states in the normalized space, and the numerical method determines whether the error conditions are satisfied. When the numerical precision of the computed extremum is much finer than ϵ, step 10 will not result in any misjudgment. The transformation is as follows:
Mathematical computation in step 9 of Algorithm <ref>
* Let |ψ⟩=cosω/2|0⟩+e^iφsinω/2|1⟩, ω∈ [0,π],φ∈ [0,2π), then |ψ⟩=[ cosω/2; e^iφsinω/2 ].
* Let f(ω, φ)=||(U-V)|ψ⟩||, where f(ω, φ) is a continuous function over the domain. Then, compute the norm of the two-dimensional vector (U-V)|ψ⟩ for each set of n_1,n_2,n_3.
* Compute the partial derivatives ∂ f/∂ω and ∂ f/∂φ of the function f(ω, φ) and set ∂ f/∂ω=0 and ∂ f/∂φ=0.
* If ∂ f/∂ω=0 has no solution, then ∂ f/∂ω>0 (or<0) always holds, and the local maximum point is the boundary point (π,φ_0) (or (0,φ_0)).
* If ∂ f/∂φ=0 has no solution, then ∂ f/∂φ>0 (or<0) always holds, and f(ω, 0)≠ f(ω, 2π), which contradicts f(ω, 0)=f(ω, 2π)! Therefore, ∂ f/∂φ=0 must have a solution.
* If both ∂ f/∂ω=0 and ∂ f/∂φ=0 have solutions, then the possible extremum points (ω_0,φ_0) are found.
* Compute the second-order partial derivatives of the function f at (ω_0,φ_0) and construct the Hessian matrix D as follows:
D=[ ∂^2 f/∂ω^2 ∂^2 f/∂ω∂φ; ∂^2 f/∂φ∂ω ∂^2 f/∂φ^2 ].
* Compute the determinant det(D) of the Hessian matrix D and classify the point (ω_0,φ_0):
* If det(D)>0 and ∂^2 f/∂ω^2<0, then (ω_0,φ_0) is a local maximum point.
* If det(D)>0 and ∂^2 f/∂ω^2>0, then (ω_0,φ_0) is a local minimum point.
* If det(D)<0, then (ω_0,φ_0) is not an extremum point.
* If det(D)=0, the test is inconclusive: (ω_0,φ_0) may be a local maximum or minimum point.
* By comparing the function values at all possible local extremum points and boundary points, determine the global maximum E(U,V)=max (f(ω,φ)).
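A brute-force reconstruction of Algorithm <ref> can then be sketched as follows (our own simplified version: instead of the analytical extremum analysis of steps 1-9 above, it evaluates E(U,V) exactly as the largest singular value of U-V, which is equivalent to the maximization in Definition <ref>; the global phase of V is first aligned with U by a Frobenius-optimal heuristic, since global phases are physically unimportant):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
Rn = np.exp(-1j * np.pi / 4) * (T @ H @ T @ H)   # R_n(theta), global phase removed

def min_resources(U, eps, N0):
    """Smallest n1+n2+n3 <= N0 with E(U, V) < eps, where
    V = Rn^n1 H Rn^n2 H Rn^n3; the minimum T-count is 2*(n1+n2+n3)."""
    for n_sum in range(3, N0 + 1):
        for n1 in range(1, n_sum - 1):
            for n2 in range(1, n_sum - n1):
                n3 = n_sum - n1 - n2
                V = (np.linalg.matrix_power(Rn, n1) @ H @
                     np.linalg.matrix_power(Rn, n2) @ H @
                     np.linalg.matrix_power(Rn, n3))
                V = V * np.exp(-1j * np.angle(np.trace(V.conj().T @ U)))  # align phase
                if np.linalg.svd(U - V, compute_uv=False)[0] < eps:       # E(U, V)
                    return n_sum, (n1, n2, n3)
    return None                                   # not found within O(N0^3)
```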
We provide a lemma concerning the property of Algorithm <ref> as follows:
Given a unitary operator U and a positive integer N_0≥ 3, if the minimum resources can be found within the search space 𝒪(N_0^3), then the execution process of Algorithm <ref> can be regarded as a function h(ϵ) of the accuracy ϵ>0. h(ϵ) is a monotonically decreasing function, and h(ϵ)≥ 3 always holds for ϵ>0. In particular, h(ϵ)=0 if and only if ϵ=0 and U is a matrix representation of a combination of the Clifford and T gates, or the identity matrix.
Let ϵ_2>ϵ_1>0. If E≤ϵ_1, then the minimum resources satisfying the condition are h(ϵ_1), which must also satisfy the condition E≤ϵ_2; thus, h(ϵ_1)≥ h(ϵ_2). Conversely, if E≤ϵ_2, then the minimum resources satisfying the condition are h(ϵ_2), which do not necessarily satisfy the condition E≤ϵ_1; we then need to continue executing Algorithm <ref>, which ensures that h(ϵ_1)≥ h(ϵ_2). Therefore, h(ϵ_1)≥ h(ϵ_2), and we conclude that h(ϵ) is a monotonically decreasing function.
According to Theorem <ref>, n_1,n_2,n_3 are all positive integers, so h(ϵ)≥ 3 always holds. When h(ϵ)=0, U does not need to be approximated, and then ϵ=0. Thus, U is a matrix representation of a combination of the Clifford gates and the T gate, or the identity matrix.
Based on Lemma <ref>, we propose a property theorem about approximating the decomposition of unitary operators around the ẑ axis to analyze the minimum resources, as follows:
R_Z(β_0) is a known rotation operator, and the sum of the resources for approximating its decomposition into n+1 rotation operators R_Z(β_0-β_1-⋯-β_n)R_Z(β_1)⋯ R_Z(β_n) is minimum if and only if β_1=⋯=β_n=0.
For any given set of β_1,⋯,β_n and accuracy ϵ, we consider the following cases:
Case 1: R_Z(β_0)=R_Z(β_0-β_1-⋯-β_n)R_Z(β_1)⋯ R_Z(β_n), let E(R_Z(β_0),V)<ϵ,E(R_Z(β_0-β_1-⋯-β_n),V_0)<ϵ_0,E(R_Z(β_1),V_1)<ϵ_1,⋯,E(R_Z(β_n),V_n)<ϵ_n,ϵ_0+ϵ_1+⋯+ϵ_n=ϵ, corresponding to the functions h(ϵ),h_0(ϵ_0),h_1(ϵ_1),⋯ h_n(ϵ_n), respectively. Here, V,V_0,⋯,V_n are the actual approximation operators of R_Z(β_0), R_Z(β_0-β_1-⋯-β_n),R_Z(β_1),⋯,R_Z(β_n) respectively.
Case 2: R_Z(β_0-β_1-⋯-β_n)=R_Z(β_0)R_Z(-β_1)⋯ R_Z(-β_n), let E(R_Z(β_0-β_1-⋯-β_n),V_0')<ϵ,E(R_Z(β_0),V')<ϵ',E(R_Z(-β_1),V_1')<ϵ_1',⋯, E(R_Z(-β_n),V_n')<ϵ_n',ϵ'+ϵ_1'+⋯+ϵ_n'=ϵ, corresponding to the functions h_0(ϵ),h(ϵ'),h_1'(ϵ_1'),⋯,h_n'(ϵ_n'), respectively. Here, V',V_0',⋯,V_n' are the actual approximation operators of R_Z(β_0), R_Z(β_0-β_1-⋯-β_n),R_Z(-β_1),⋯,R_Z(-β_n) respectively.
Case 1 and Case 2 can be transformed into each other and are equivalent, meaning that a known rotation operator around the ẑ axis can be decomposed into a set of rotation operators around the ẑ axis. Thus, h(ϵ)≤ h_0(ϵ_0)+h_1(ϵ_1)+⋯ +h_n(ϵ_n) for Case 1, and h_0(ϵ) ≤ h(ϵ')+h_1'(ϵ_1')+⋯ +h_n'(ϵ_n') for Case 2; or h(ϵ)≥ h_0(ϵ_0)+h_1(ϵ_1)+⋯ +h_n(ϵ_n) for Case 1, and h_0(ϵ) ≥ h(ϵ')+h_1'(ϵ_1')+⋯ +h_n'(ϵ_n') for Case 2. Here, h, h_0,⋯,h_n, h_1' ⋯,h_n' depend only on rotation operators. V, V_0,⋯, V_n, V', V_0' ⋯,V_n' depend on rotation operators and accuracy. Additionally, h(ϵ) is a constant for different sets of β_1,⋯,β_n.
Now using the proof by contradiction, suppose h_0(ϵ) ≥ h(ϵ')+h_1'(ϵ_1')+⋯ +h_n'(ϵ_n'), then h_0(ϵ)≥ h(ϵ)+h_1'(ϵ)+⋯ +h_n'(ϵ) according to Lemma <ref>. Consequently, we have h_0(ϵ_0)+h_1(ϵ_1)+⋯ +h_n(ϵ_n) ≥ h_0(ϵ)+h_1(ϵ)+⋯ +h_n(ϵ) ≥ h(ϵ)+h_1'(ϵ)+⋯ +h_n'(ϵ)+h_1(ϵ)+⋯ +h_n(ϵ) ≥ h(ϵ), contradiction! In conclusion, h_0(ϵ) ≤ h(ϵ')+h_1'(ϵ_1')+⋯ +h_n'(ϵ_n') and h(ϵ)≤ h_0(ϵ_0)+h_1(ϵ_1)+⋯ +h_n(ϵ_n). That is, the resources for directly approximating R_Z(β_0) are minimum, achieving h(ϵ), if and only if β_1=⋯=β_n=0.
§.§ Complexity analysis
We now focus on the exact lower bound problem of the T-count for the QFT. We propose the following theorem, which states that a controlled single-qubit unitary operator satisfying certain conditions can be decomposed into one CNOT gate and two single-qubit gates that are conjugate transposes of each other.
A unitary operator controlled-(UX) can be decomposed into one CNOT gate and two single-qubit gates that are conjugate transposes of each other, if and only if U=e^iαR_z(β)R_y(γ)R_z(β).
Necessary condition: UX can be written as e^iα'/2A'X(A')^†, then U=e^iα'/2A'X(A')^†X. According to Theorem <ref> and Eq. (<ref>), up to a global phase, A'≡ R_z(β)R_y(γ/2), A'^†≡ R_y(-γ/2)R_z(-β). Therefore, U=e^iαR_z(β)R_y(γ)R_z(β).
Sufficient condition:
U=e^iαR_z(β)R_y(γ)R_z(β), according to Eq. (<ref>), then δ=β. According to Theorem <ref>, let A=R_z(β)R_y(γ/2), B=R_y(-γ/2)R_z(-β), C=I, then UX=e^iαAXB, where B=A^†. Therefore, controlled-(UX) can be decomposed into one CNOT gate and two single-qubit gates that are conjugate transposes of each other.
The method for decomposing the ancilla-free controlled-R_k, as shown in Figure <ref>, requires the minimum T-count when A=I or C=I. The minimum T-count is the sum of those for approximating R_z(-π /2^k) and R_z(π /2^k).
Suppose that there exists a decomposition method, as shown in Figure <ref>. Here, U=R_k, r is the number of CNOT gates, and P is the phase shift gate.
When r is an odd number, let U_3=⋯=U_r+1=I, then R_k=U_1XU_2X. Therefore, R_k=U'X can be decomposed into one CNOT and two single-qubit gates U_1,U_2 that are conjugate transposes of each other, where U'=U_1XU_2. Combining Eq. (<ref>) and Eq. (<ref>), we can obtain
R_k=[ 1 0; 0 e^2π i/2^k ]=[ e^i(α-β/2-δ/2)cosγ/2 -e^i(α-β/2+δ/2)sinγ/2; e^i(α+β/2-δ/2)sinγ/2 e^i(α+β/2+δ/2)cosγ/2 ].
Thus, we have α=π /2^k, β+δ=π/2^k-1, γ=0, then R_k=e^iπ/2^kR_z(π/2^k-1). According to Theorem <ref>, R_kX=e^iπ/2^kR_z(π/2^k-1)R_y(π)R_z(-π) is satisfied if and only if k=1, i.e., R_k=Z.
Therefore, r can only be an even number for k=2,⋯,n. U_1,⋯,U_r+1 are the operators R_z(θ_1),⋯, R_z(θ_r+1) that rotate around the ẑ axis, then
U_1U_2⋯ U_r+1=I ⟺θ_1+θ_2+⋯+θ_r+θ_r+1=0,
U_1XU_2X⋯ XU_rXU_r+1=R_z(β+δ) ⟺θ_1-θ_2+⋯-θ_r+θ_r+1=β+δ,
thus, θ_1+θ_3+⋯+θ_r+1=(β+δ)/2, θ_2+θ_4+⋯+θ_r=-(β+δ)/2. According to Theorem <ref>, the resources reach their minimum if and only if there exists a certain θ_i=(β+δ)/2 for i=2ℓ-1, where ℓ=1,⋯,r/2+1, and a certain θ_j=-(β+δ)/2 for j=2ℓ, where ℓ=1,⋯,r/2. At this point, r=2, and this decomposition method corresponds to the one shown in Figure <ref>, where A=I or C=I. From Eq. (<ref>), A_k=R_z(β), B_k=R_z(-π /2^k), C_k=R_z(π /2^k-β), β∈[0,2π) for each k=2,⋯,n. Let β=0 or π/2^k, so that A_k=I, C_k=R_z(π /2^k), or C_k=I, A_k=R_z(π /2^k); then the resources reach their minimum, i.e., the sum of those for approximating R_z(-π /2^k) and R_z(π /2^k).
When k is fixed (the controlled-R_k is determined), B_k is determined, while A_k and C_k are not unique and depend on β. To obtain the minimum resources, it is theoretically necessary to traverse β∈[0,2π) and then compute the minimum total resources min n_(A_k+C_k) for approximating A_k and C_k.
However, it is interesting that according to Proposition <ref>, the minimum sum of the resources for approximating A_k,C_k and B_k is equivalent to using Algorithm <ref> to compute the sum of those for approximating R_z(π /2^k) and R_z(-π /2^k). According to Remark <ref>, it follows that the minimum T-count is twice their resources. The exact lower bound problem of the T-count for the QFT can be considered as the problem of
determining an integer N_0 such that the minimum T-count for approximating R_z(π /2^k) or R_z(-π/2^k) with given accuracy ϵ can be found in the search space 𝒪(N_0^3), for all k=3,⋯,n. We call the problem ELBP_T-count.
To show the NP-completeness of ELBP_T-count, we recall the K-SAT problem (K≥ 3) <cit.>. Given a finite set of boolean variables X={x_1,x_2,⋯,x_n}, |X|=n, and a set of clauses C={C_1,C_2,⋯,C_m}, |C|=m, C=C_1∧ C_2∧⋯∧ C_m, each C_i is a disjunctive clause consisting of K literals. Consider whether there exists a truth assignment for the Boolean variables such that C is true. It is known that the K-SAT problem is NP-complete <cit.>, hence we can use a reduction from the K-SAT problem to prove the NP-completeness of ELBP_T-count. We propose the following theorem:
ELBP_T-count is NP-complete.
Given an integer N_0≥ 3, the search space is ∑_n_sum=3^N_0∑_n_1=1^n_sum-2(n_sum-n_1-1)=1/6N_0(N_0-1)(N_0-2)=𝒪(N_0^3) according to steps 4-6 in Algorithm <ref>. As shown in Theorem <ref>, N_0=𝒪(log^c(1/ϵ)), so the search space is polylogarithmic in 1/ϵ.
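The counting identity used here is just the number of ordered positive triples (n_1,n_2,n_3) with n_1+n_2+n_3 ≤ N_0, i.e., the binomial coefficient C(N_0,3); a two-line numerical check:

```python
N0 = 20
count = sum(s - n1 - 1 for s in range(3, N0 + 1) for n1 in range(1, s - 1))
assert count == N0 * (N0 - 1) * (N0 - 2) // 6
```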
Given an accuracy ϵ and a positive integer N_1=𝒪(log^c(1/ϵ)), let K=1/6N_1(N_1-1)(N_1-2)=𝒪(N_1^3). From an instance of the K-SAT problem (ℓ_31∨ℓ_32∨⋯∨ℓ_3K)∧(ℓ̄_31∨ℓ̄_32∨⋯∨ℓ̄_3K)∧⋯∧(ℓ_n1∨ℓ_n2∨⋯∨ℓ_nK)∧(ℓ̄_n1∨ℓ̄_n2∨⋯∨ℓ̄_nK), we construct an instance of ELBP_T-count, where the finite set of boolean variables is L={ℓ_31,ℓ_32,⋯,ℓ_3K,⋯, ℓ_n1,ℓ_n2,⋯,ℓ_nK}, |L|=(n-2)K, and the set of clauses is C={C_31,C_32,C_41,C_42,⋯,C_n1,C_n2}, |C|=2(n-2), C=C_31∧ C_32∧ C_41∧ C_42∧⋯∧ C_n1∧ C_n2; each C_k1 or C_k2, k=3,⋯,n, is a disjunctive clause consisting of K literals. Here, ∨ means taking the minimum value "min", and ∧ means taking the maximum value "max".
If the literal ℓ_kj or ℓ̄_kj is true for k=3,⋯,n, j=1,⋯,K, then the error E_kj is recorded; otherwise it is assigned ∞ and recorded, where E_kj for a certain k is the error for a certain R_z(π /2^k) or R_z(-π /2^k), and different j correspond to different V in the search space 𝒪(N_1^3) using Algorithm <ref>. Let ϵ'=max (E_31,E_32,⋯,E_3K,⋯,E_n1,E_n2,⋯,E_nK). If each clause is true, then at least one literal in each clause is true and ϵ' ≠∞; otherwise, at least one clause is false, and then ϵ'=∞. Therefore, the boolean formula is satisfiable if and only if ϵ' ≠∞. This means that the minimum T-count for approximating R_z(π /2^k) or R_z(-π/2^k) with the accuracy ϵ' can be found in the search space 𝒪(N_1^3) using Algorithm <ref>, for all k=3,⋯,n. Determine N_0=N_1 when ϵ' < ϵ; then
the minimum T-count for approximating R_z(π /2^k) or R_z(-π/2^k) can be found in the search space 𝒪(N_0^3); otherwise, it cannot.
Clearly ELBP_T-count is in NP, as the mathematical computation in step 9 of Algorithm <ref> is polynomial-time computable when given an N_0, and hence step 10 can be efficiently verified. NP-hardness follows from the above reduction of the K-SAT problem to ELBP_T-count, and hence ELBP_T-count is NP-complete.
§ THE EXACT LOWER BOUND OF CNOT-COUNT FOR THE FAULT-TOLERANT QFT
From Section 3, we know that ELBP_T-count is an NP-complete problem and that the fault-tolerant CNOT-count is positively correlated with the T-count. Therefore, the exact lower bound problem of the CNOT-count for the fault-tolerant QFT is also NP-complete. Nevertheless, we can still compute its exact lower bound under partial fault-tolerant accuracy. We now turn to the transversal implementation of universal quantum gates with the minimum CNOT-count.
§.§ Universal fault-tolerant quantum gates with the minimum CNOT-count
The universal gates can be implemented transversally without requiring a fault-tolerant construction in FTQC, except for the T gate, which is equivalent (up to a global phase) to the rotation operator R_z(π/4); so we need to consider the fault-tolerant construction of Z-rotation gates. Similar to the fault-tolerant construction of the T gate in <cit.>, we provide a general fault-tolerant construction for U=e^iθ/2R_z(θ) that cannot be implemented transversally without a fault-tolerant construction, as shown in Figure <ref>.
Here, the auxiliary state |Θ⟩ is a +1 eigenstate of the operator UXU^†=R_z(2θ)X.
Let M=e^iθR_z(2θ); we measure the single-qubit operator MX, which has eigenvalues ± 1, and its fault-tolerant measurement is shown in Figure <ref>.
Here, P is a phase shift gate, P=e^-iθ/2R_z(-θ), and M' is the fault-tolerant operator that can be implemented transversely for M without requiring a fault-tolerant construction. The framed parts are the preparation, verification, and final decoding of the cat state |Cat⟩=1/√(2)(|0_C⟩+|1_C⟩). If the cat state is successfully prepared, then
|Cat⟩|0_L⟩→ 1/√(2)(|0_C⟩|0_L⟩+e^2iθ|1_C⟩|1_L⟩)
→ 1/√(2)(|0_C⟩|0_L⟩+e^iθ|1_C⟩|1_L⟩)
= 1/√(2)( |0_C⟩+|1_C⟩/√(2)·|0_L⟩+e^iθ|1_L⟩/√(2)+ |0_C⟩-|1_C⟩/√(2)·|0_L⟩-e^iθ|1_L⟩/√(2))
→ 1/√(2)( |0⟩|0_L⟩+e^iθ|1_L⟩/√(2)+ |1⟩|0_L⟩-e^iθ|1_L⟩/√(2)),
where |0_L⟩ and |1_L⟩ denote the encoding states of logical |0⟩ and |1⟩, respectively.
We use the above fault-tolerant measurement method to prepare the auxiliary state |Θ⟩. If the measurement result is +1, it can be considered to have been prepared correctly; if it is -1, a fault-tolerant Z operation needs to be applied to change the state.
Next, we propose a proposition regarding the minimum CNOT-count used in the general fault-tolerant construction in Figure <ref> and Figure <ref>. The proposition is as follows:
The fault-tolerant construction of the single-qubit gate U rotating around the ẑ axis, as shown in Figure <ref>, requires the minimum CNOT-count. The preparation of the auxiliary state, as shown in Figure <ref>, also requires the minimum CNOT-count.
For U=e^iθ/2R_z(θ) that cannot be implemented transversally without requiring a fault-tolerant construction, auxiliary qubits are required. We use only m CNOT gates for transversal implementation to swap the auxiliary qubit |0⟩ with the data qubit |ψ⟩ when one logical qubit is encoded into m physical qubits, as shown in Figure <ref>.
This swap uses the minimum CNOT-count, since it is impossible to implement an interaction between two qubits using only single-qubit gates.
The swap operation in Figure <ref> only implements the exchange of data qubits to auxiliary qubits by measuring the original data qubits. In general, at least three CNOT gates are required when implementing the swap operation between two arbitrary quantum states, as shown in Figure <ref>.
Applying the relations UXU^†=R_z(2θ)X and (U⊗ I) · CNOT=CNOT · (U⊗ I), we can obtain the fault-tolerant construction of U in Figure <ref>.
Let |ψ⟩=a|0⟩+b|1⟩, then perform a fault-tolerant CNOT operation, giving
1/√(2)[|0⟩(a|0⟩+b|1⟩)+e^iθ|1⟩(a|1⟩+b|0⟩)] =1/√(2)[(a|0⟩+be^iθ|1⟩)|0⟩+(b|0⟩+ae^iθ|1⟩)|1⟩].
Finally, measure the second qubit. If the result is 0, the process is complete; otherwise, apply the UXU^† operation to the remaining qubits.
In Figure <ref>, the preparation of the auxiliary state must rely on the auxiliary qubits, and the preparation of the cat state already reaches the minimum CNOT-count using m-1 CNOT gates. According to Theorem <ref>, the fault-tolerant controlled-M'X can be decomposed into m CNOT gates for transversal implementation and other single-qubit gates, which reaches the minimum CNOT-count. Therefore, we use the minimum CNOT-count to implement U transversally.
From the above analysis, controlled-M must in theory be implementable transversally without requiring a fault-tolerant construction, where M=e^iθR_z(2θ). For R_k=e^iπ/2^kR_z(π/2^k-1), it is clear that for θ=π/2^k-1 with k > 3, this condition is not satisfied. In particular, when θ=π/4 with k = 3, then U=T, M=S, M'=ZS. According to Theorem <ref>, the controlled-M'X can be decomposed into one CNOT gate and two single-qubit gates A=R_z(3π/4)=e^-3iπ/8TS and B=R_z(-3π/4)=e^3iπ/8S^†T^†, where T^†=T^7, S^†=S^3. Fortunately, the standard set of universal gates {H, S, T, CNOT} can construct all physical quantum gates.
Therefore, combining Figure <ref> and Figure <ref>, at least 4m fault-tolerant CNOT gates are required to transversely implement the fault-tolerant T gate (without considering the encoding circuits).
§.§ Result analysis
We set the accuracy ϵ to the current maximum fault-tolerant accuracy 10^-2 <cit.> in FTQC and compute the minimum resources min n_sum for approximating R_z(π /2^k) and R_z(-π /2^k) for different values of k using Algorithm <ref>. According to Remark <ref>, the minimum T-count is 2min n_sum. The minimum T-count for approximating R_z(π /2^k) and R_z(-π /2^k) for different k are shown in Table <ref>.
In Table <ref>, when k≥ 9, the minimum T-count remains unchanged at 1972. This is because as k increases, R_z(π /2^k) and R_z(-π /2^k) gradually approach the identity matrix and become insufficiently sensitive to this accuracy. When k=2, the controlled-R_2 is the controlled-S. According to Theorem <ref>, it can be decomposed into one T gate, one T^† gate, and two CNOT gates. The T^† can be approximated using Algorithm <ref> and the minimum resources min n_sum are 206 with ϵ=10^-2, i.e., the minimum T-count is 412. Compared with T^†=T^7, only seven T gates are required with zero error. Therefore, when k=2, the minimum T-count for controlled-S is 8.
We analyze the minimum CNOT-count for the fault-tolerant QFT with different lengths. If the input length of the QFT is n, there are n(n-1)/2 controlled-R_k in Figure <ref>, including (n-1) controlled-R_2, (n-2) controlled-R_3, ⋯, two controlled-R_n-1, and one controlled-R_n. According to Proposition <ref>, the decomposition method for approximating the controlled-R_k with the minimum T-count is to decompose it into one R_z(π/2^k), one R_z(-π/2^k) and two CNOT gates, where the minimum T-count for approximating R_z(π /2^k) and R_z(-π /2^k) are shown in Table <ref>. At this point, the required fault-tolerant CNOT-count is also minimum. Additionally, the final stage of the QFT requires ⌊n/2⌋ swap gates, which can be implemented by 3·⌊n/2⌋ CNOT gates without any auxiliary qubits, according to Remark <ref>.
From section 4.1, we know that at least 4m fault-tolerant CNOT gates are required for the transversal fault-tolerant T gate, while the logical CNOT gate can be implemented transversally without requiring a fault-tolerant construction, using only m fault-tolerant CNOT gates. Therefore, the exact lower bound of CNOT gates (ELB_CNOT-count) for the fault-tolerant QFT with different input lengths n is
num(CNOT)=(n(n-1)/2· 2+⌊n/2⌋· 3)m+num(T)· 4m ,
where num(T) is the sum of the minimum T-count with k=2,3,⋯,n. When the accuracy is at most 10^-2, the ELB_CNOT-count for the fault-tolerant QFT with different n is shown in Figure <ref>.
The operation time of fault-tolerant CNOT gates is limited by inherent physical limitations. Especially, in ion trap quantum computers, the CNOT operation between two qubits utilizes collective excitation particles such as phonons to transmit interactions, which makes the operation efficiency limited by the propagation speed of media such as phonons. Moreover, the CNOT gates in ion traps can only be operated serially. Even if different CNOT gates involve different qubits, they cannot be operated in parallel. Therefore, CNOT gates significantly affect the operation time of quantum algorithms. In <cit.>, Yang et al. for the first time estimated the average lower bound of the time for a single physical CNOT operation in an ion trap quantum computer by analyzing the phonon speed, which is 2.85 × 10^-4 s.
In cryptographic systems, common input lengths include 64-bit, 128-bit, 256-bit, 512-bit, 1024-bit, 2048-bit, and even 4096-bit. We particularly compute their ELB_CNOT-count and estimate the lower bound of their time for the fault-tolerant QFT, as shown in Table <ref>.
In particular, we take m=7, corresponding to the famous Steane code <cit.>. For shorter input lengths n, such as in some lightweight cryptography, the time required to run the QFT on a quantum computer is very short, while for n=2048 and n=4096 the time is relatively long: 380.169 days and 1524.462 days, respectively, i.e., about more than one year and more than four years. This analysis of ELB_CNOT-count can provide a reference for the idea of active defense in a quantum setting.
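A small sketch of how Eq. (<ref>) yields these numbers (our own illustration: only the T-count values quoted in the text are hard-coded, 8 for k=2 and 1972 for k≥9 at ϵ=10^-2, while the k=3,…,8 entries are placeholders to be filled in from Table <ref>; the per-CNOT time 2.85×10^-4 s is the ion-trap estimate cited above):

```python
T_COUNT = {2: 8}                                  # controlled-S (see text)
T_COUNT.update({k: None for k in range(3, 9)})    # placeholders: fill from Table <ref>
T_COUNT_LARGE_K = 1972                            # unchanged for k >= 9 at eps = 1e-2

def elb_cnot(n, m):
    """num(CNOT) = (n(n-1)/2 * 2 + floor(n/2) * 3) * m + num(T) * 4m, Eq. (<ref>)."""
    num_t = 0
    for k in range(2, n + 1):
        t_k = T_COUNT.get(k, T_COUNT_LARGE_K)
        if t_k is None:
            raise ValueError(f"fill in the Table value for k={k}")
        num_t += (n + 1 - k) * t_k                # controlled-R_k occurs (n+1-k) times
    return (n * (n - 1) + 3 * (n // 2)) * m + 4 * m * num_t

def qft_time_days(n, m=7, t_cnot=2.85e-4):
    """Lower bound on the QFT execution time in days, serial CNOTs (Steane code: m=7)."""
    return elb_cnot(n, m) * t_cnot / 86400.0
```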
The search space of the minimum resources for approximating R_z(π /2^k) or R_z(-π /2^k) is 𝒪(N_0^3) for all k=3,⋯,n. In fact, as the accuracy decreases, it is challenging to determine an integer N_0 such that the minimum T-count for approximating them with given accuracy ϵ can be found within the search space 𝒪(N_0^3) in polynomial time. For example, given the integer N_0=2^11, we have verified that the minimum T-count cannot be found in the search space 𝒪(2^33) when the accuracy is ϵ=10^-3, which implies that the search space of the minimum T-count may be much larger than 𝒪(2^33). This entire process is actually very time-consuming and requires continuously attempting to determine the value of N_0 so that the minimum T-count is found for all Z-rotation operators.
§ DISCUSSION
As shown in Theorem <ref>, the Solovay–Kitaev theorem <cit.> states that any single-qubit gate can be approximated to an accuracy ϵ with 𝒪(log ^c(1/ϵ)) gates from the discrete universal set. This number of gates grows polylogarithmically with decreasing accuracy, which is probably acceptable for almost all practical applications. It has been proven in <cit.> that the value of c cannot be less than 1 and lies between 1 and 2, close to 2, though determining the best possible value remains an open problem. We believe that this problem is at least NP-hard. When the accuracy ϵ is small enough, there exist an accuracy ϵ_0 and a constant C' such that when ϵ<ϵ_0, Algorithm <ref> can be regarded as a function h(ϵ)≤ C'log ^c(1/ϵ), and the smaller ϵ is, the closer h(ϵ) is to C'log ^c(1/ϵ). Therefore, if ϵ_2<ϵ_1≪ϵ_0, then h(ϵ_2)/h(ϵ_1)≈(log (1/ϵ_2)/log (1/ϵ_1))^c, where h(ϵ_1),h(ϵ_2) can be computed with Algorithm <ref>. The K-SAT problem with four clauses can be reduced to the problem of determining a positive integer N_0 such that h(ϵ_1) and h(ϵ_2) exist for approximating U with accuracy ϵ_1 and ϵ_2, respectively, in the search space 𝒪(N_0^3), given a single-qubit unitary operator U; thereby this problem is at least as hard as the K-SAT problem. As a subroutine for computing the value of c, we believe that determining the best possible value is at least NP-hard.
The unique structure of the QFT circuit makes finding its optimal circuit relatively easy. However, equivalence checking is QMA-complete <cit.>, meaning that verifying whether two different circuits yield the same unitary transformation is QMA-complete. Quantum circuit optimization involves finding the optimal circuit within the equivalence class of circuits implementing a given unitary transformation. Therefore, circuit optimization problems, including but not limited to quantum algorithms based on the QFT, are at least QMA-hard, i.e., given a quantum circuit, it is challenging to find its optimal version. Specifically, optimizing the circuit for Shor's algorithm is at least QMA-hard. Focusing on optimizing the CNOT-count, optimizing quantum circuits for arithmetic operations such as modular addition, modular multiplication, and modular exponentiation, which are fundamental to quantum circuit decomposition, is a challenging task that typically requires continuous optimization during the design stage.
The order-finding and factoring problems based on the QFT provide evidence that quantum computers may be more powerful than classical computers, posing a credible challenge to the strong Church–Turing thesis. Our work can introduce new connotations for quantum adversaries. From a practical point of view, if efficient algorithms based on the QFT can be implemented within a meaningful time frame on a quantum computer, then they can be used to break some cryptosystems such as RSA. Otherwise, the development of quantum computers is unlikely to threaten the security of those classical cryptosystems.
§ CONCLUSION
In this paper, we study the exact lower bound problem of CNOT gate complexity for the fault-tolerant QFT. We first analyze the complexity of ELBP_T-count and show that this problem is NP-complete. When the ancilla-free controlled-R_k is decomposed in different ways, it generates different single-qubit gates in addition to CNOT gates. As a generalization, we propose an algorithm to exactly compute the minimum T-count for approximating any single-qubit gate with any given accuracy. This algorithm combines numerical and analytical methods to avoid traversing all quantum states in the normalized space and to exactly determine whether the error conditions are satisfied. We then provide a property of the proposed algorithm to analyze the minimum resources, aiming to show the NP-completeness of ELBP_T-count and to compute the ELB_CNOT-count for the fault-tolerant QFT.
Due to the NP-completeness of ELBP_T-count, it would appear that computing the ELB_CNOT-count for the fault-tolerant QFT with any given accuracy is intractable. Nevertheless, we can still compute the ELB_CNOT-count under partial fault-tolerant accuracy. We have proved that the transversal implementation of universal quantum gates reaches the minimum CNOT-count. Furthermore, we approximate the Z-rotation gates after decomposing the ancilla-free controlled-R_k with the current maximum fault-tolerant accuracy 10^-2 and provide the ELB_CNOT-count with different input lengths. In particular, we estimate the lower bound of the effective execution time for the QFT based on Steane code on ion trap computers. For shorter input lengths n, such as some lightweight cryptography, the time required to operate the QFT on an ion trap computer is very short. When n=2048 and n=4096, the time required is relatively long compared to lightweight cryptography, which is 380.169 days and 1524.462 days respectively.
Finally, we discuss that determining the best possible value of c in 𝒪(log^c(1/ϵ)) implied by the Solovay–Kitaev theorem is at least an NP-hard problem, and the circuit optimization problem, such as Shor's algorithm, is at least a QMA-hard problem. Our work can introduce new connotations for quantum adversaries and provide a reference for the idea of active defense based on the QFT.
§ ACKNOWLEDGEMENT
This work was supported by the Beijing Natural Science Foundation (Grant No.4234084).
entry_id: http://arxiv.org/abs/2409.02090v1
published: 2024-09-03 17:41:13
title: The Accelerating Decline of the Mass Transfer Rate in the Recurrent Nova T Pyxidis
authors: P. Godon, E. M. Sion, R. E. Williams, M. J. Darnley, J. L. Sokoloski, S. S. Lawrence
primary_category: astro-ph.SR
categories: astro-ph.SR, astro-ph.HE
Patrick Godon ([email protected]; ORCID 0000-0002-4806-5319)
Department of Physics and Planetary Science, Villanova University, Villanova, PA 19085, USA
Edward M. Sion (ORCID 0000-0003-4440-0551)
Department of Physics and Planetary Science, Villanova University, Villanova, PA 19085, USA
Robert E. Williams (ORCID 0000-0002-3742-8460)
Space Telescope Science Institute, Baltimore, MD 21218, USA
Matthew J. Darnley (ORCID 0000-0003-0156-3377)
Astrophysics Research Institute, Liverpool John Moores University, IC2 Liverpool Science Park, Liverpool, L3 5RF, UK
Jennifer L. Sokoloski (ORCID 0000-0002-8286-8094)
Columbia Astrophysics Laboratory and Department of Physics, Columbia University, New York, NY 10027, USA
Stephen S. Lawrence (ORCID 0000-0002-7491-7052)
Department of Physics and Astronomy, Hofstra University, Hempstead, NY 11549, USA
§ ABSTRACT
The recurrent nova T Pyxidis has erupted six times since 1890, with its last outburst in 2011,
and the relatively short recurrence time between classical nova explosions indicates
that T Pyx must have a massive white dwarf accreting at a high rate.
It is believed that, since its outburst in 1890, the mass transfer rate in T Pyx was very large
due to a feedback loop where the secondary is heated by the hot white dwarf.
The feedback loop has been slowly shutting off, reducing the mass transfer rate,
and thereby explaining the magnitude decline of T Pyx from ∼13.8 (before 1890)
to 15.7 just before the 2011 eruption.
We present an analysis of the latest Hubble Space Telescope (HST) far ultraviolet and optical spectra,
obtained 12 years after the 2011 outburst, showing that the mass transfer rate has
been steadily declining and is now below its pre-outburst level by about 40%:
Ṁ∼ 1-3× 10^-7M_⊙/yr
for a WD mass of ∼ 1.0-1.4 M_⊙, an inclination of
50^∘ - 60^∘,
reddening E(B-V)=0.30 ± 0.05 and a Gaia DR3 distance of
2860^+816_-471 pc.
This steady decrease in the mass transfer rate in the ∼decade after the
2011 outburst is in sharp contrast with the more constant pre-outburst
UV continuum flux level from archival International Ultraviolet Explorer (IUE) spectra.
The flux (i.e., Ṁ) decline rate in the last ∼decade is 29 times faster
than that observed from 1890 to ∼2010.
The shut-off of the feedback loop seems to be accelerating, at least in the decade following
its 2011 outburst. In all eventualities, our analysis confirms that T Pyx is going through
an unusually peculiar short-lived phase.
§ INTRODUCTION
Cataclysmic Variables (CVs) are short period interacting binaries
where a white dwarf star (WD) accretes matter from its companion star
(the donor) filling its Roche lobe. The transfer of material can
be continuous (as for UX UMa novalikes), sporadic (as for VY Scl novalikes
and some dwarf nova systems), or almost periodic (as for many dwarf novae)
and translates into a change in luminosity on time scales of days
to months or even years <cit.>.
Over time (years to millennia),
the accreting WDs in CVs accumulate a layer of hydrogen-rich
material which, when the layer has reached a critical mass,
provides enough temperature and
pressure at its base to initiate a thermonuclear
runaway (TNR): the classical nova explosion <cit.>.
The larger the WD mass and the higher the mass accretion rate onto it,
the shorter the recurring time between such TNR nova explosions
<cit.>.
CVs that have suffered a classical nova explosion
are called novae, and those that have experienced more than one
nova explosion are referred to as recurrent novae (RNe;
for a review on classical novae - see <cit.>).
While mass accumulates onto the white dwarf during quiescence between recurring
nova explosions, mass is also ejected during the nova explosions themselves, and the question of whether the
white dwarf mass increases or decreases over its lifetime is still a matter of debate
<cit.>.
As a consequence, recurrent novae are potential progenitors
of Type Ia Supernovae (SNe Ia) as their WD may grow in mass and
reach the Chandrasekhar limit for a supernova explosion
<cit.>.
As such, accreting WDs in CVs are the site of
some of the most violent eruptions in the Galaxy, exhibiting large
luminosity changes on time-scales of ∼days to millennia.
T Pyxidis is a CV that has had six nova eruptions since 1890:
in 1902, 1920, 1944, and 1967, with the last outburst in 2011
<cit.>.
Because of this, T Pyx is one of the most-studied RNe;
it has also become one of the most enigmatic RNe,
and certainly the most famous RN in the Milky Way.
T Pyx is one of the three known short orbital period RNe
(together with IM Nor and CI Aql);
it is the only RN with a nova shell <cit.>,
and its rise to outburst is characterized as slow
<cit.>.
The expansion of the shell is believed to have originated from a normal
classical nova eruption around the year 1866 <cit.>.
Its relatively short (and increasing) recurrence times
(12, 18, 24, 23, and 44 years) indicate, on theoretical grounds
<cit.>, that its WD must be massive
and accreting at a high rate
<cit.>.
And indeed, optical and ultraviolet (UV) analyses <cit.> derived
a mass transfer rate (disk luminosity) anywhere between 10^-6 and 10^-8M_⊙/yr
(depending on the assumed WD mass, distance, reddening, and inclination).
However, with an orbital period of 1.83 hr, the mass transfer
rate (due to angular momentum loss by gravitational radiation)
should be very low, of the order 2 × 10^-11M_⊙/yr,
as is the case for CV systems with an orbital period
below 2 hr <cit.>.
In order to explain the mass transfer/accretion[
Note that we are neglecting here outflow from the disk and WD and
use the term `mass transfer rate' when considering the disk
(or the Roche lobe overflow of the secondary), and `mass accretion
rate' when considering the accretion disk and WD, assuming that they are nearly equal:
we use Ṁ for both. It is understood that the mass accretion
rate might be slightly smaller than the mass transfer rate due to
possible outflow.
] rate discrepancy of T Pyx and other
novae, several theories have been advanced.
<cit.> suggested that novae hibernate for millennia between
eruptions to explain their (very low) space density in the
solar neighborhood and justify the fact that old novae have
low Ṁ while recent novae have a higher mass accretion.
During a nova eruption, mass loss dominates, increasing the binary
separation and Roche lobe radius. As a consequence, the secondary loses contact
with the inflated Roche lobe and mass transfer basically stops after the eruption and
after irradiation from the cooling WD becomes negligible. This explains the
high Ṁ after the eruption and its decline thereafter, up to the point
where hibernation starts (Ṁ < 10^-12M_⊙/yr), lasting thousands of years,
during which the binary separation decreases slowly due to angular momentum loss from
magnetic braking (above the gap) or gravitational radiation (below the gap).
<cit.> suggested that in this manner
most novae spend 90-99% of their lives as detached binaries.
In this scenario, the high mass transfer rate would have been sustained
by the irradiation of the secondary by the white dwarf, itself heated due to
accretion <cit.>.
Such a self-sustained feedback loop process would have been triggered
during a classical nova eruption in 1866 <cit.>, where the high mass
accretion rate would occur with nuclear burning on the WD surface
(self-sustained supersoft source).
However, it has been shown <cit.>
that the B-magnitude of T Pyx has been steadily decreasing
from B=13.8 before the 1890 eruption to B=15.7 just before the 2011 eruption,
indicating that the self-sustained feedback loop between the WD
and secondary might be shutting off, in agreement with the hibernation theory.
It has also been proposed <cit.> that the high mass transfer rate in T Pyx
could be the result of the evolution of a triple star system, where the inner binary (WD + donor star)
would become so eccentric that mass transfer is triggered at periastron, driving
the secondary out of thermal equilibrium.
<cit.> showed that with a mass transfer rate of ∼ 10^-7M_⊙/yr and a nova ejecta mass
of 3 × 10^-5M_⊙ (6.7 times larger than the accreted mass between novae),
the present series of nova eruptions is eroding the WD, and the secondary
will evaporate in 10^5 yr, unless the recurrent nova eruptions are short-lived.
All these analyses agree that T Pyx must be going through a very unusual
and short-lived phase in its life
<cit.>.
In the current work, we present an analysis of the latest
Hubble Space Telescope (HST) UV and optical spectra from
March 2023. The UV spectrum was obtained with
the Cosmic Origin Spectrograph (COS), while the optical spectrum was
obtained using the Space Telescope Imaging Spectrograph (STIS).
This is the first combined optical (STIS) and FUV (COS)
spectroscopic observation of T Pyx during the deep quiescent phase
to model the accretion disk: the inner disk radiates mainly in the UV,
while the outer disk radiates mainly in the optical.
The results of our analysis indicate that the mass accretion rate is still decreasing
compared to the HST data from 2015-2016 <cit.> and 2012-2013 <cit.>,
and it has now reached a level that is 40% below its pre-outburst IUE value.
Such a steady decrease in Ṁ is unexpected, since
all the IUE spectra obtained through
the 90's have the same flux level as the 1980 IUE spectrum
and show no drop in flux (except for orbital variation).
This could indicate that the decrease in the mass transfer rate started to
accelerate after the 2011 outburst.
In the next section we discuss the system parameters adopted in the
present work; in § 3 we present the latest HST data together with archival
data for our analysis; the tools we used and the results obtained are
presented in § 4, followed by a discussion and summary in the last section.
§ SYSTEM PARAMETERS
In our previous analysis of T Pyx <cit.> we analyzed HST COS UV spectra
obtained in October 2015 and June 2016 and investigated the effect of the assumed
WD mass (0.7 M_⊙≤ M_ wd≤ 1.35 M_⊙), reddening (0.25 ≤ E(B-V) ≤ 0.50),
distance (2.8 kpc ≤ d ≤ 4.8 kpc), and inclination (20^∘≤ i ≤ 60^∘)
on the results (Ṁ). Therefore, we will not repeat this in the current work.
Instead, and unless otherwise indicated, we assume here a large WD mass (M_ wd = 1.00-1.37 M_⊙),
a reddening of E(B-V)=0.30±0.05,
a Gaia DR3 parallax-derived distance of 2860^+816_-471 pc,
and an inclination i=50^∘-60^∘. Here below we justify our choice.
The values of the system parameters we use for the analysis are listed in Table 1.
Taking the latest DR3 Gaia parallax to the system and following <cit.>, we compute a distance
of 2860^+816_-471 pc, which is smaller than the distance originally
derived from the light echo <cit.> and
the distance we used in our previous spectral analysis based on the DR2 Gaia parallax
<cit.>.
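For orientation only (a naive sketch we add here; <cit.> instead use a geometric distance prior, which is what produces the asymmetric 2860^+816_-471 pc interval), directly inverting the DR3 parallax gives a consistent central value:

```python
plx, sigma = 0.34674e-3, 0.0287e-3      # Gaia DR3 parallax and error (arcsec)
d = 1.0 / plx                           # ~2884 pc, close to the adopted 2860 pc
d_lo, d_hi = 1.0 / (plx + sigma), 1.0 / (plx - sigma)
print(f"d = {d:.0f} pc ({d_lo:.0f}-{d_hi:.0f} pc, naive 1-sigma inversion)")
```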
Using recent data from the Multi Unit Spectroscopic Explorer (MUSE)
from the European Southern Observatory (ESO) in Chile, <cit.> characterize
the morphology of the ejecta surrounding the system.
They found that the expelled material consists of a ring of matter together with a
bipolar outflow perpendicular to the ring.
The remnant is inclined at i=63.7^∘ to the line of sight
and is expanding at a velocity of 472^+77_-72 km/s.
They put an upper limit
to the bipolar outflow ejecta mass, M_ ej,b < (3 ± 1) × 10^-6M_⊙,
which is lower than previous estimates.
It is believed that the bipolar outflow originated from the 2011 outburst
(since it wasn't observed before, and was first observed by HST in 2014).
Consequently, we consider here an inclination i≈ 50-60^∘
<cit.>, to account for the large-amplitude (∼ 20%) optical and
UV modulation in the continuum flux level as a function of the orbital phase,
and to agree with the analysis of <cit.>.
As to the WD mass, on the one hand, based on the short recurrence time of T Pyx outbursts (of the order of 20 yr or so),
the theory predicts <cit.> that
the WD in T Pyx must be very massive
<cit.>
accreting at a very large rate.
On the other hand, X-ray observational evidence tends to point to a lower
mass of the order of 1.00 - 1.15 M_⊙ <cit.>.
Accordingly, in the current analysis we assume WD masses
M_ wd=1.0, 1.2, and 1.37 M_⊙,
and we disregard the low WD mass (0.7 M_⊙) derived by <cit.>, since it was retracted <cit.>.
This is in line with <cit.> who showed that
extensive simulations of nova eruptions combined together with observational
databases of outburst characteristics of Galactic classical novae and recurrent novae
yield for T Pyx a WD mass of 1.23 M_⊙ (± 0.1 M_⊙ or so) with a mass accretion rate of
6.3 × 10^-8M_⊙/yr (but with no error estimate given on Ṁ)
for the 44 year inter-outburst period between 1967 and 2011.
For the reddening we limit ourselves to the value we derived previously in <cit.>.
We must stress that the uncertainties in the values of the system parameters
(WD mass, distance, inclination, extinction, chemical abundances, etc.;
which are used as input for the analysis) are much larger
than the errors in the analysis results that depend on them.
Table 1: T Pyxidis System Parameters

Parameter      P_orb (hr)  i (deg)  Π Gaia (mas)     d (pc)           E(B-V)     M_wd (M_⊙)
Adopted Value  1.8295      50-60    0.34674±0.0287   2860^+816_-471   0.30±0.05  1.0, 1.2, 1.37

Unless otherwise specified, these are the values of the system parameters used
in the present analysis of the 2023 HST spectra (see text for details).
§ THE DATA
In this research we analyze the most recent HST UV-optical spectral data,
obtained in 2023. For comparison, and to complement the analysis, we also
present the HST UV data we obtained in 2018-2019, HST UV data from our previous analyses
(2012, 2013, 2015, 2016), IUE pre-outburst data,
and some never-published HST optical data obtained in 2014.
Since the IUE data and our previous HST UV data were already presented in
<cit.>, we tabulate here only the data that weren't presented elsewhere:
COS UV data from Oct 2018, Feb 2019, March 2023, STIS optical data
from March 2023, and STIS optical data obtained in 2014 (PI A. Crotts)
which were never published. All the data are listed in Table 2.
These observations were obtained with four different instrument configurations
as follows.
(1) The COS instrument (FUV MAMA, TIME-TAG mode) was set up with the PSA aperture and the G130M grating
with a central wavelength of 1055 Å, producing a spectrum starting at 925 Å
all the way to 1200 Å, with a small gap near 1050 Å (and therefore covering all
the series of the hydrogen Lyman transitions, except Lyα).
(2) The COS instrument (FUV MAMA, TIME-TAG mode) was set up with the PSA aperture and the G140L grating
with a central wavelength of 1105 Å, producing a spectrum from 1100 Å
to ∼2100 Å, covering the H Lyα absorption feature.
(3) The STIS instrument (CCD, ACCUM mode) was set up with the G430L grating centered at 4300 Å,
generating a spectrum from ∼3,000 Å to ∼5,700 Å.
(4) The STIS instrument (CCD, ACCUM mode) was set up with the G750L grating centered at 7751 Å,
generating a spectrum from ∼5,250 Å to almost 10,000 Å, thereby covering
the optical and near infrared region.
The COS data were processed with CALCOS version 3.4.4 and the
STIS data were processed with CALSTIS version 3.4.2.
We used the x1d and sx1 files to extract the 1D spectra from each individual
exposure, and used the x1dsum files to extract spectra from
co-added exposures (such as for the COS data obtained at 4 different
positions of the detector).
§.§ The 2023 HST COS FUV and STIS Optical Data.
The 2023 data consist of one of each instrument configuration above and were all obtained concurrently,
the same day, March 24th, 2023, between about midnight and 11 am - see Table 2.
Namely the 2023 data cover the FUV, UV, optical, and NIR, and produce the only concurrent UV-optical-NIR
spectra of T Pyx from ∼900 Å to ∼10,000 Å (with a gap between 2000 Å and 3000 Å).
These 4 concurrent UV-optical spectra are of special importance, since they are the only ones obtained
concurrently after the 2011 outburst and during deep quiescence, when all the emission is from
the accretion disk. These four 2023 spectra are the focus of the present analysis and are
modeled in 4 with an accretion disk.
We present these four spectra in Figs.<ref>, <ref>, <ref>,
and <ref>, in order of increasing wavelength.
Table 2: Observation Log

Instrument  Aperture  Grating  Central λ (Å)  Date (yyyy-mm-dd)  Time (hh:mm:ss)  ExpTime (s)  Data ID  MODE  Project ID
STIS 52x0.1 G430L 4300 2023-03-24 08:18:47 1699 OEWH02010 ACCUM 17190
STIS 52x0.1 G750L 7751 2023-03-24 10:26:35 2340 OEWH02020 ACCUM 17190
COS PSA G140L 1105 2023-03-24 00:19:41 1912 LEWH01010 TIME-TAG 17190
COS PSA G130M 1055 2023-03-24 01:46:13 2405 LEWH01020 TIME-TAG 17190
COS PSA G130M 1055 2019-02-01 12:58:11 1836 LDG002010 TIME-TAG 15184
COS PSA G140L 1105 2018-10-04 00:30:11 1876 LDG001010 TIME-TAG 15184
STIS 52x2 G430L 4300 2014-07-21 02:29:59 378 OCIQ02010 ACCUM 13796
02:38:11 378 OCIQ02020 ACCUM 13796
02:46:23 378 OCIQ02030 ACCUM 13796
02:54:35 378 OCIQ02040 ACCUM 13796
03:51:04 558 OCIQ02050 ACCUM 13796
04:04:42 558 OCIQ02060 ACCUM 13796
04:15:54 558 OCIQ02070 ACCUM 13796
04:27:06 558 OCIQ02080 ACCUM 13796
05:26:37 552 OCIQ02090 ACCUM 13796
05:40:35 552 OCIQ020A0 ACCUM 13796
05:51:41 552 OCIQ020B0 ACCUM 13796
06:02:47 552 OCIQ020C0 ACCUM 13796
07:08:57 544 OCIQ020D0 ACCUM 13796
07:19:55 544 OCIQ020E0 ACCUM 13796
07:30:53 544 OCIQ020F0 ACCUM 13796
08:40:09 544 OCIQ020G0 ACCUM 13796
STIS 52x2 G750L 7751 2014-07-23 21:23:16 373 OCIQ03010 ACCUM 13796
21:31:24 373 OCIQ03020 ACCUM 13796
21:39:32 373 OCIQ03030 ACCUM 13796
21:47:40 373 OCIQ03040 ACCUM 13796
22:44:01 558 OCIQ03050 ACCUM 13796
22:57:39 558 OCIQ03060 ACCUM 13796
23:08:51 558 OCIQ03070 ACCUM 13796
23:20:03 558 OCIQ03080 ACCUM 13796
2014-07-24 00:19:33 552 OCIQ03090 ACCUM 13796
00:33:31 552 OCIQ030A0 ACCUM 13796
00:44:37 552 OCIQ030B0 ACCUM 13796
00:55:43 552 OCIQ030C0 ACCUM 13796
01:55:53 543 OCIQ030D0 ACCUM 13796
02:06:51 543 OCIQ030E0 ACCUM 13796
02:17:49 543 OCIQ030F0 ACCUM 13796
02:28:47 543 OCIQ030G0 ACCUM 13796
The time (hh:mm:ss) is the start time for each exposure.
All the data presented were obtained from the Mikulski Archive for Space Telescopes (MAST) at the
Space Telescope Science Institute, Baltimore, MD, USA. The specific spectral data listed
above can be accessed via
https://doi.org/10.17909/2d6h-qy95.
COS Data were processed through the pipelines with CALCOS version 3.4.4.
STIS Data were processed through the pipelines with CALSTIS version 3.4.2.
The short wavelength COS (FUV) spectrum is displayed in Fig.<ref> on four
panels; it is very noisy below 1090 Å (first/upper panel).
All the absorption lines are from the interstellar medium (ISM), dominated mainly by molecular hydrogen
(H_2), with some C i, and Fe ii lines. The identification of the H_2 molecular lines
by their band, upper vibrational level, and rotational transition can be found e.g. in <cit.>.
The rather flat shape of the continuum flux level indicates that the emitting source is hot
and is consistent with the inner part of the accretion disk.
The long wavelength COS (UV) spectrum is displayed in Fig.<ref>,
also on four panels.
Except for the N v doublet (which is blue shifted by ∼6 Å),
all the absorption lines are from the ISM. Due to the relatively lower continuum
flux level during deep quiescence, the S/N is not high enough to detect all the
ISM lines which were observed in the early phase following the outburst
by <cit.>.
Each COS spectrum is generated from the sum of 4 subexposures, each obtained
at a different location on the detector.
We checked the 4 subexposures of each of the two 2023 COS spectra and did not
find, within the amplitude of the noise/error, any variation in the width, depth, and
wavelength of the absorption lines that could reveal orbital modulation,
even for the N v doublet.
This, however, is likely an indication that the
subexposures are too noisy to extract any significant information.
The rest wavelength of the N v doublet lines are 1238.821 Å & 1242.804 Å<cit.>,
and to within ±0.1Å the observed wavelengths in the 4 subexposures
are 1232.9 Å & 1237.0 Å, 1233.3 Å & 1237.1 Å,
1233.1 Å & 1237.0 Å, and 1233.0 Å & 1237.1 Å.
This gives an average blue shift of ∼5.7±0.2 Å,
which at 1240 Å corresponds to a velocity of 1,384±49 km/s.
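This velocity follows directly from the non-relativistic Doppler relation v = c Δλ/λ; a minimal sketch using the numbers above:

c = 2.998e5                              # speed of light, km/s
lam0 = (1238.821 + 1242.804) / 2.0       # mean rest wavelength of the doublet, Angstroms
dlam, dlam_err = 5.7, 0.2                # mean observed blueshift, Angstroms

v = c * dlam / lam0
v_err = c * dlam_err / lam0
print(f"v = {v:.0f} +/- {v_err:.0f} km/s")
# ~1,377 +/- 48 km/s, consistent with the quoted 1,384±49 km/s to within
# rounding of the adopted reference wavelength.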
The STIS optical-NIR spectra are presented in Figs.<ref> & <ref>.
Contrary to the previous optical spectra, these two spectra are no longer dominated
by nebular emission;
they exhibit absorption and emission lines from hydrogen and helium,
and we tentatively identify some weak emission lines from
C iii (4650 Å) and
[Fe vii] (5168 Å).
§.§ Earlier Archival UV and Optical Data
The existing pre-outburst UV archival data of T Pyxidis
consist of more than 50 IUE SWP+LWP spectra from 1980
(∼13 years after the Dec 1966/Jan 1967 outburst)
through the 90's and one Galex (FUV + NUV) spectrum taken at
the end of 2005. The pre-outburst data reveal a remarkably constant UV continuum
flux level, except for an orbital phase modulation.
The HST COS & STIS UV spectra, all obtained post-outburst,
follow the decline of the system into its quiescent state,
starting May 2011. By December 2012, the strong broad emission lines
had disappeared and the UV continuum flux level
had reached its pre-outburst (IUE) level, after what
the UV flux continued to decrease, but more slowly <cit.>.
We expected the UV decline would have reached a plateau by 2023,
mimicking the post-outburst IUE Data. However, since Oct 2018
the UV flux has further dropped by ∼20% and is now about 40%
below its IUE pre-outburst level - see Fig.<ref>.
Even the C iv (1550) and He ii (1640) emission lines,
which were prominent in the IUE pre-outburst and HST post-outburst spectra,
are now much reduced: the intensity of the
C iv line in 2023 is ∼1/6 of what it
was in 2018, and that of the He ii line is ∼1/3.
Note that the COS G130M 1055 spectra (Fig.<ref>a)
are extremely noisy below ∼1090 Å (as seen already in
Fig.<ref>).
The 1150-1200 Å region of the 2023 COS G130M (1055)
(red spectrum in Fig.<ref>a)
doesn't match the 2023 COS G140L 1105 spectrum
(also red in Fig.<ref>b).
A similar discrepancy in fluxes was apparent in the October 2015 COS data
of T Pyx <cit.>
between the two configurations (G140L/1105 vs. G130M/1055),
and, while some of the discrepancy could be attributed to orbital
modulation, it is mainly due to calibration errors (edges of the detectors).
The two Si ii lines (1190.4 & 1193.3 Å) are clearly seen in the
G130M (1055) spectra (and even in the IUE spectrum;
right edge of panel (a) of Fig.<ref>)
but they are absent in the G140L (1105) spectra
(blue and red spectra; left edge of panel (b) in Fig.<ref>).
As T Pyxidis erupted in 2011, it became the target of several observing campaigns
and many optical spectra were obtained with HST/STIS.
The latest HST optical spectra of T Pyx collected before our current HST 2023 observation
are from July 2014: OCIQ020 (STIS G430L) and OCIQ030 (STIS G750L),
made of 16 exposures each (see Table 2).
All the optical STIS spectra following the outburst and through July 2014 reveal the presence of nebular
emission lines.
We extracted the 32 1D spectra from the July 2014 STIS data (as listed in
Table 2) and co-added the 16 exposures of each of the OCIQ020 and OCIQ030 sets,
weighting them by exposure time. We then combined the OCIQ020 and
OCIQ030 spectra together.
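A minimal sketch of the exposure-time-weighted co-addition described here, assuming the exposures have already been resampled onto a common wavelength grid (the array name g430l_fluxes is hypothetical):

import numpy as np

def coadd_weighted(fluxes, exptimes):
    # Exposure-time-weighted mean of spectra on a common wavelength grid.
    # fluxes   : (n_exposures, n_wavelengths) array
    # exptimes : (n_exposures,) array of exposure times in seconds
    return np.average(np.asarray(fluxes), axis=0,
                      weights=np.asarray(exptimes, dtype=float))

# e.g., the 16 G430L exposures of the OCIQ020 set (exposure times from Table 2):
# coadded = coadd_weighted(g430l_fluxes, [378]*4 + [558]*4 + [552]*4 + [544]*4)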
A comparison with the 2014 optical STIS spectra (Fig.<ref>) reveals that the optical
continuum flux level (near ∼4,500 Å) has now dropped by ∼ 38% and the nebular emission
lines have almost all disappeared.
In the very long wavelength range (near ∼8,000 Å), corresponding to the NIR, the
continuum flux level has dropped by ∼35%.
Note that the 2014 and 2023 spectra displayed in Fig.<ref> were
generated by co-adding the individual exposures weighted by the exposure
time for each of the STIS configurations G430L and G750L.
Since the G430L and G750L exposures cover less than
the binary orbital period, their continuum flux levels did not match perfectly.
This is most apparent in the 2023 spectra which have shorter exposure times
(∼ 2000s) than the 2014 spectra (totalling more than 8,000s, but covering
only 80% of the binary orbital period due to the timing of the exposures).
For the 2014 spectrum, the G750L segment has to be scaled down by ∼1%
to match the G430L segment;
for the 2023 spectrum, the G750L segment has to be scaled up by 7%
to match the G430L segment.
§ ANALYSIS
§.§ Variability
It has been well documented <cit.> that the (B-band) magnitude
of T Pyx has been steadily decreasing, from B=13.8
in 1890 to B=15.7 just before the 2011 eruption.
In recent years, as T Pyx returned to quiescence following the 2011 outburst,
it has gradually faded from 15.8 to 16.1 <cit.>.
In order to check the behavior of T Pyx in the UV, we generated a UV
light curve of the system <cit.> using archival UV spectra from IUE
(from 1980 to 1996), Galex <cit.>,
and HST STIS & COS UV spectra (following the 2011 outburst).
We display in Fig.<ref> an updated UV light curve using
the latest HST spectra (from 2023 and 2018).
All the data points were obtained by integrating the UV
spectral flux between 1400 Å and 1700 Å (excluding emission and absorption lines).
The IUE data do not reveal a decrease in the continuum flux level
between 1980 and 1996, and clearly show a modulation
Δ of up to 18% (i.e. ±9%) in the UV continuum flux level.
Unfortunately, the single IUE exposures all had durations of
∼1 to ∼2 times the binary orbital period, so a phase-resolved UV light
curve could not be generated.
However, we attribute this modulation to the orbital motion of the binary.
Since the UV light curve shown in the figure results from an integration
over a large spectral range, the flux error is much reduced and completely
negligible in comparison to the uncertainty due to the orbital
modulation.
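For reference, each light-curve point can be produced along these lines (a sketch assuming wave and flux are NumPy arrays for a single spectrum; the line-mask interval shown is illustrative only):

import numpy as np

def lightcurve_point(wave, flux, w1=1400.0, w2=1700.0, line_masks=()):
    # Mean continuum flux in [w1, w2] (Angstroms), excluding the wavelength
    # intervals listed in line_masks (emission/absorption lines).
    sel = (wave >= w1) & (wave <= w2)
    for lo, hi in line_masks:
        sel &= ~((wave >= lo) & (wave <= hi))
    return flux[sel].mean()

# e.g., masking a window around C iv 1550 (interval chosen for illustration):
# point = lightcurve_point(wave, flux, line_masks=[(1535.0, 1565.0)])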
The HST data points also reveal orbital modulation, but to a lesser
extent since not every orbital phase was covered.
Following the 2018 HST observation, we expected the UV flux to reach a plateau,
but the 2023 data point shows a further drop of 20% compared to the
2018 data. While we cannot rule out that
this could be due, in part, to orbital modulation, the UV light curve after
the 2011 outburst definitely exhibits a trend consistent with a steady decline,
in sharp contrast with the pre-outburst light curve.
As for most novae and recurrent novae, many more HST observations were
carried out during outburst and decline from outburst than during deep quiescence,
and except for our current 2023 STIS optical spectrum,
all the HST optical spectra were obtained between 2011 and 2014,
while the system still showed nebular emission.
Among these HST optical spectra we selected the STIS datasets OCIQ020
(G430L) and OCIQ030 (G750L), each with 16 exposures (see Table 2) from July 2014:
the emission lines of forbidden transitions forming in the nebular material
are still present (see Fig.<ref>).
These two datasets were obtained only two days apart, and while they cover
different spectral wavelength regions, they overlap between 5245 Å and 5690 Å.
We therefore integrated the flux of the 32 exposures between
5285 Å and 5655 Å (excluding emission lines) to compute the average continuum flux level
in that wavelength region, which corresponds to the yellow-green color “chartreuse”.
In Fig.<ref> we present the chartreuse light curve folded at the
orbital phase which clearly reveals the orbital modulation
of the continuum flux level with an amplitude of ± 8%, similar to the UV data.
The flux is minimum near phase 0.9, where the L1 stream hits the rim of the disk,
indicating that the disk edge might be swollen and partially occulting the disk
<cit.>.
The orbital phase values were computed using the post-outburst ephemerides
provided by <cit.> which takes into account the
period change of the system, and taking the mid-value of the
observation times of each exposure listed in Table 2
(namely we added half the exposure time to the starting time).
The flux error on the integrated wavelength region is of the
order of 5 × 10^-18 erg/s/cm^2/Å;
the error on the orbital phase is taken as (half) the exposure time listed in
Table 2 for each exposure (namely 378/2 s for the first 4 exposures, 558/2 s
for the next 4, etc.).
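A sketch of the phase computation, assuming a quadratic ephemeris linearized to first order in the period derivative; the epoch t0_bjd and derivative pdot are placeholders for the values of the cited post-outburst ephemerides:

import numpy as np

def orbital_phase(t_mid_bjd, t0_bjd, p0_days, pdot=0.0):
    # t_mid_bjd : mid-exposure times (start time + half the exposure time), BJD
    # t0_bjd    : epoch of phase zero -- placeholder for the cited ephemeris
    # p0_days   : orbital period at t0 (1.8295 hr = 0.07622917 d)
    # pdot      : dimensionless period derivative dP/dt -- placeholder value
    x = (np.asarray(t_mid_bjd) - t0_bjd) / p0_days
    cycles = x - 0.5 * pdot * x**2   # first-order correction for the period change
    return np.mod(cycles, 1.0)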
§.§ The Spectral Slope
In <cit.>, we used archival optical and NUV (IUE and Galex) spectra
to supplement our HST FUV post-outburst
spectra in our accretion disk modeling and
found that the slope of the continuum flux level is flatter in the
optical than in the UV. However, these pre-outburst archival optical and NUV data were
not obtained concurrently with the same telescope, and the optical data came from
ground-based telescopes and were digitally extracted from graphs.
Therefore, we decided to carry out a new assessment of the slope of the spectrum
using an updated and improved dereddening law and
using UV and optical data obtained the same day with HST: the 2023 HST STIS and COS spectra.
In the FUV, instead of using the extinction law of <cit.>,
we used the standard curve of <cit.>, which gives a smaller
correction in the FUV (and therefore a shallower slope in the FUV).
This is in line with the analysis of T Pyx by <cit.>, based on the work
of <cit.> who showed that in the FUV the observed extinction
curve is consistent with an extrapolation of the standard extinction curve
of <cit.>.
In Fig.<ref> we display the combined 2023 (UV+Optical+NIR) spectrum
of T Pyx dereddened assuming E(B-V)=0.25, 0.30, and 0.35 (the values we adopted)
on a log-log scale of the flux F_λ (in erg/s/cm^2/Å) vs wavelength λ
(in Å).
The steepest (continuum) spectral slope in the UV (at wavelengths longer than
the Lyα region, λ > 1300 Å)
is obtained for a dereddening of E(B-V)=0.35
and has a value α=-2.76, while the flattest optical
(λ∼ 3000-6000 Å) slope is obtained
for a dereddening of E(B-V)=0.25 and gives α=-2.99, steeper than
in the UV. At longer wavelengths (NIR, λ∼7000-10,000 Å)
the slope steepens even more (< -3).
Namely, we find that the spectral slope steepens with
increasing wavelength, thereby confirming the findings of
<cit.>. This finding is valid for the values of
E(B-V) we (and <cit.>) use when dereddening the spectra for
the analysis of T Pyx optical and UV data.
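The dereddening and slope measurements can be summarized as follows (a sketch; k_fm07 stands for an interpolator over the adopted standard extinction curve, with k(λ) = A_λ/E(B-V), and the wavelength cuts are illustrative):

import numpy as np

def deredden(wave, flux, ebv, k_curve):
    # Deredden F_lambda with A_lambda = E(B-V) * k(lambda), where k_curve is an
    # interpolator over the adopted extinction curve, k = A_lambda / E(B-V).
    return flux * 10.0 ** (0.4 * ebv * k_curve(wave))

def powerlaw_slope(wave, flux):
    # Continuum slope alpha in F_lambda ~ lambda**alpha, from a log-log fit.
    alpha, _ = np.polyfit(np.log10(wave), np.log10(flux), 1)
    return alpha

# e.g., the UV slope (1300-2000 A, illustrative cut) for E(B-V) = 0.35:
# sel = (wave > 1300) & (wave < 2000)
# alpha_uv = powerlaw_slope(wave[sel], deredden(wave, flux, 0.35, k_fm07)[sel])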
While we found that both the UV and optical continuum flux levels vary
by about the same amplitude as a function of the orbital phase,
their slope did not reveal orbital modulation.
§.§ Accretion Disk Modeling & Spectral Analysis
The spectral analysis procedure we follow is extensively described
in <cit.> and only a short overview is given here.
We use the suite of FORTRAN codes tlusty & synspec
<cit.> to generate accretion disk spectra
for a given WD mass M_ wd, mass transfer rate Ṁ, inclination i,
and inner & outer disk radii (r_ in,r_ out).
The accretion disk is based on the standard disk model <cit.>;
it is assumed to be optically thick and to have solar composition.
We generate a grid of disk spectra for
M_ wd=1.0, 1.2, 1.37M_⊙, r_ in=R_ wd,
10^-8 M_⊙/yr≤Ṁ≤ 10^-6M_⊙/yr (increasing
or decreasing Ṁ in steps of ∼50%), and for i=50^∘, 60^∘.
These theoretical spectra extend from 900 Å to 7,500 Å.
We first assume a WD mass of 1.37 M_⊙; it is understood that
accretion disk model fits with a
lower WD mass (see further down) will result in a larger mass accretion rate.
With a secondary mass of 0.13M_⊙, and an orbital period of 1.8295 hr,
we obtain a
binary separation of 585,592 km. For such a mass ratio (log(q)≈ -1.0),
the outer radius of the disk is expected to be tidally truncated
at r_d ≈ 0.5a <cit.>, where a is the binary separation,
while the Roche lobe radius of the WD is about 0.6a.
We note that for a mass ratio close to one the tidally truncated disk radius is close to 0.3a
<cit.>, while for a vanishingly small mass ratio it is close to 0.6a
<cit.>.
We compute disk models assuming an outer disk radius of 180,000 km
(90 R_ wd) and 360,000 km (180 R_ wd),
corresponding to about ∼0.3a and ∼0.6a respectively
(where we have assumed a 2,000 km radius for T Pyx's WD).
As we lower the WD mass to 1.0 M_⊙, we obtain a binary
separation much closer to 5 × 10^5 km and vary the outer disk
radius accordingly.
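The binary separation quoted above follows from Kepler's third law; a minimal sketch (with M_tot = 1.50 M_⊙ the separation comes out ∼3% larger than the 585,592 km quoted in the text, the exact value depending on the adopted component masses):

import numpy as np

G, M_SUN = 6.674e-11, 1.989e30                     # SI units

def kepler_separation(m1_msun, m2_msun, p_hr):
    # a^3 = G * M_tot * P^2 / (4 pi^2)
    m_tot = (m1_msun + m2_msun) * M_SUN
    p = p_hr * 3600.0
    return (G * m_tot * p**2 / (4.0 * np.pi**2)) ** (1.0 / 3.0)

a = kepler_separation(1.37, 0.13, 1.8295)          # ~6.0e8 m
print(f"a = {a/1e3:,.0f} km; 0.3a = {0.3*a/1e3:,.0f} km; 0.6a = {0.6*a/1e3:,.0f} km")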
We first carry out accretion disk spectral fits assuming M_ wd=1.37 M_⊙ and the
following values of the parameters: i=50^∘ & 60^∘,
outer disk radius r_ out=0.3a & 0.6a, E(B-V)=0.25, 0.30, 0.35, and a Gaia distance of
2389 pc, 2860 pc, & 3676 pc (see Table 1 for system parameters).
For each set of (i,r_ out,E(B-V),d) the spectral fit yields a
unique value of the mass accretion rate Ṁ.
In Fig.<ref> we display two of the accretion disk spectral fits we
ran assuming an inclination of 50^∘, extinction E(B-V)=0.30, and
the Gaia distance of 2860 pc. In one model the disk was truncated at
0.6a and in the second model it was truncated at 0.3a.
The resulting mass accretion rate is Ṁ=1.07 × 10^-7 M_⊙/yr
for the larger disk, versus 1.28 × 10^-7 M_⊙/yr for the smaller disk.
Since most of the flux is emitted in the UV range, we mainly fit the
UV region, and it appears immediately that the smaller disk doesn't
fit the optical as it is too blue. On the other hand, the larger disk
displays a small Balmer jump which is not seen in the observed spectrum
(we will extend on this issue in the next section).
While we are aware that the disk radius is likely about 0.5a, we
ran models for both the 0.3a and 0.6a values.
For all the values of the parameters considered here
(and including their uncertainty),
the results can be summarized as follows: we obtain a mass accretion rate of
Ṁ = 1.38_-0.87^+1.17× 10^-7 M_⊙/yr
for
i=55^∘± 5^∘, r_ out/a =0.45±0.15,
E(B-V)=0.30±0.05, and d=2860^+816_-471 pc,
assuming a near-Chandrasekhar WD mass of 1.37 M_⊙.
The error in Ṁ is due mainly to the uncertainty in the distance
and reddening, while the uncertainty in the value of the outer disk radius
contributes less than 10% to the (relative) error in Ṁ.
Next we assume a WD mass M_ wd=1.2 M_⊙ and M_ wd=1.0 M_⊙,
and obtain similar results with a larger mass accretion rate:
Ṁ = 2.16_-1.36^+1.81× 10^-7M_⊙ / yr, and Ṁ = 2.94_-1.85^+2.46× 10^-7M_⊙ / yr,
for
M_ wd=1.2 M_⊙ and M_ wd=1.0 M_⊙, respectively.
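Schematically, each fit amounts to scaling the grid of disk model spectra to the adopted distance and selecting the Ṁ that best matches the dereddened observation; a minimal sketch (the fiducial grid distance d0_pc is an assumption of this illustration, not a property of the actual model grid):

import numpy as np

def best_mdot(f_obs, disk_grid, d_pc, d0_pc=1000.0):
    # Scale each model (computed at a fiducial distance d0_pc, an assumption of
    # this sketch) to the adopted distance and pick the best-matching Mdot.
    scale = (d0_pc / d_pc) ** 2
    chi2 = {mdot: np.sum((f_obs - scale * f_mod) ** 2)
            for mdot, f_mod in disk_grid.items()}
    return min(chi2, key=chi2.get)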
§ DISCUSSION & CONCLUSION
The present UV-optical analysis shows that the mass accretion rate in T Pyx
has been steadily decreasing, and is now of the order of 10^-7 M_⊙/yr
(based on the assumed system parameters), about 40% lower than
its pre-outburst value assessed from archival IUE spectra.
The decreased activity of the system is further supported by
the weakened C iv (1550) and He ii (1640) emission lines
(in the COS G140L spectrum from March 2023),
which were prominent in the IUE pre-outburst and HST post-outburst spectra.
The mass accretion rate, however, cannot be determined
accurately since the reddening, distance and WD mass have not been
themselves assessed with a high accuracy. The WD mass is assumed
to be large (∼ 1 M_⊙ or even near-Chandrasekhar) on theoretical grounds (as explained in 1),
the Gaia parallax has a relatively large error, and while we assume
E(B-V)=0.30±0.05, some authors have derived an extinction
as large as E(B-V)=0.5±0.1 <cit.>. In our previous
work we showed how the uncertainty in the system parameters
(M_ wd, E(B-V), d, and i) affects the derived
mass accretion rate by an order of magnitude:
Ṁ∼ 10^-7± 1M_⊙/yr <cit.>.
Another source of uncertainty is the chemical composition of the accretion disk.
We assume solar abundances for the accretion disk, but the donor/secondary
could have non-solar abundances affecting the shape and slope of the disk spectrum
(if highly suprasolar/hydrogen deficient). Here too the problem is that the state of
the secondary star in T Pyx is unknown.
Absorption lines of metals (i.e. Z>2) for different temperatures in the disk cannot be detected due to the
combined action of Keplerian broadening and superposition.
As a consequence, it is the
hydrogen content (and more precisely the [H/He] ratio) that dictates
the general shape of the spectrum <cit.>, and only for large
values of [H/He] <cit.>
is the shape of the spectrum noticeably affected.
Hence, our solar composition results are valid as long as the actual metallicity of the accretion
disk (and therefore of the secondary donor star) doesn't depart too much from solar.
This is a sound assumption, since even evolved-donor CV systems, with high N and
low C abundances, are not all strongly hydrogen deficient <cit.>.
This is fortunate, since generating accretion disk models from scratch
as a function of chemical abundances is prohibitively CPU-expensive
(as we already vary the disk input parameters such as M_ wd,
Ṁ, i, and the disk radius) and the version of TLUSTY
we are using is not well suited to generate helium dominated spectra.
In comparison to the above uncertainty in the system parameters, the
systematic error in the modeling of the disk is rather negligible.
Whether we chose a disk radius of 0.3a or 0.6a didn't affect
the results quantitatively as much as it did qualitatively.
For a mass transfer rate of
∼ 10^-7 M_⊙/yr, the temperature in the disk at r=0.3a is
25,000 K and drops to 15,000 K at r=0.6a. As a consequence,
the Balmer Jump is much reduced for an outer disk radius r_ out=0.6a,
and is absent for r_ out=0.3a because of the higher temperature.
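These temperatures follow from the effective temperature profile of the standard steady-state disk, T(r) = [3GMṀ/(8πσr³)(1-√(R_in/r))]^{1/4}; the sketch below reproduces the two values quoted above (the 2,000 km WD radius is the value assumed earlier in the text):

import numpy as np

G, M_SUN, SIGMA_SB, YR = 6.674e-11, 1.989e30, 5.670e-8, 3.156e7   # SI units

def t_disk(r, m_wd_msun, mdot_msun_yr, r_in):
    # Effective temperature of a steady-state, optically thick accretion disk.
    m = m_wd_msun * M_SUN
    mdot = mdot_msun_yr * M_SUN / YR
    t4 = 3 * G * m * mdot / (8 * np.pi * SIGMA_SB * r**3) * (1 - np.sqrt(r_in / r))
    return t4 ** 0.25

a, r_wd = 5.856e8, 2.0e6        # binary separation and assumed WD radius, m
for frac in (0.3, 0.6):
    print(f"T({frac}a) = {t_disk(frac * a, 1.37, 1e-7, r_wd):,.0f} K")
# -> ~25,000 K at 0.3a and ~15,000 K at 0.6a, as quoted above.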
The slope of the continuum flux level of a r_ out=0.3a accretion disk
is steeper (bluer) than that of a r_ out=0.6a accretion disk,
but since the smaller disk has a smaller emitting
surface, its flux (for the same Ṁ) is lower than that of the larger disk.
As a consequence, the smaller disk requires a larger Ṁ
(than the larger disk) when fitting the observed spectrum (since the
disk models are scaled to the distance).
When fitting the HST spectrum, the small disk models were too steep in the
optical, while those with a large outer radius exhibited a small Balmer jump
that was not observed. These anomalies are, however, a well-known
problem in the modeling of CV WDs accreting at a high rate such as novalikes in high
state and dwarf novae in outburst
<cit.>.
Many suggestions have been advanced to explain the discrepancy and address
the problem, such as modifying the disk radial temperature profile <cit.>
or increasing the inner radius of the disk <cit.>. Some suggested irradiation
of the disk <cit.>, others suggested emission from disk winds <cit.>.
More recently large scale magnetic fields radially transporting angular momentum
and energy have been invoked <cit.>, as well as a disk model where the
energy dissipation occurs as a function of height (z) resulting in
emission from optically thin regions <cit.>.
The problem is still a matter of debate and could actually be a combination
of several of the scenarios suggested here together <cit.>.
Another source of uncertainty in disk modeling has been the orbital modulation
of the continuum flux level observed both in the UV and optical with a relative
amplitude of 8-9%. This has been, so far, attributed to the geometry of the
system where the disk likely self-eclipses due to its higher vertical extent
where it is hit by the L1-stream <cit.>.
We note that <cit.> suggest that the unusually high mass accretion rate in T Pyx
could be the result of triple binary evolution. In that scenario,
due to the disturbing effect of a distant companion (the tertiary),
the inner binary (WD+secondary) orbit can be significantly eccentric,
triggering mass transfer close to periastron passage
<cit.>. This periodic gas stripping can drive the
secondary out of thermal equilibrium and intense bursts of mass
accretion onto the WD can then kick-start an irradiation-induced
wind-driven mass-transfer phase <cit.>.
<cit.> conclude that
the current high-Ṁ state of T Pyx is likely associated with
the high eccentricity of the (inner) binary orbit in the triple system.
In that case, the light curve variability of T Pyx might not be due
only to the geometry of the rotating binary system; instead, it would
be affected by the periodic mass transfer/stripping near periastron
(periodic increase in Ṁ) and its accretion onto the WD.
The non-zero eccentricity of the binary orbit is consistent with the possibility of
the asynchronous rotation raised by <cit.>.
Though the mass accretion rate we derived of ∼ 10^-7M_⊙/yr cannot be
firmly confirmed due to all the uncertainties cited above, it is consistent
with previous estimates <cit.>.
In order for our modeling to agree with
<cit.>'s accretion rate, we would have to assume a much smaller reddening
and distance, which would be inconsistent with the Gaia distance and the lower limit
for the reddening.
One might argue that the relatively high mass accretion rate we obtain (≳ 10^-7 M_⊙/yr) must result
in steady nuclear burning on the WD surface and/or affect the WD structure
(inflating the radius of its outer envelope).
However, the super-soft X-ray emission in T Pyx began to turn off six months after the
outburst <cit.>, an indication that the nuclear burning on the WD
surface stopped.
Furthermore, the COS FUV spectral slope
is consistent with that of an accretion disk and does not accommodate any
significant contribution from a hot WD component.
If we try to fit the FUV slope with a single temperature component,
it is consistent with a temperature of the order of 30-40,000 K only.
This implies that the flux we observed is likely due almost entirely to the accretion disk.
Interestingly enough, the optical light curve of T Pyx (attributed to the
L1-inflated disk rim eclipsing the heated secondary) is very similar to the
light curve of one of the prototype supersoft X-ray sources CAL 87
<cit.>, as already pointed out by <cit.>.
In spite of all these uncertainties, it is incontrovertible that the
UV flux, and therefore the mass accretion rate, is now below its pre-outburst
level by about 40%.
This large decrease in Ṁ in the decade or so after the
2011 outburst is in sharp contrast with the rather constant pre-outburst
UV flux from the IUE spectra.
No such data were collected after the previous outburst for comparison, as
the earliest IUE spectrum was obtained in 1980, 13 1/2 years after
the Dec 66-Jan 67 outburst.
However, all the IUE spectra obtained through
the 90's have the same continuum flux level as the 1980 IUE spectrum
and show no drop in flux (except for orbital variation).
For comparison, <cit.> showed that since its outburst in 1890,
the mass accretion rate in T Pyx has been declining by a factor of
5.7 in 122 yr, while the UV 40% decrease in 12 years translates into
a decline by a factor of ∼165 in ∼120 yr,
which is a factor of ∼ 29 faster.
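The arithmetic behind this comparison, assuming a constant fractional decline rate:

# A 40% drop in 12 yr, extrapolated at a constant fractional decline rate,
# corresponds over ~120 yr to a factor of:
factor_120yr = (1.0 / 0.6) ** (120.0 / 12.0)
print(f"{factor_120yr:.0f}")            # ~165

# versus the factor 5.7 decline in 122 yr measured since 1890:
print(f"{factor_120yr / 5.7:.0f}")      # ~29 times faster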
Even in the optical, the AAVSO data show a decline of about 1 mag (V)
from its pre-2011-outburst value to the present day, which
is almost twice as fast as in the 1890-2011 light curve data.
This could be an indication that the self-sustained feedback loop between the WD
and secondary might be shutting off at an accelerating rate after the last outburst,
in agreement with the hibernation theory.
Observations in the next 5-10 years will be able to assess
whether Ṁ continues to drop at such a high rate,
or whether this is just part of a phase related to
the decline from the 2011 outburst.
For example, the UV flux level could reach a plateau within the coming years,
based on the post-1966-67-outburst IUE data showing a constant flux level
through the 80s and 90s, and assuming T Pyx behaves the same way.
In that case the drop in Ṁ is much more pronounced in the decade
following each outburst (and possibly due to the outburst itself).
Otherwise, if the UV flux continues to drop at the same rate, mass transfer
will likely completely shut off within a few hundred years and T Pyx
will enter a hibernation state.
Our analysis comes to further confirm that T Pyxidis is now in a
short-lived peculiar phase of its evolution.
Support for this research was provided by NASA through grant number
HST-GO-17190.001-A to Villanova University from the Space Telescope Science
Institute, which is operated by AURA, Inc., under NASA contract
NAS 5-26555.
JLS acknowledges support from HST-GO-13400 and NSF AST-1816100.
We wish to thank the members of the AAVSO who monitored T Pyx in the months
preceding and up to the HST visit to ensure the safety of the COS instrument
in case of a fast rise to outburst due to an unexpected nova eruption.
IRAF <cit.>,
tlusty (v203) synspec (v48) rotin (v4)
<cit.>,
PGPLOT (v5.2), Cygwin-X (Cygwin v1.7.16),
xmgrace (Grace v2), XV (v3.10)
ORCID iDs
Patrick Godon <https://orcid.org/0000-0002-4806-5319>
Edward M. Sion <https://orcid.org/0000-0003-4440-0551>
Robert E. Williams <https://orcid.org/0000-0002-3742-8460>
Matthew J. Darnley <https://orcid.org/0000-0003-0156-3377>
Jennifer L. Sokoloski <https://orcid.org/0000-0002-8286-8094>
Stephen S. Lawrence <https://orcid.org/0000-0002-7491-7052>
[Bode & Evans(2008)]bod08
Bode, M.F., & Evans, A. 2008, Classical Novae (2nd ed.; Cambridge:
Cambridge University Press)
[Chomiuk et al.(2014)]cho14
Chomiuk, L., Nelson, T., Mukai, K., Sokoloski, J.L., Rupen, M.P. et al. 2014, , 788, 130
[la Dous(1991)]lad91
la Dous, C., 1991, , 252, 100
[La Dous(1994)]lad94
La Dous, C. 1994, Space Science Reviews, Vol.67, p.1
[Darnley et al.(2017)]dar17
Darnley, M.J., Hounsell, R., Godon, P., et al. 2017, , 849, 96
[Duerbeck & Seitter(1979)]due79
Duerbeck, H.W., & Seitter, W.C. 1979, The Messenger, vol.17, p.1
[Fitzpatrick & Massa(2007)]fit07
Fitzpatrick, E.L, & Massa, D. 2007, , 663, 320
[De Gennaro et al.(2014)]deg14
De Gennaro, A., Shore, S.N., Schwartz, G.J., Mason, E., Starrfield, S. et al.
2014, , 562, 28
[Gilmozzi & Selvelli(2007)]gil07
Gilmozzi, R., & Selvelli, P. 2007, , 46, 593
[Gilmozzi & Selvelli(2024)]gil24
Gilmozzi, R., & Selvelli, P. 2024, , 681, 83
[Godon & Sion(2023)]god23
Godon, P., & Sion, E.M. 2023, , 950, 139
[Godon et al.(2014)]god14
Godon, P., Sion, E.M., & Starrfield, S., Livio, M., Williams, R.E., et al. 2014, , 784, L33
[Godon et al.(2017)]god17
Godon, P., Sion, E.M., Balman, S., Blair, W.P. 2017, , 846, 52
[Godon et al.(2018)]god18
Godon, P., Sion, E.M., Williams, R.E., & Starrfield, S. 2018, , 862, 89
[Godon et al.(2020)]god20
Godon, P., Sion, E.M., Szkody, P., & Blair, W.P. 2020, , 494, 5244
[Goodman(1993)]goo93
Goodman, J. 1993, , 406, 596
[Hack et al.(1993)]hac93
Hack, M., Ladous, C., Jordan, S.D., et al. 1993,
Monograph Series on Nonthermal Phenomena in Stellar Atmospheres -
NASA SP, Paris: Centre National de la Recherche Scientifique;
Washington, DC.: NASA, 1993
[Hamilton et al.(2007)]ham07
Hamilton, R.T., Urban, J.A., Sion, EM. et al. 2007, , 667, 1139
[Harrison(2018)]har18
Harrison, T.E. 2018, , 861, 102
[Hillman et al.(2020)]hil20
Hillman, Y., Shara, M.M., Prialnik, D., Kovetz, A. 2020, Nature Astronomy, vol.4, p.886
[Hillman(2021)]hil21
Hillman, Y. 2021, , 505, 3260
[Hubeny & Lanz(2017a)]hub17a
Hubeny, I., & Lanz, T. 2017a, A Brief Introductory Guide to TLUSTY
and SYNSPEC, arXiv:1706.01859
[Hubeny & Lanz(2017b)]hub17b
Hubeny, I., & Lanz, T. 2017b, TLUSTY User's Guide II: Reference Manual,
arXiv:1706.01935
[Hubeny & Lanz(2017c)]hub17c
Hubeny, I., & Lanz, T. 2017c, TLUSTY User's Guide III: Operational Manual,
arXiv:1706.01937
[Hubeny & Long(2021)]hub21
Hubeny, I., & Long, K.S. 2021, , 503, 5534
[Livio & Pringle(2011)]liv11
Livio, M., & Pringle, J.E. 2011, , 740, L18
[Izzo et al.(2024)]izz24
Izzo, L., Pasquini, L., Aydi, E., Della Valle, M., Gilmozzi, R. et al. 2024, , 686, 72
[Knigge(2019)]kni19
Knigge, C. 2019, private communication
[Knigge et al.(2000)]kni00
Knigge, C., King, A.R., Patterson, J. 2000, , 364, L75
[Knigge et al.(2022)]kni22
Knigge, C., Toonen, S., Boekholt, T.C.N. 2022, , 514, 1895
[Kramida et al.(2023)]kra23
Kramida, A., Ralchenko, Yu, Reader, J., and NIST Team (2023).
NIST Atomic Spectra Database (ver. 5.11), [Online].
Available: https://physics.nist.gov/asd
National Institute of Standards and Technology, Gaithersburg, MD.
DOI: https://doi.org/10.18434/T4W30F
[Kromer et al.(2007)]kro07
Kromer, M., Nagel, T., Werner, K. 2007, , 475, 301
[Linnell et al.(2005)]lin05
Linnell, A.P., Szkody, P., Gänsicke, B.T., Long, K.S., Sion, E.M., et al. 2005,
, 624, 923
[Linnell et al.(2007)]lin07
Linnell, A.P., Godon, P., Hubeny, I., Sion, E.M., Szkody, P., 2007,
, 662, 1204
[Linnell et al.(2010)]lin10
Linnell, A.P., Godon, P., Hubeny, I., Sion, E.M., Szkody, P.,
2010, , 719, 271
[Long et al.(1991)]lon91
Long, K.S., Blair, W.P., Davidsen, A.F., Bowers, C.W., Van Dyke Dison W.,
et al. 1991, , 381, L25
[Long et al.(1994)]lon94
Long, K.S., Wade, R.A., Blair, W.P., Davidsen, A.F., Hubeny, I. 1994,
, 426, 704
[Matthews et al.(2015)]mat15
Matthews, J.H., Knigge, C., Long, K.S., Sim, S.A., Higginbottom, N.
2015, , 450, 3331
[Meyer-Hofmeister et al.(1997)]mey97
Meyer-Hofmeister, E., Schandl, S., & Meyer, F. 1997, , 321, 245
[Nixon & Pringle(2019)]nix19
Nixon, C.J., & Pringle, J.E. 2019, , 628, A121
[Paczyński(1965)]pac65
Paczyński, B. 1965, AcA, 15, 197
[Paczyński(1977)]pac77
Paczyński, B. 1977, , 216, 822
[Patterson(1984)]pat84
Patterson, J. 1984, , 54, 443
[Patterson et al.(1998)]pat98
Patterson, J., Kemp, J., Shambrook, A., et al. 1998, , 110, 380
[Patterson et al.(2017)]pat17
Patterson, J., Oksanen, A., Kemp, J. et al. 2017, , 466, 581
[Pringle(1981)]pri81
Pringle, J.E. 1981, ARA&A, 19, 137
[Puebla et al.(2007)]pue07
Puebla, R.E., Diaz, M.P., Hubeny, I. 2007, , 134, 1923
[Sasseen et al.(2002)]sas02
Sasseen, T.P, Hurwitz, M., Dixon, W.V., & Airieau, S. 2002, , 566, 267
[Savage & Mathis(1979)]sav79
Savage, B.D., & Mathis, J.S. 1979, , 17, 73
[Schatzman(1949)]sch49
Schatzman, E. 1949, Annales d'Astrophysique, 12, 281
[Selvelli et al.(2008)]sel08
Selvelli, P., Cassatella, A., Gilmozzi, R., & Gonzalez-Riestra, R. 2008, , 492, 787
[Selvelli & Gilmozzi(2013)]sel13
Selvelli, P., & Gilmozzi, R. 2013, , 560, 49
[Sembach et al.(2001)]sem01
Sembach, K.R., Howk, J.C., Savage, B.D., Shull, J.M., & Oegerle, W.R. 2001, , 561, 573
[Schaefer(2018)]sch18
Schaefer, B.E. 2018, , 481, 3033
[Schaefer et al.(2013)]sch13
Schaefer, B.E., Landolt, A.U., Linnolt, M. et al. 2013, , 773, 55
[Schaefer et al.(2010)]sch10
Schaefer, B.E., Pagnotta, A., & Shara, M. 2010, , 708, 381
[Sepinsky et al.(2007)]sep07
Sepinsky, J.F., Willems, B., Kalogera, V., & Rasio, F.A., 2007, , 667, 1170
[Sepinsky et al.(2009)]sep09
Sepinsky, J.F., Willems, B., Kalogera, V., & Rasio, F.A., 2009, , 702, 1387
[Sepinsky et al.(2010)]sep10
Sepinsky, J.F., Willems, B., Kalogera, V., & Rasio, F.A., 2010, , 724, 546
[Shakura & Sunyaev(1973)]sha73
Shakura, N.I., & Sunyaev, R.A. 1973, A&A, 24, 337
[Shara et al.(1986)]shara86
Shara, M.M., Livio, M., Moffat, A.R.J., Orio, M. 1986, ApJ, 311, 163
[Shara et al.(2018)]sha18
Shara, M.M., Prialnik, D., Hillman, Y., & Kovetz, A. 2018, , 860, 110
[Shore et al.(2011)]sho11
Shore, S.N., Augusteijn, T., Ederoclite, A., & Uthas, H. 2011, , 533, L8
[Sokoloski et al.(2013)]sok13
Sokoloski, J., Crotts, A.P.S., Lawrence, S., & Uthas, H. 2013, , 770, L33
[Starrfield et al.(2020)]sta20
Starrfield, S., Bose, M., Iliadis, C. et al. 2020, , 895, 70
[Starrfield et al.(1985)]sta85
Starrfield, S., Sparks, W.M., & Truran, J.W. 1985, , 291, 136
[Starrfield et al.(1972)]sta72
Starrfield, S., Truran, J.W., Sparks, W.M., & Kutter, G.S. 1972, , 176, 169
[Thorstensen et al.(2002)]tho02
Thorstensen, J.R., Fenton, W.H., Patterson, J., et al. 2002, , 114, 1117
[Tody(1993)]tod93
Tody, D. 1993, in ASP Conf. Ser. 52, Astronomical Data Analysis
Software and Systems II, ed. R.J. Hanisch, R.J.B. Brissenden,
& J. Barnes (San Francisco, CA; ASP), 173
[Tofflemire et al.(2013)]tof13
Tofflemire, B.M., Orio, M., Page, K.L., et al. 2013, , 779, 22
[Uthas et al.(2010)]uth10
Uthas, H., Knigge, C., & Steeghs, D. 2010, , 409, 237
[Waagen et al.(2023)]waa23
Waagen, E., O'Meara, S., Poxon, M., Cyanmon, C. 2023, private communication
[Wade(1984)]wad84
Wade, R.A. 1984, , 208, 381
[Wade(1988)]wad88
Wade, R.A. 1988, , 335, 394
[Webbink et al.(1987)]web87
Webbink, R.F., Livio, M., Truran, J.W., & Orio, M. 1987, , 314, 653
[Whelan & Iben(1973)]whe73
Whelan, J., Iben, I. 1973, , 186, 1007
[Yaron et al.(2005)]yar05
Yaron, O., Prialnik, D., Shara, M.M., Kovetz, A. 2005, , 623, 398
[Zsidi et al.(2024)]zsi24
Zsidi, G., Nixon, C.J., Naylor, T., & Pringle, J.E. 2024, , in press
(arXiv:2406.03676)
Training Universal Vocoders with Feature Smoothing-Based Augmentation Methods for High-Quality TTS Systems
Jeongmin Liu and Eunwoo Song
(arXiv:2409.02517)
§ ABSTRACT
While universal vocoders have achieved proficient waveform generation across diverse voices, their integration into text-to-speech (TTS) tasks often results in degraded synthetic quality.
To address this challenge, we present a novel augmentation technique for training universal vocoders.
Our training scheme randomly applies linear smoothing filters to input acoustic features, facilitating vocoder generalization across a wide range of smoothings.
It significantly mitigates the training-inference mismatch, enhancing the naturalness of synthetic output even when the acoustic model produces overly smoothed features.
Notably, our method is applicable to any vocoder without requiring architectural modifications or dependencies on specific acoustic models.
The experimental results validate the superiority of our vocoder over conventional methods, achieving 11.99% and 12.05% improvements in mean opinion scores when integrated with Tacotron 2 and FastSpeech 2 TTS acoustic models, respectively.
§ INTRODUCTION
Recent advancements in modeling capacity have led to the development of universal vocoders capable of generating high-fidelity speech waveforms.
Trained on diverse audio samples recorded from various environments, these vocoding models accommodate a wide spectrum of voices, languages, and styles <cit.>.
However, integrating them with text-to-speech (TTS) acoustic models remains challenging due to separate training processes, potentially resulting in degraded synthesis quality caused by the acoustic model's tendency to produce overly smoothed features.
One straightforward solution is to fine-tune the vocoder using acoustic features generated by the corresponding acoustic model <cit.>.
However, this approach compromises the vocoder's universality, because acoustic models trained on different speakers or styles necessitate distinct fine-tuned vocoders, requiring substantial deployment resources and time.
Alternatively, fully end-to-end TTS models have been proposed <cit.>, whereby the joint training of the acoustic and vocoding models avoids the training-inference mismatch problem.
However, this approach may lose the flexibility to control the acoustic characteristics of the output speech, as the model often relies on implicit latent representations instead of explicit acoustic features.
To address these limitations, we propose a novel feature augmentation strategy to preserve the vocoder's universality while mitigating quality loss within the TTS framework.
This method involves applying linear smoothing filters to input acoustic features, approximating their distributions with those generated by the acoustic model (i.e., smoothed along the time and frequency axes).
Specifically, the size of the smoothing filters is randomly selected for every training step, exposing the target vocoder to various levels of smoothing.
Consequently, the vocoder gains enhanced adaptability to the acoustic model's over-smoothing problem without necessitating fine-tuning or architectural modifications.
Our proposed method offers the advantage of applicability to any type of universal vocoder.
In particular, we focus on the UnivNet model <cit.>, a generative adversarial network (GAN)-based universal neural vocoder.
We enhance its performance with an additional harmonic-noise (HN) architecture <cit.> for stable harmonic production and two discriminators, namely multi-scale short-time Fourier transform (MS-STFT) <cit.> and collaborative multi-band (CoMB) <cit.>, for improved training accuracy.
The experimental results demonstrate that our universal vocoder trained with the proposed method outperforms traditional approaches across different TTS systems.
§ METHODS
§.§ Neural vocoders
GAN-based neural vocoders <cit.> are generative models that comprise a generator G and a discriminator D <cit.>.
As illustrated in Figure <ref>, these models are designed to learn the distribution of time-domain speech waveforms 𝐱 conditioned on their corresponding ground-truth acoustic features 𝐂, with the following objectives:
min_G 𝔼_𝐳∼𝒩(0, 𝐈), (𝐱,𝐂)[ L_G(𝐳, 𝐱, 𝐂) ],
min_D 𝔼_𝐳∼𝒩(0, 𝐈), (𝐱,𝐂)[ L_D(𝐳, 𝐱, 𝐂) ],
where L_G and L_D are the following losses of the generator and the discriminator, respectively:
L_G(𝐳, 𝐱, 𝐂)
= d[ D(G(𝐳 | 𝐂)), 1 ] + λ L_aux(G(𝐳 | 𝐂), 𝐱),
L_D(𝐳, 𝐱, 𝐂)
= d[ D(𝐱), 1 ] + d[ D(G(𝐳 |𝐂)), 0 ],
where d is a distance function (e.g., L_2 distance), and λ is the weight for an auxiliary loss L_aux.
§.§ Proposed smoothing augmentation method
Previous research has demonstrated the technical feasibility of universal vocoding through the utilization of numerous training samples <cit.>. However, TTS systems often struggle to produce high-quality speech due to the exposure bias problem.
This issue arises because the vocoder, having been trained solely on ground-truth mel-spectrograms, lacks robustness in handling the over-smoothing tendencies of acoustic models.
To address this challenge, we propose a smoothing augmentation method designed to mitigate the exposure bias problem effectively within the TTS framework.
As shown in Figure <ref>, the key idea is to train the universal vocoder conditioned on smoothed mel-spectrograms that closely mimic the distribution of those generated by the acoustic models.
To approximate the target distribution, we simulate the conditional acoustic feature by applying a linear smoothing filter, as follows:
𝐂̃=𝐇*𝐂,
where * denotes the 2-dimensional convolution operation and 𝐇 represents the smoothing filter designed to cover a range of smoothings from the acoustic models.
Instead of using the ground-truth mel-spectrograms, the universal vocoder is conditioned on these smoothed features and optimized with modified objectives, as follows:
min_G 𝔼_𝐳, (𝐱, 𝐂̃)[ L_G(𝐳, 𝐱, 𝐂̃) ],
min_D 𝔼_𝐳, (𝐱, 𝐂̃)[ L_D(𝐳, 𝐱, 𝐂̃) ].
This process does not require any fine-tuning procedure or modification of the network architecture, allowing compact training without additional deployment resources.
Furthermore, as our training scheme exposes the vocoder to a diverse range of smoothings, the model becomes more generalized to the acoustic model's smoothing errors in the generation step. Thus, the entire TTS system is empowered to produce more natural speech outputs.
§.§.§ Filter design
Among various types of smoothing filters, i.e., 𝐇 in Equation (<ref>), we employ a 2-dimensional triangular low-pass filter (LPF) (in our preliminary experiments, any type of linear LPF, e.g., a rectangular LPF, could be used to compose 𝐇), defined as follows:
h_t,f
=⌈ l_t/2 ⌉ - | t - ⌈ l_t/2 ⌉|/⌈ l_t/2 ⌉^2 ·⌈ l_f/2 ⌉ - | f - ⌈ l_f/2 ⌉|/⌈ l_f/2 ⌉^2 ,
where l_t and l_f denote the filter sizes along the time frame t and frequency bin f, respectively.
To enhance the vocoder's robustness across a diverse range of smoothings, it is crucial to randomly vary the filter sizes l_t and l_f for every training step.
Specifically, these parameters are randomly sampled based on the following distributions:
l_t ∼ p(l; N_t), l_f ∼ p(l; N_f),
where N_t and N_f denote the numbers of possible candidates for l_t and l_f, respectively;
p(l; N) denotes a distribution that is mostly uniform, except for the non-smoothing case (l=1), as follows:
p(l; N) =
p_g for l=1,
p_s for l ∈{ 3, 5, ⋯, 2N-1 },
where p_g+(N-1)p_s = 1.
While p_g and p_s can have the same value, our early experiments indicated that increasing p_g beyond p_s, such as p_g=2/3, yielded improvements in the vocoder's synthetic quality.
Notably, we set the filter sizes l_t and l_f to odd numbers to ensure symmetric filters.
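As a concrete illustration, a minimal NumPy sketch of the filter construction and size sampling described above (variable names are ours; the convolve2d call in the final comment is one possible way to apply 𝐇 to the mel-spectrogram 𝐂):

import numpy as np

def triangular_kernel(l_t, l_f):
    # 2-D triangular LPF of the equation above; l_t, l_f odd; kernel sums to 1.
    def tri(l):
        c = int(np.ceil(l / 2))
        n = np.arange(1, l + 1)
        return (c - np.abs(n - c)) / c**2
    return np.outer(tri(l_t), tri(l_f))

def sample_size(n, p_g=2.0/3.0, rng=np.random):
    # Draw l from p(l; N): l = 1 with probability p_g, otherwise uniform
    # over the odd sizes {3, 5, ..., 2N-1}.
    if rng.random() < p_g:
        return 1
    return int(rng.choice(np.arange(3, 2 * n, 2)))

# Per training step (N_t = 6 and N_f = 3, the values used in the experiments):
l_t, l_f = sample_size(6), sample_size(3)
H = triangular_kernel(l_t, l_f)
# One way to realize the smoothing C_tilde = H * C defined above:
# C_tilde = scipy.signal.convolve2d(C, H, mode="same", boundary="symm")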
§.§.§ Analysis of smoothing augmentation
To validate the generalization capacity of our proposed method, we examined the mel-spectral distance (MSD; dB), defined as the L2-norm of mel-spectral frames between the ground-truth mel-spectrogram and those predicted by the acoustic model or simulated by our smoothing filters.
In Figure <ref>, the orange dashed lines depict examples of MSD histograms corresponding to distinct sizes of smoothing filters.
The random sampling of these filter sizes during training guides the vocoder to accommodate various levels of smoothing.
We believe that this enables the vocoder to effectively manage the overly smoothed features generated by acoustic models such as Tacotron 2 <cit.>, as indicated by the blue solid line.
This observation is further supported by the mel-spectrograms presented in Figure <ref>, where the simulated features exhibit tendencies similar to those generated by the Tacotron 2 acoustic model.
§ EXPERIMENTS
§.§ Datasets
The universal vocoder was trained on an internal dataset comprising recordings in five languages (Korean, Japanese, Mandarin Chinese, English, and Spanish) spoken by 73 speakers.
The dataset contained 219,407 utterances (about 269 hours), with 5% reserved for validation.
Each waveform was sampled at 24 kHz and quantized by 16 bits.
For acoustic model training, we randomly selected four Korean speakers (two female, F1 and F2, and two male, M1 and M2) from the vocoder's training dataset (seen speakers) and two Korean speakers (one female, F3, and one male, M3) not included in the training set (unseen speakers).
The corpus for seen speakers comprised 4,600 utterances (about 7.7 hours), while the corpus for unseen speakers contained 2,400 utterances (about 3.6 hours).
Validation and testing utilized 6% and 3% of each corpus, respectively.
We extracted 100-dimensional mel-spectrograms, covering 0 to 12 kHz, with a 256-length frame shift and a 1,024-length Hann window.
Additionally, log-F0 and voicing flags, extracted using the PYIN algorithm <cit.>, were included to compose 102-dimensional acoustic features.
Before training, all acoustic features were globally normalized using the mean and variance of the training set.
§.§ Model details
§.§.§ Vocoding model
Despite the availability of many state-of-the-art vocoders, we opted for the UnivNet model <cit.>, thanks to its competitive synthetic quality and fast generation speed.
Our model followed the overall setup of the original UnivNet-c16 model
(we used an open-source implementation available at <https://github.com/maum-ai/univnet/>),
but we enhanced it by incorporating the following techniques, as depicted in Figure 4b:
First, we introduced a harmonic-noise (HN) model into the generator <cit.>.
The model received three inputs composed of F0-dependent sinusoidal, Gaussian noise, and a sequence of voicing information to enable the generator to efficiently learn the periodic and aperiodic behavior of the target waveform.
We employed U-Net-style downsampling blocks <cit.> to align the sample-level inputs with the frame-level mel-spectrogram.
Second, the discriminators were replaced with MS-STFT and CoMB discriminators, extending the vocoder's capabilities to capture complex audio features.
The MS-STFT discriminator <cit.> facilitated analysis in the complex STFT domain, while the CoMB discriminator <cit.> allowed the capture of the frequency band-wise periodic attributes of the target voice.
These modifications were integrated into the model following their official implementations <cit.>.
Additionally, the upsampling ratios of the upsampling blocks were adjusted from {8,8,4} to {8,8,2,2} to connect the last two blocks to the CoMB discriminator.
Last, all activation functions in the generator were replaced with the Snake function <cit.>, known for its effectiveness in handling periodic signals <cit.>.
We refer to our enhanced vocoder as eUnivNet in the remainder of the experiments.
The generator and discriminators were trained for 600k steps using the AdamW optimizer <cit.>.
During training, the model was conditioned on the ground-truth acoustic features for the first 450k steps and the features with randomly smoothed mel-spectrograms derived from our proposed method for the remaining 150k steps.
When sampling the size of smoothing filters in Equation (<ref>), we used six and three candidates for N_t and N_f, respectively.
Additional training parameters included a weight decay of 0.01, a batch size of 32, and an exponentially decayed learning rate from 10^-4 with a decay rate of 0.99 per epoch.
§.§.§ Acoustic model
To generate acoustic features from the text, we employed Tacotron 2 (T2) with a phoneme-alignment approach <cit.> due to its stable generation and competitive synthetic quality.
The model received 364-dimensional phoneme-level linguistic features as inputs and predicted the corresponding phoneme duration through a combination of three fully connected layers and one long short-term memory (LSTM) network.
By utilizing this predicted duration, the linguistic features were upsampled to the frame level and transformed into high-level context features via three convolutional layers, followed by a bi-directional LSTM network.
Consequently, the T2 decoder autoregressively decoded those context features to reconstruct the target acoustic features.
The model was initialized by Xavier initializer <cit.> and trained by Adam optimizer <cit.>.
To faithfully evaluate the generalization capacity of the proposed method, we included a FastSpeech 2 (FS2) acoustic model <cit.> as the baseline.
The FS2 model was trained similarly to the T2 model, but it used non-autoregressive encoder and decoder.
More detailed setups for training the T2 and FS2 models are given in the conventional work <cit.>.
§.§ Evaluations
We performed naturalness mean opinion score (MOS) tests to evaluate the TTS quality of the proposed method.
Twenty native Korean listeners were asked to rate the synthetic quality using 0 (polish) the 5-point responses: 1=Bad, 2=Poor, 3=Fair, 4=Good, and 5=Excellent.
For each speaker, we randomly selected 10 utterances[
Generated audio samples are available at the following URL:
<https://sytronik.github.io/demos/voc_smth_aug>.
] from the test set (in total, 10 utterances×4 speakers×20 listeners=800 hits for each system).
The speech samples were synthesized by the different vocoders, as described in Table <ref>.
Table <ref> shows the evaluation results with respect to various systems, and the analytic results are summarized as follows:
Compared to the vanilla UnivNet-c32 (S6), our eUnivNet model (S3) performed significantly better despite having half the number of convolutional channels (e.g., 32 vs. 16).
This highlights the importance of employing the HN-generator (S4) and the MS-STFT/CoMB discriminators (S5) to improve the synthetic quality.
Notably, although the overall MOS score of the eUnivNet-G model was lower than that of the vanilla UnivNet-c32, it offered benefits in terms of smaller model size (36.0%) and faster inference speed (172.8%), as described in Table <ref>.
Among the eUnivNet-based vocoders (S1, S2, and S3), the conventional training method (S1), i.e., separated from the acoustic model, performed the worst due to the exposure bias problem.
Fine-tuning the vocoder with the generated features (S2) significantly addressed this limitation, but required time-consuming deployment resources, such as generating all features in the training set and retraining the vocoder speaker-dependently.
Conversely, a single model trained solely with our smoothing augmentation method (S3) provided competitive synthetic quality without any dependency related to the speaker or acoustic model.
The generalized performance was verified by changing the acoustic model (S7 vs. S8) and the vocoder (S9 vs. S10). The proposed method significantly improved the naturalness of synthesized speech.
This trend was also observed in the other experiments in Table <ref>, where the proposed method robustly generated unseen speakers' voices compared to conventional methods.
§ CONCLUSION
This paper proposes a novel feature smoothing augmentation method for training universal vocoders aimed at mitigating the mismatch between the acoustic model and the vocoder within the TTS framework.
Our method introduces random linear filters to augment acoustic features, thereby approximating their distributions to those generated by acoustic models.
This approach enhances the generalization capacity of universal vocoders, enabling the generation of high-quality speech outputs even when the acoustic model produces overly smoothed features.
The experimental results verified the superiority of our vocoder over conventional methods.
Future research directions should explore extending this framework to other generation tasks, such as singing voice synthesis and music/audio generation.
§ ACKNOWLEDGEMENTS
We would like to thank Min-Jae Hwang, Meta AI, Seattle, WA, USA, for the helpful discussion.
This work was supported by Voice, NAVER Cloud Corp., Seongnam, Korea.
Perspective: Floquet engineering topological states from effective models towards realistic materials

Fangyang Zhan^1,2,*, Rui Chen^3,*, Zhen Ning^1,2, Da-Shuai Ma^1,2, Ziming Wang^1,2, Dong-Hui Xu^1,2,†, Rui Wang^1,2,†

^* These authors contributed equally to this work. ^† Corresponding authors: Dong-Hui Xu, Rui Wang.

^1 Institute for Structure and Function & Department of Physics & Chongqing Key Laboratory for Strongly Coupled Physics, Chongqing University, Chongqing, 400044, P. R. China
^2 Center of Quantum Materials and Devices, Chongqing University, Chongqing, 400044, P. R. China
^3 Department of Physics, Hubei University, Wuhan, 430062, P. R. China
With significant advances in classifying and cataloguing topological matter, the focus of topological physics has shifted towards quantum control, particularly the creation and manipulation of topological phases of matter. Floquet engineering, the concept of tailoring a system by periodic fields, offers a powerful tool for manipulating the electronic properties of condensed-matter systems, and even for creating exotic non-equilibrium topological states that cannot exist in equilibrium. In this perspective, we give a brief review of recent progress in theoretical investigations of Floquet engineering topological states, from effective models towards realistic materials. We show that light irradiation can realize various desired topological states through the introduction of symmetry breaking, such as first- and higher-order Weyl fermions, quadrupole topological insulators arising from periodic driving and disorder, quantum anomalous Hall effects with a tunable Chern number, and beyond. Moreover, based on first-principles calculations and the Floquet theorem, we present several realistic material candidates proposed as potential hosts for promising Floquet topological states, facilitating their verification in experiments. We believe that our perspective on Floquet engineering of topological states will advance further studies of rich exotic light-induced phenomena in condensed matter physics.
§ INTRODUCTION
In the past decades, the study of topological phases of matter has been a significant subject <cit.>. Owing to their profound significance and extensive potential for next-generation devices with ultralow power dissipation, topological materials have emerged as a frontier in condensed matter physics as well as materials science. Recent research has seen a surge in classifying and realizing topological states in crystalline materials, thanks to progress in symmetry-based theoretical frameworks and computational methods <cit.>. Various catalogues of topological materials, such as nonmagnetic and magnetic topological electronic materials <cit.>, topological phononic materials <cit.>, and topological superconductors <cit.>, have been successively established. More recently, by further utilizing symmetry arguments, researchers have comprehensively constructed the effective models of all magnetic space groups <cit.> and established the encyclopedia of emergent quasiparticles in three-dimensional crystals <cit.>, strongly facilitating the development of topological physics. However, the current focus mainly lies on equilibrium topological physics, while studies of topological physics in non-equilibrium scenarios are still in their infancy. With the growing catalogue of topological states and topological materials, quantum control, namely the creation and manipulation of topological states, has become an imperative task.
Light-matter interaction is an important approach for dynamically modulating material properties on ultrafast timescales, enabling the creation of exotic non-equilibrium topological states that are otherwise inaccessible in equilibrium <cit.>. Among the various mechanisms of light-matter interaction, of particular interest is the concept of Floquet engineering. Within the framework of Floquet theory <cit.>, a periodic light field maps the Bloch energy bands of crystalline solids onto periodic Floquet-Bloch sidebands through multiphoton absorption or emission. Thus, light driving offers a means to manipulate electronic structures, with great potential for controlling electronic topology out of equilibrium in materials. By appropriately selecting incident light that matches the target system, Floquet engineering provides a wide range of pathways for dynamically manipulating topological states and even inducing topological phase transitions. For instance, light can gap out the Dirac cone in graphene or drive a band inversion in semiconductor quantum wells, thereby developing the notion of a Floquet topological insulator (FTI) <cit.>. A topologically nontrivial band gap can also be induced from avoided crossings of photon-dressed Floquet sidebands via the optical Stark effect <cit.>.
The Floquet topological phases depend strongly on the drive frequency, amplitude, and polarization of the incident light, exhibiting high tunability. It has been demonstrated that irradiation of circularly polarized light (CPL) can break the time-reversal (𝒯) symmetry, and thus quantum anomalous Hall (QAH) insulators with a non-zero Chern number can be obtained under irradiation of CPL <cit.>. Unlike the QAH effect present in magnetic materials, the Floquet QAH effect does not require initial magnetism and can generally arise in light-irradiated magnetic and nonmagnetic materials <cit.>. Besides, Floquet engineering can give rise to Chern flat bands with tunable large Chern numbers in twisted systems irradiated by CPL <cit.>. Inspired by the development of conventional FTIs with dipole polarization, it was found that periodic driving can further lead to Floquet higher-order topological insulators with multipole polarization <cit.>. Beyond gapped topological phases, tailoring topological semimetallic phases with first-order or higher-order topology through light irradiation has also been intensely studied <cit.>. By changing the propagation or polarization direction of the incident light, light control of symmetry breaking can be achieved, and periodic driving via light irradiation offers a fascinating avenue to realize desired gapless topological fermions. Typical examples are the Floquet Weyl semimetals (WSMs) with highly controllable Weyl nodes, which can be generated in light-irradiated topological insulators <cit.>, Dirac semimetals (DSMs) <cit.>, and nodal-line semimetals (NLSMs) <cit.>.
On the other hand, disorder plays an important role in the observation of topological edge or surface transport in realistic topological systems <cit.>, and can even induce a phase transition from a topologically trivial insulator to a topological Anderson insulator (TAI) phase <cit.>. In periodically driven systems, it has been demonstrated that disorder can induce topological phases that go beyond the well-established paradigm of static disorder-induced topological phases <cit.>. The topological phase arising from the interplay of disorder and periodic driving is dubbed a Floquet TAI. Experimentally, the Floquet topological states as well as their emergent exotic properties can be captured by time- and angle-resolved photoemission spectroscopy (TrARPES) or time-resolved transport measurements <cit.>. For instance, the Floquet sidebands and the CPL-induced gap of topological surface states were observed in the three-dimensional topological insulator Bi_2Se_3 using TrARPES <cit.>. Through ultrafast time-resolved transport measurements using a laser-triggered ultrafast photoconductive switch, the light-induced anomalous Hall effect was confirmed in CPL-driven graphene <cit.>. Recently, Floquet band engineering in semiconductors has achieved important experimental progress, such as pseudospin-selective Floquet sidebands in black phosphorus <cit.>, and optical control of valley polarization in the semiconductors MoS_2 <cit.> and BN <cit.>.
Overall, the design and control of topological states via Floquet engineering has gradually become an attractive focus in topological physics, with rich exotic phenomena and promising application prospects. To date, numerous representative works have theoretically proposed rich light-induced topological phases based on effective models, significantly advancing studies of Floquet engineering in the condensed-matter community <cit.>. However, only a few experiments have confirmed light-induced topological states and phase transitions in periodically driven systems <cit.>, and the exploration of realistic material candidates that realize these theoretical effective models has been relatively slow. Therefore, to drive Floquet engineering forward by laying a foundation for experiments, the combination of first-principles calculations and the Floquet theorem is an effective means to predict novel topological properties in realistic material candidates. In comparison with effective model calculations, the first-principles-based approach can map momentum- and spin-resolved Floquet-Bloch bands in the whole Brillouin zone of solids, and further depict complex and entangled band manifolds on ultrafast timescales <cit.>. Therefore, this perspective provides a brief review of recent progress in theoretical investigations of Floquet engineering topological states, from effective models towards realistic materials. In particular, most of the material candidates are well studied in the literature, facilitating the realization of Floquet engineering topological states and their device design in experiments.
§ BASIC METHOD OF FLOQUET ENGINEERING
In this section, we review theoretical formalisms and computational methods for Floquet engineering electronic states in crystalline materials under irradiation of time-periodic light fields.
§.§ Floquet-Bloch Hamiltonian
First, we consider a system with Bloch Hamiltonian H(𝐤) and crystal momentum 𝐤, driven by a time-periodic and space-homogeneous light field. The light field can be described by a vector potential 𝐀(t)=𝐀(t+T) with period T=2π/ω and light frequency ω, and the polarized electric field is 𝐄(t)=-∂_t𝐀(t). In the presence of the periodic drive, we obtain a time-dependent Hamiltonian H(𝐤,t)=H[𝐤+𝐀(t)]. Since the Hamiltonian is time periodic, the Floquet theorem <cit.> allows us to map it to a time-independent Hamiltonian.
Specifically, the Bloch wavefunction |Ψ(𝐤)⟩ develops into the time-dependent form
|Ψ(𝐤,t)⟩=exp[-iε(𝐤)t/ħ]|Φ(𝐤,t)⟩
with the time-periodic auxiliary function |Φ(𝐤,t)⟩=|Φ(𝐤,t+T)⟩, which can be expanded in a discrete Fourier series as
|Φ(𝐤,t)⟩=Σ_α e^-iαω t|u^α(𝐤)⟩,
where the integer α∈(-∞,+∞) is termed the Floquet index. Besides, the electronic wavefunction |Ψ(𝐤,t)⟩ in light-driven crystals is determined by the time-dependent Schrödinger equation
iħ∂/∂ t|Ψ(𝐤,t)⟩=H(𝐤,t)|Ψ(𝐤,t)⟩.
Combining Eqs. (<ref>) and (<ref>), the time-dependent Schrödinger equation can be transformed into a series of time-independent equations,
Σ_α H^α-β(𝐤)|u^α(𝐤)⟩=[ε(𝐤)+βħω]|u^β(𝐤)⟩,
with
H^α-β(𝐤)=1/T∫_0^T H(𝐤,t)e^-i(α-β)ω t dt.
Here, the time-independent square matrix H^α-β(𝐤) is dubbed the Floquet-Bloch Hamiltonian. In this representation, the time-dependent Schrödinger equation is mapped to an eigenvalue problem in an extended Hilbert space. The eigenvalue ε(𝐤) is the energy of the Floquet-Bloch states of the periodically driven system, which defines the Floquet-Bloch band structure. In fact, the eigenvalues ε(𝐤) and ε(𝐤)+nħω represent the same Floquet state, and thus we can define the first Floquet Brillouin zone as (-ħω/2, +ħω/2]. The states beyond the first Floquet Brillouin zone can be obtained through multiphoton absorption or emission from states inside the first Floquet Brillouin zone. Overall, the Floquet-Bloch bands form a series of photon-dressed replica bands, which can be deformed by light irradiation and hybridize with the folded bands at the first Floquet Brillouin zone boundary, so that the electronic and topological properties are modified by the coupling to light fields.
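To make the construction concrete, the following minimal sketch (our illustration, not code from any cited work) assembles the truncated Floquet-Bloch matrix of Eq. (<ref>) for a generic two-band Dirac model under CPL, H(𝐤,t)=H[𝐤+𝐀(t)], and diagonalizes it; all parameter values are arbitrary and ħ is set to 1.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_bloch(kx, ky, v=1.0, delta=0.1):
    """A generic two-band Dirac model (illustrative parameters)."""
    return v * kx * sx + v * ky * sy + delta * sz

def fourier_blocks(kx, ky, A, omega, n_max, n_t=256):
    """H^(n) = (1/T) int_0^T H[k + A(t)] e^{-i n w t} dt for CPL, numerically."""
    t = np.linspace(0.0, 2 * np.pi / omega, n_t, endpoint=False)
    Ht = np.array([h_bloch(kx + A * np.cos(omega * ti),
                           ky + A * np.sin(omega * ti)) for ti in t])
    return {n: (Ht * np.exp(-1j * n * omega * t)[:, None, None]).mean(axis=0)
            for n in range(-2 * n_max, 2 * n_max + 1)}

def quasi_energies(kx, ky, A=0.5, omega=5.0, n_max=3):
    """Assemble and diagonalize the truncated Floquet-Bloch Hamiltonian."""
    Hn = fourier_blocks(kx, ky, A, omega, n_max)
    dim = 2 * (2 * n_max + 1)
    HF = np.zeros((dim, dim), dtype=complex)
    for a in range(-n_max, n_max + 1):        # Floquet index alpha
        for b in range(-n_max, n_max + 1):    # Floquet index beta
            block = Hn[a - b].copy()          # block (alpha, beta) = H^(alpha-beta)
            if a == b:
                block += a * omega * np.eye(2)  # the alpha*hbar*omega ladder term
            i, j = 2 * (a + n_max), 2 * (b + n_max)
            HF[i:i + 2, j:j + 2] = block
    return np.linalg.eigvalsh(HF)

# Quasi-energy ladder at one k point; replica bands are spaced by omega:
print(np.round(quasi_energies(0.2, 0.0), 3))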
§.§ Floquet Hamiltonian in the tight-binding Wannier basis
To reveal the light-induced modification of crystalline materials, one first carries out first-principles calculations to obtain the Bloch states in a plane-wave basis. By projecting the Bloch states onto a localized Wannier basis using the WANNIER90 package <cit.>, we construct the real-space tight-binding Wannier Hamiltonian as
H^W=∑_m,n,𝐑,𝐑't_mn(𝐑-𝐑')C_m^†(𝐑)C_n(𝐑')+h.c.,
where 𝐑 and 𝐑' are lattice vectors, (m,n) is the index of Wannier orbitals, t_mn(𝐑-𝐑') are the hopping integrals between Wannier orbital m at site 𝐑 and Wannier orbital n at site 𝐑', and C_m^†(𝐑) or C_m (𝐑) creates or annihilates an electron of Wannier orbital m on site 𝐑.
When a time-periodic and space-homogeneous monochromatic light field is applied to a material, the time-dependent hopping is obtained by using the Peierls substitution <cit.>,
t_mn(𝐑-𝐑', τ)=t_mn(𝐑-𝐑')e^ie/ħ𝐀(τ)·𝐝_mn,
where 𝐀(τ) is the time-dependent vector potential of an applied light-field, and 𝐝_mn is the related position vector between Wannier orbital m at site 𝐑 and Wannier orbital n at site 𝐑'. The corresponding light-driven operator is C_m(𝐑, τ)= ∑_α=-∞^∞ C_α m(𝐑)e^iαωτ with the Floquet operator C_α m(𝐑) <cit.>. In this case, the time-dependent H^W(τ) hosts both lattice and time translational symmetries, so we can map it onto a time-independent Hamiltonian according to the Floquet theory <cit.>. By carrying out a dual Fourier transformation, the static Floquet Hamiltonian can be expressed as
H^F(𝐤, ω)=∑_m, n∑_α, β[H_mn^α-β(𝐤, ω)+αħωδ_mnδ_αβ]C_α m^†(𝐤)C_β n(𝐤)+h.c.,
where ω is the frequency of the incident light, so ħω represents the photon energy, and the matrix H_mn^α-β(𝐤, ω) can be obtained from the Wannier Hamiltonian as
H_mn^α-β(𝐤, ω)=∑_𝐑∑_𝐑'e^i𝐤· (𝐑-𝐑')(1/T∫_0^Tt_mn(𝐑-𝐑')e^ie/ħ𝐀(τ)·𝐝_mne^i(α-β)ωτdτ).
According to Eq. (<ref>), one can use first-principles calculations based on density functional theory to quantitatively simulate the evolution of electronic properties in light-matter-coupled materials. Beyond effective models, the Floquet-Wannier Hamiltonian obtained from density functional theory combined with the Floquet theorem can predict the specific light-driven electronic topology of crystalline materials out of equilibrium.
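As a minimal numerical sketch of Eq. (<ref>) (our own illustration, in units with e=ħ=1), consider a single hypothetical hopping with bond vector 𝐝_mn; for CPL the period integral reduces to Bessel functions J_n, so the printed magnitudes can be checked against |J_n|.

```python
import numpy as np

def peierls_fourier(t_mn, d_mn, A_of_t, omega, n, n_steps=512):
    """Eq. (<ref>): (1/T) int_0^T t_mn exp[i A(tau).d_mn] exp[i n w tau] d tau,
    with n = alpha - beta (units e = hbar = 1)."""
    tau = np.linspace(0.0, 2 * np.pi / omega, n_steps, endpoint=False)
    phase = np.array([np.dot(A_of_t(ti), d_mn) for ti in tau])
    return t_mn * np.mean(np.exp(1j * phase) * np.exp(1j * n * omega * tau))

# Example: CPL in the x-y plane with amplitude A0 acting on one hypothetical
# hopping t_mn = -1 with bond vector d_mn along x.
A0, omega = 0.2, 3.0
A_cpl = lambda ti: A0 * np.array([np.cos(omega * ti), np.sin(omega * ti), 0.0])
d_mn = np.array([1.0, 0.0, 0.0])
for n in range(-2, 3):
    hn = peierls_fourier(-1.0, d_mn, A_cpl, omega, n)
    print(n, np.round(hn, 4))   # magnitudes follow |J_n(A0 |d_mn|)|
```

The same routine applies to any polarization (linear, elliptical) by swapping A_of_t, which is why a numerical evaluation of Eq. (<ref>) is often more convenient than closed-form Bessel expansions.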
§ LIGHT-DRIVEN TOPOLOGICAL SEMIMETALLIC STATES FROM EFFECTIVE MODELS
In this section, we review the recent developments in light-driven topological phases from different kinds of topological semimetallic phases based on effective models.
§.§ Light-driven type-I, type-II, and hybrid NLSMs
The general model Hamiltonian of the undriven NLSM with a single nodal ring has the form <cit.>,
H_0 = c_i k_i^2 σ_0 + (m_0 - m_i k_i^2)σ_z + v_y k_y σ_y,
where m_0, m_i (i=x,y,z) and c_i are model parameters, v_y is the
velocity along the y-axis, k_i are the crystal momenta, σ_i are
Pauli matrices and σ_0 is the identity matrix. Here, Einstein's
summation convention is used, where the repeated indices imply the summation.
Depending on the parameters, the NLSM can be categorized into three types, as illustrated in Figs. <ref>(a)-<ref>(c). In Fig. <ref>(a), where the tilt is weak, the band touching exhibits a type-I nodal ring. Conversely,
Fig. <ref>(c) shows that with strong tilt, both bands align in the same direction, forming a type-II nodal ring at their intersection.
Figure <ref>(b) presents the band spectrum for a hybrid NLSM, revealing that the tilt ratio is smaller near the k_z-axis and larger near the k_x-axis.
To study the interaction of NLSMs with light, a time-dependent
vector potential 𝐀( t) =𝐀( t+T) is considered,
which is a periodic function with a period of T=2π/ω. Applying Floquet theory <cit.>
in the high-frequency limit, the periodically driven system can be
described by a static effective Hamiltonian given by <cit.>
H_eff = H_0,0 + [H_0,-1, H_0,1]/(ħω) + 𝒪(A_L^4),
where ω and A_L are the frequency and amplitude of the light, and H_m,n = (1/T)∫_0^T H(t)e^i(m-n)ω t dt are the discrete Fourier components of the Hamiltonian.
When the light propagates along the x-axis, 𝐀(t) is given by 𝐀 = A_L(0, cos ω t, η sin ω t), where η=±1 indicates the chirality of the CPL. From Eq. (<ref>), the Floquet correction is
Δ H^x = -(A_L^2/2)(m_y+m_z)σ_z - L m_z k_z σ_x,
with L = 2η A_L^2 v_y/(ħω). In the presence of the light, the coupling term gaps out the nodal ring except at two Weyl points ±𝐤_0 = (±√(m̃_0/m_x), 0, 0) with m̃_0 = m_0 - A_L^2(m_y+m_z)/2.
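As an illustrative cross-check, the sketch below evaluates the high-frequency effective Hamiltonian of Eq. (<ref>) numerically for the driven model and scans the k_x axis for the light-induced Weyl nodes; the isotropic parameter choice (m_x = m_y = m_z = m, c_i = 0, ħ = 1) is our own assumption.

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

m0, m, vy = 0.5, 1.0, 1.0     # isotropic choice m_x = m_y = m_z = m, c_i = 0

def H0(k):
    k2 = np.dot(k, k)
    return (m0 - m * k2) * sz + vy * k[1] * sy

def H_eff(k, AL=0.3, eta=1, omega=20.0, n_t=200):
    """H_eff = H_{0,0} + [H_{0,-1}, H_{0,1}]/omega for CPL along x (hbar = 1)."""
    t = np.linspace(0.0, 2 * np.pi / omega, n_t, endpoint=False)
    A = AL * np.stack([0 * t, np.cos(omega * t), eta * np.sin(omega * t)], axis=1)
    Ht = np.array([H0(k + a) for a in A])
    H00 = Ht.mean(axis=0)
    H0m1 = (Ht * np.exp(1j * omega * t)[:, None, None]).mean(axis=0)   # H_{0,-1}
    H0p1 = (Ht * np.exp(-1j * omega * t)[:, None, None]).mean(axis=0)  # H_{0,+1}
    return H00 + (H0m1 @ H0p1 - H0p1 @ H0m1) / omega

# Scan k_x at k_y = k_z = 0: the direct gap closes at the induced Weyl nodes.
kx = np.linspace(0.0, 1.2, 601)
gaps = [np.ptp(np.linalg.eigvalsh(H_eff(np.array([k, 0.0, 0.0])))) for k in kx]
m0_eff = m0 - 0.5 * 0.3**2 * (m + m)            # m0 - A_L^2 (m_y + m_z)/2
print(kx[np.argmin(gaps)], np.sqrt(m0_eff / m))  # both ~ 0.64
```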
The results indicate that light traveling along the x-axis gaps out the nodal ring, leaving a pair of Weyl nodes and causing the system to enter a WSM phase <cit.>. However, the type of the Weyl nodes is independent of the intensity and frequency of the incident light, so a type-II Floquet WSM state arises when the type-II NLSM is driven by light along the x-axis.
The band spectrum of the driven type-I NLSM [Fig. <ref>(d)] shows the type-I Weyl nodes. The bulk band spectra of the driven hybrid NLSM and the driven type-II NLSM are depicted in Figs. <ref>(e) and <ref>(f), respectively.
Besides, it has been shown that the type of the Floquet WSM phase depends on the orientation of the incident light <cit.>. When the incident light propagates along the x-axis or the z-axis, a type-II NLSM is converted into a type-II WSM, while for a driven hybrid NLSM, depending on the tilt direction, the photoinduced Floquet WSM can be of type-I [Fig. <ref>(b)] or type-II [Fig. <ref>(d)]. When the applied light propagates along the y-axis, only the positions of the nodal rings change [Fig. <ref>(c)]. Surprisingly, by rotating the incident light in the x-z plane, both type-I and type-II WSMs can be realized by tuning the driving angle and amplitude [Fig. <ref>(e)]. For comparison, Figs. <ref>(a)-<ref>(e) also show the Floquet states of a driven type-I NLSM under CPL, which exhibit features different from those of the type-II and hybrid NLSMs. Furthermore, the anomalous Hall effects of these photoinduced Floquet WSM phases have been investigated using the Kubo formula <cit.>.
§.§ Light-induced higher-order WSM phases
In this section, we show that CPL can also induce higher-order WSM phases that support both surface Fermi arcs and hinge Fermi arcs in a higher-order NLSM <cit.> or a higher-order DSM <cit.>.
The Hamiltonian for the NLSM has the form <cit.>,
H(𝐤) = im(Γ_1Γ_4 + Γ_2Γ_4) + t sin k_x Γ_1 + t sin k_y Γ_2 + [M - t(cos k_x + cos k_y + cos k_z)]Γ_3,
where the Dirac matrices are defined as Γ_1 = σ_0⊗τ_3, Γ_2 = σ_2⊗τ_2, Γ_3 = σ_0⊗τ_1, and Γ_4 = σ_1⊗τ_2, with σ_j and τ_j (j=1,2,3) Pauli matrices labeling the sublattice and layer degrees of freedom, and σ_0 and τ_0 identity matrices. Here, t is the hopping amplitude, M is the Dirac mass, and m is the additional mass that generates the higher-order topology. The system supports two bulk nodal rings, two-dimensional (2D) drumhead surface states, and one-dimensional hinge Fermi arc states.
This model can describe higher-order NLSM materials such as XTe_2 (X=Mo, W) <cit.> and 3D ABC-stacked graphdiyne <cit.>.
Figure <ref>(a) illustrates two mirror-protected bulk nodal rings on the k_n-k_z mirror plane with 𝐤_𝐧 along the k_x=-k_y axis in the first Brillouin zone. The drumhead surface states depicted in Fig. <ref>(b) are the projection of bulk nodal rings on the k_y-k_z plane. Figure <ref>(c) demonstrates the hinge Fermi arc states, which are located on two mirror-symmetric off-diagonal hinges.
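For concreteness, a minimal construction of the Bloch matrix in Eq. (<ref>) is sketched below, with the Dirac matrices built as Kronecker products; the σ⊗τ ordering and the parameter values are our own assumptions.

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

G1 = np.kron(s0, s3)   # Gamma_1 = sigma_0 (x) tau_3
G2 = np.kron(s2, s2)   # Gamma_2 = sigma_2 (x) tau_2
G3 = np.kron(s0, s1)   # Gamma_3 = sigma_0 (x) tau_1
G4 = np.kron(s1, s2)   # Gamma_4 = sigma_1 (x) tau_2

def H_honlsm(k, t=1.0, M=2.0, m=0.3):
    """Higher-order NLSM of Eq. (<ref>); m generates the hinge topology."""
    kx, ky, kz = k
    return (1j * m * (G1 @ G4 + G2 @ G4)
            + t * np.sin(kx) * G1 + t * np.sin(ky) * G2
            + (M - t * (np.cos(kx) + np.cos(ky) + np.cos(kz))) * G3)

def half_filling_gap(k):
    ev = np.linalg.eigvalsh(H_honlsm(k))
    return ev[2] - ev[1]

# Scan the mirror plane (q, -q, kz): the gap dips to zero on the nodal rings.
grid = np.linspace(-np.pi, np.pi, 121)
print(min(half_filling_gap(np.array([q, -q, kz])) for q in grid for kz in grid))
```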
When the CPL propagates along the z-axis, 𝐀(t) is given by 𝐀=A_L(ηsinω t, cosω t,0). The effective Hamiltonian of the driven higher-order NLSM can be found in Ref. <cit.>.
The light irradiation breaks both the 𝒯-symmetry and chiral symmetry, gapping out the nodal rings and leaving a pair of Weyl nodes, as shown in Fig. <ref>(d). Figure <ref>(e) demonstrates that the drumhead surface states are replaced by surface Fermi arcs. However, the mirror symmetry and the product of time-reversal and chiral symmetries are still preserved, which protect the higher-order hinge Fermi arcs shown in Fig. <ref>(f). The above results indicate that the driven system turns into a higher-order WSM, which supports both first-order surface Fermi arc and second-order hinge Fermi arc states.
Moreover, when the light propagates along the other axes, CPL always drives the higher-order NLSM into a higher-order WSM. More importantly, the propagation axis of the CPL controls the location of the Weyl nodes.
A similar conclusion holds for the CPL-driven higher-order DSM <cit.>. The undriven higher-order DSM is described by <cit.>:
H(𝐤) = ϵ_0(𝐤) + λ sin k_x Γ_1' + λ sin k_y Γ_2' + M(𝐤)Γ_3' + G(𝐤)Γ_4',
where ϵ_0(𝐤) = t_1(cos k_z - cos K_z^0) + t_2(cos k_x + cos k_y - 2), M(𝐤) = t_z(cos k_z - cos K_z^0) + t(cos k_x + cos k_y - 2), Γ_1' = s_3⊗ρ_1, Γ_2' = s_0⊗ρ_2, Γ_3' = s_0⊗ρ_3, Γ_4' = s_1⊗ρ_1, and Γ_5' = s_2⊗ρ_1; s_j and ρ_j (j=1,2,3) are Pauli matrices denoting the spin and orbital degrees of freedom, and s_0 and ρ_0 are identity matrices. This model can describe higher-order DSM materials including but not limited to Cd_3As_2 and KMgBi <cit.>. Here, t, λ, and t_1,2,z are hopping amplitudes. The two Dirac cones are located at 𝐤 = (0,0,±K_z^0), as shown at the boundary of Fig. <ref>(a).
G(𝐤)=g(cosk_x-cosk_y)sink_z represents the higher-order topological term, which gives rise to second-order hinge Fermi arc states as displayed in Fig. <ref>(b)-<ref>(c).
This system has different surface states, as demonstrated in Fig. <ref>(a). The closed Fermi ring, instead of helical Fermi arc states, emerges in the surface Brillouin zone <cit.> when the Fermi energy cuts through the surface Dirac cone.
The CPL also drives the higher-order DSM into a Floquet WSM, separating each Dirac point into a pair of Weyl points. These Weyl points host surface Fermi arc states that connect the projections of each pair of Weyl nodes, as depicted in Fig. <ref>(d)-<ref>(e). Moreover, Fig. <ref>(f) shows that the Floquet WSM also hosts hinge Fermi arc states, terminated by the projection of two adjacent Weyl nodes from two different pairs.
Moreover, CPL can drive tilted Weyl cones in Floquet WSMs when the axis of light propagation is changed. On the other hand, the higher-order term can also be written as G(𝐤) = g(cos k_x - cos k_y); this term also breaks the four-fold rotational symmetry but preserves the effective parity-time reversal (𝒫𝒯) symmetry. In this case, the CPL plays a similar role in inducing the topological phase in the higher-order DSM.
The above content focuses on higher-order topological semimetals with four-fold rotational symmetry. In a higher-order DSM with six-fold rotational symmetry, CPL irradiation can also produce a Floquet WSM that supports both first-order surface Fermi arc and higher-order hinge Fermi arc states, and the location of the Weyl nodes depends on the propagation direction of the CPL <cit.>. Furthermore, CPL can also control the degree of tilt of the resulting Weyl cones by adjusting its incident direction, enabling the realization of different types of WSMs.
§ TOPOLOGICAL STATES INDUCED BY THE INTERPLAY OF PERIODIC DRIVING AND DISORDER
The investigation of condensed matter systems in the presence of disorder is a longstanding research area of fundamental importance. For instance, disorder is crucial for the observation of integer quantized Hall plateaus <cit.>. Over the past two decades, significant progress has been made in the study of disorder effects in topological systems <cit.>. Surprisingly, despite the presence of disorder-induced localization effects, disorder may also endow a system with non-trivial topological states, i.e., the TAI <cit.>.
The interplay of disorder and periodic driving can give rise to rich topological phenomena <cit.>. In this section, we review recent works on topological states induced by the interplay of periodic driving and disorder.
§.§ Light-induced QAH effects in disordered systems
In the original version of the TAI, 𝒯-symmetry is preserved and the phenomenology is similar to the quantum spin Hall effect. Under the breaking of 𝒯-symmetry, the QAH state can be realized <cit.>. A natural question then arises: can the QAH effect be induced in disordered systems? In the following, we briefly review how the QAH state arises from the interplay of light and nonmagnetic disorder. A strategy was proposed to induce the QAH effect in disordered systems <cit.>. The idea is to explore systems in which the bulk topology is driven by nonmagnetic disorder, while the spin degeneracy is lifted under irradiation of light fields, as shown in Fig. <ref>. As the light intensity increases beyond a critical value k_A^c, the energy gap of one spin sector first closes and then reopens, while the other spin sector still possesses the nontrivial topology. In this case, the system evolves into the QAH phase with one gapless chiral edge channel [Figs. <ref>(b)-(d)].
The four-band Bernevig-Hughes-Zhang (BHZ) effective Hamiltonian <cit.> can be used to reveal the light-modulated topological phases mentioned above.
h_s(𝐤) = d_0(𝐤)σ_0 + 𝐝_s(𝐤)·σ,
where the index s=± denotes spin, or the inequivalent valleys ±K due to spin-valley locking in moiré superlattices. For the low-energy effective minimal model, we have d_0(𝐤) = -Dk^2 and 𝐝_s(𝐤) = (sAk_x, Ak_y, M-Bk^2), where M is the Dirac mass term depicting the band inversion, and the other parameters A, B, D can be obtained from experiments. The sign of the Dirac mass term characterizes the topological phase transition from a normal insulator (M>0) to a topological insulator (M<0).
In the presence of CPL and disorder, the Hamiltonian becomes time-dependent and the translational symmetry is broken by lattice disorder. However, a simple picture can still be obtained through the following approximations. In the off-resonant regime (i.e., at high frequency), the time-independent effective Hamiltonian can be approximately obtained through the Magnus expansion <cit.>. On the other hand, the effects of disorder can be taken into account by effective medium theory, such as the self-consistent Born approximation (SCBA) <cit.>. After considering these approximations, the renormalized Dirac mass can be expressed as:
M^eff_s = M - Δ^cpl_s(k_A) - Σ^dis_s(γ),
where the second term on the right hand side of Eq. (<ref>) Δ^cpl_s(k_A) is a function of light intensity k_A induced by CPL. The third term Σ^dis_s(γ) is disorder induced self-energy which can be calculated by SCBA for a given disorder strength γ. It is noted that both corrections Δ^cpl_s, Σ^dis_s are spin-dependent, leading to two spin sectors with different responses to CPL.
The effective mass M^eff_s can be extracted from the width of the quasi-band gap, as shown in Fig. <ref>(e). If the disorder is too weak to induce the TAI phase, increasing the light intensity k_A cannot induce any topological phase transition [Fig. <ref>(f)]. Therefore, a moderate disorder strength is needed to drive the system into the TAI. Specifically, as shown in Fig. <ref>(g), the effective mass is negative in the absence of CPL, M^eff_s(k_A=0)<0. Once CPL irradiation is applied, the two spin sectors respond differently, and the system shows a 𝒯-symmetry-broken TAI when M^eff_↑,↓<0. As the light intensity exceeds a critical value k_A^c, the energy gap of one spin sector first closes and then reopens, M^eff_↓>0, while the other spin sector retains the nontrivial topology, M^eff_↑<0 [Fig. <ref>(g)]. In this case, the system evolves into the QAH phase. The spin-polarized topological phases can also be characterized by the spin Hall conductivity σ^spin_xy and the charge Hall conductivity σ^c_xy [Fig. <ref>(h)]. These results conceptually demonstrate the possibility of realizing the QAH effect in a TAI using CPL.
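A schematic of the SCBA loop behind Σ^dis_s(γ) in Eq. (<ref>) is sketched below; it assumes uncorrelated on-site (Anderson) disorder drawn uniformly from [-γ/2, γ/2], whose variance γ²/12 sets the Born prefactor, together with a lattice-regularized BHZ block with D=0 and illustrative parameters (ħ=1).

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_bhz(kx, ky, A=1.0, B=1.0, M=0.2, s=+1):
    """Lattice-regularized BHZ block h_s(k) for spin sector s = +/-1 (D = 0)."""
    return (s * A * np.sin(kx) * sx + A * np.sin(ky) * sy
            + (M - 2.0 * B * (2.0 - np.cos(kx) - np.cos(ky))) * sz)

def scba_self_energy(gamma, E=0.0, s=+1, nk=64, eta=1e-3, n_iter=200):
    """Iterate Sigma = (gamma^2 / 12) <G(k, E)> to self-consistency."""
    ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    Hk = np.array([[h_bhz(kx, ky, s=s) for ky in ks] for kx in ks])
    sigma = np.zeros((2, 2), dtype=complex)
    for _ in range(n_iter):
        G = np.linalg.inv((E + 1j * eta) * s0 - Hk - sigma)   # broadcast over k
        sigma_new = (gamma**2 / 12.0) * G.mean(axis=(0, 1))
        if np.max(np.abs(sigma_new - sigma)) < 1e-8:
            break
        sigma = sigma_new
    return sigma

# The disorder correction to the Dirac mass is the sigma_z part of Sigma:
sigma = scba_self_energy(gamma=2.0)
print("delta M =", 0.5 * np.real(np.trace(sigma @ sz)))
```

In the same spirit, the CPL correction Δ^cpl_s(k_A) would be added to the bare mass before the loop, so both renormalizations of Eq. (<ref>) can be tracked on an equal footing.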
§.§ Quadrupole topological insulator with periodic driving and disorder
Recently, the concept of topological phase of matter has been extended to higher-order topological phases <cit.>.
Among various higher-order topological phases, the quadrupole topological insulator (QTI) <cit.> associated with a quantized quadrupole moment is of particular interest, which accommodates topologically protected corner states. It has been believed that spatial symmetries (such as mirror symmetries and/or four-fold rotation symmetry) and internal symmetries (such as chiral symmetry, 𝒯-symmetry and particle-hole symmetry) are crucial ingredients to design and realize QTIs. The presence of disorder explicitly breaks crystal symmetries. Therefore, the new phenomena induced by disorder in the higher-order topological states have attracted wide attention <cit.>.
In the following, we review an exotic QTI, created by intertwined periodic driving and disorder, that emerges from a topologically trivial band structure. This intriguing QTI possesses a quantized quadrupole moment protected only by particle-hole symmetry.
Starting with the Benalcazar-Bernevig-Hughes (BBH) model, a paradigmatic model of QTIs, the Hamiltonian can be written as
H_q(𝐤) = λ sin(k_y)τ_2σ_1 + λ sin(k_x)τ_2σ_3 + [γ + λ cos(k_x)]τ_1σ_0 + [γ + λ cos(k_y)]τ_2σ_2.
The schematic illustration of the lattice structure is shown in Fig. <ref> (a).
The topological phase transition is determined by the ratio of γ / λ. In the static and clean limit, the BBH model Eq. (<ref>) describes a trivial insulator when γ / λ>1 or a QTI when γ / λ<1 . The phase boundary is at γ / λ = 1. Under the irradiation of CPL, electronic structures of the system are effectively modified by the virtual photon absorption processes, which can be expressed as an effective Floquet-Bloch Hamiltonian:
H^eff(𝐤) = H_q(𝐤) + H'(𝐤),
where the second term H'(𝐤) induced by CPL gives an important modification to the original BBH model <cit.>. It is noted that chiral symmetry and 𝒯-symmetry preserved by the static BBH Hamiltonian H_q(𝐤) are both broken under the CPL. Nevertheless, the combination of these two symmetries (i.e., particle-hole symmetry) is preserved.
Particle-hole symmetry is critical to the quantization of the quadrupole moment even in the presence of both disorder and periodic driving.
The quadrupole moment Q_xy defined in real space <cit.> can characterize the QTI phase in disordered systems. When periodic driving and disorder are simultaneously present, all crystalline symmetries and chiral symmetry are destroyed. However, the preserved particle-hole symmetry can protect the quantization of the quadrupole moment <cit.>. The established topological invariant Q_xy allows one to investigate topological phase transitions in the presence of periodic driving and disorder.
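A minimal sketch of evaluating Q_xy in real space is given below; it uses the many-body expectation value of exp(2πi x̂ŷ/(L_xL_y)) over the occupied states, with a uniform ionic background subtracted. Both this background convention and the placeholder H_real are our assumptions, since conventions for the real-space Q_xy vary in the literature.

```python
import numpy as np

def quadrupole_moment(H_real, pos_x, pos_y, n_occ):
    """Q_xy = (1/2pi) Im ln det(Psi^dag U Psi) - Q_background (mod 1),
    with U = diag{exp[2pi i x_i y_i / (Lx Ly)]} over all orbitals i."""
    _, evecs = np.linalg.eigh(H_real)
    psi = evecs[:, :n_occ]                        # occupied states
    Lx, Ly = pos_x.max() + 1, pos_y.max() + 1
    U = np.exp(2j * np.pi * pos_x * pos_y / (Lx * Ly))
    det = np.linalg.det(psi.conj().T @ (U[:, None] * psi))
    q = np.angle(det) / (2 * np.pi)
    # Uniform ionic background at the average filling n_occ / N per orbital.
    q0 = (n_occ / len(pos_x)) * np.sum(pos_x * pos_y) / (Lx * Ly)
    return (q - q0) % 1.0                         # 0.5 signals the QTI phase
```

Here, H_real stands for the disordered (effective Floquet) BBH Hamiltonian on an L_x×L_y lattice with periodic boundaries, and Q_xy would be averaged over disorder realizations.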
Figure <ref>(b) depicts the phase diagram of the quadrupole moment Q_xy on the W-k_A parameter plane. Significantly, there is an explicit area of the QTI phase, which is created by the joint effort of periodic driving and disorder. This QTI phase can be created neither by the driving field alone nor by disorder alone. For example, in Fig. <ref>(b), the QTI phase cannot be induced by tuning the field strength k_A in the weak-disorder regime (W∼1) or by increasing the disorder strength in the presence of a weak driving field (k_A∼1).
The energy eigenstates obtained by directly diagonalizing the tight-binding Hamiltonian with open boundaries fully illustrate the characteristic features of the emergent QTI phase. As evident in Fig. <ref>(c), four in-gap modes at E=0 emerge as a function of k_A, implying the presence of the topologically nontrivial QTI phase. The zero-energy modes in the bulk gap correspond to corner states. Furthermore, the topological phase transitions induced by the interplay of periodic driving and disorder can be understood through a simple picture based on the effective medium theory. The hopping amplitudes renormalized by periodic driving and disorder (γ_x,y → t^d_x,y) are displayed in Fig. <ref>(d). When t^d_x<1 and t^d_y<1 simultaneously, the disorder-induced Floquet QTI phase is created. This picture agrees with the numerical computations. This intriguing QTI phase, protected only by particle-hole symmetry and requiring the simultaneous presence of disorder and periodic driving, further enriches the symmetry-protection mechanisms of higher-order topology.
§ FLOQUET ENGINEERING OF TOPOLOGICAL STATES IN REALISTIC MATERIALS
In this section, we review the progress of light-driven QAH states and controllable Weyl fermions proposed using first-principles calculations combined with Floquet theory.
§.§ The realization of light-driven QAH and VQAH states
The light-driven QAH states can be realized in 2D nonmagnetic MX_2/WTe_2 (M=Mo, W; X=S, Se) transition metal dichalcogenide (TMD) heterobilayers under the irradiation of CPL. Considering the spin-valley locking in TMDs, the valley-polarized quantum anomalous Hall (VQAH) state with one spin- and valley-resolved chiral edge channel in light-irradiated TMD heterobilayers behaves as a perfect topological spin-valley filter [Fig. <ref>(a)]. Figures <ref>(b)-<ref>(e) illustrate the evolution of the spin-resolved band structures around the K and K' valleys under CPL irradiation. The bands of the spin-up states around the K valley are modified more drastically than those of the spin-down states around the K' valley. More importantly, with increasing light intensity, the band gap of the spin-up states first closes and then reopens; that is, only the spin-down states preserve the inverted band topology, resulting in a topological phase transition from a valley quantum spin Hall (VQSH) state to a VQAH state.
By integrating the Berry curvature shown in Figs. <ref>(f) and <ref>(g), one can obtain 𝒞_K=1 and 𝒞_K'=-1, and the valley Chern number is 𝒞_v=2. For the VQAH state, the Berry curvature Ω_z distributes and diverges only near the K' valley, giving 𝒞_K=0 and 𝒞_K'=-1, so the Chern number is 𝒞=-1. A topological phase with a specific topological invariant gives rise to unique nontrivial edge states.
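On the computational side, these (valley-resolved) Chern numbers can be extracted from the Floquet-Bloch eigenstates with the standard lattice field-strength (Fukui-Hatsugai) algorithm; a generic, self-contained sketch is given below, where h_k is any Bloch or effective Floquet Hamiltonian returning a matrix at momentum (kx, ky), and the function name and grid size are our placeholder choices.

```python
import numpy as np

def chern_number(h_k, n_occ=1, nk=60):
    """Lattice field-strength (Fukui-Hatsugai) Chern number of the lowest n_occ bands."""
    ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    # Occupied eigenvectors on the grid: shape (nk, nk, dim, n_occ).
    u = np.array([[np.linalg.eigh(h_k(kx, ky))[1][:, :n_occ] for ky in ks]
                  for kx in ks])

    def link(a, b):
        # Gauge-invariant link variable det(<u_k | u_{k+dk}>), normalized.
        d = np.linalg.det(np.swapaxes(a.conj(), -1, -2) @ b)
        return d / np.abs(d)

    Ux = link(u, np.roll(u, -1, axis=0))
    Uy = link(u, np.roll(u, -1, axis=1))
    # Berry flux per plaquette from the plaquette product of links.
    F = np.angle(Ux * np.roll(Uy, -1, axis=0) /
                 (np.roll(Ux, -1, axis=1) * Uy))
    return int(np.rint(F.sum() / (2 * np.pi)))
```

For the valley-resolved numbers 𝒞_K and 𝒞_K', the same plaquette flux F is summed only over a Brillouin-zone patch enclosing the corresponding valley.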
Without light irradiation, the 𝒯-invariant VQSH state shows two opposite chiral edge states with Kramers degeneracy at the K and K' valleys [Fig. <ref>(h)]. When the light intensity increases to 0.045 Å^-1, the 𝒯-broken VQSH state loses the Kramers degeneracy and exhibits different inverted band gaps at the K and K' valleys, but the chiral edge states remain clearly visible [Fig. <ref>(i)], confirming the nontrivial feature. As shown in Fig. <ref>(j), the VQAH state possesses one chiral edge state connecting the valence and conduction bands around the K' valley, while the bands around the K valley exhibit a topologically trivial normal-insulator (NI) state. Furthermore, depending on the light helicity, the CPL can selectively switch the states between the two valleys and spins, providing a reliable scheme to realize an optically switchable topological spin-valley filter <cit.>.
The Floquet QAH states can further be obtained from topologically trivial semiconductors. As illustrated in Fig. <ref>(a), under irradiation of CPL, periodic driving gives rise to Floquet-Bloch bands, and two particular bands [labeled E_c^F and E_v^F in Fig. <ref>(a)] move close to the Fermi level via the optical Stark effect <cit.>.
This proposal can be implemented in the 2D semiconductors MSi_2Z_4 (M = Mo, W, V; Z = N, P, As) family materials.
This family of materials, including magnetic and nonmagnetic members, hosts excellent stability; in particular, MoSi_2N_4 and WSi_2N_4 have been successfully synthesized in experiments <cit.>. As a representative example, the band structure of VSi_2N_4, with the valence and conduction bands contributed by the d_xy & d_x^2-y^2 and d_z^2 orbitals of the V atoms, respectively, depicts a trivial semiconducting feature with valley degeneracy in Fig. <ref>(b). Under light irradiation, in addition to the equilibrium bands (black solid lines), the Floquet-Bloch bands created by absorption (red dashed lines) or emission (blue dashed lines) of a photon are present in Fig. <ref>(c).
Interestingly, the band gap at the K point closes and reopens twice with increasing light intensity [Fig. <ref>(d)]. Due to the threefold rotational (C_3) symmetry, a triangular distortion of the Fermi surface around the K points is present [Fig. <ref>(e)], known as trigonal warping <cit.>.
The presence of trigonal warping would like to strongly enrich topological phases. As shown in Fig. <ref>(f), the phase diagram characterized by 𝒞_K (K') as functions of ħω and eA/ħ indicates that there are five distinct topological phases, such as regime I: 𝒞_K = 0 and 𝒞_K'=0, regime II: 𝒞_K = +1 and 𝒞_K'=0, regime III: 𝒞_K = -3 and 𝒞_K'=0, regime IV: 𝒞_K = -3 and 𝒞_K'=-1, and regime V: 𝒞_K = 0 and 𝒞_K'=-1. Except the topologically trivial regime I, other four regimes are all related to the topologically nontrivial VQAH state. The Berry curvature distributions for regimes III and IV are plotted in Figs. <ref>(g) and <ref>(h). The nonzero Berry curvature Ω_z(𝐤) diverges near the K and/or K' points. In particular, the Ω_z(𝐤) near the K point exhibits the C_3-symmetry as shown in the insets of Figs. <ref>(g) and <ref>(h), further confirming nontrivial band topology associated with trigonal warping. The Floquet VQAH states with specific first Chern number 𝒞 and valley-resolved Chern number 𝒞_K (K') correspond to the valley-dependent chiral edge channels [Fig. <ref>(i)] and quantized Hall conductance σ_xy [Fig. <ref>(j)], characterizing the global band topology of VQAH states.
Besides, photoinduced high-Chern-number QAH states can also occur in higher-order topological insulators. As mentioned earlier, the light-driven BBH model directly captures the influence of the light field on the QTI. Under the irradiation of CPL, the energy spectrum of the BBH model shows both one-dimensional edge states (colored blue) and zero-dimensional corner states (colored red) inside the gap [Fig. <ref>(a)], indicating the coexistence of QTI and Chern insulator phases. As shown in Fig. <ref>(b), the quadrupole moment Q_xy remains quantized to 1/2 as the light intensity increases, indicating that the irradiation of CPL does not destroy the higher-order topology; meanwhile, the Chern number 𝒞 undergoes a transition from 0 to 1. Furthermore, phases with 𝒞=2 or 𝒞=3 and Q_xy=1/2 are present at higher frequencies <cit.>.
These photoinduced high-Chern-number QAH states can be realized in experimentally synthesized 2D graphdiyne <cit.>. The crystal structure of graphdiyne, constructed from sp- and sp^2-hybridized carbon atoms, is shown in Fig. <ref>(c). The calculated band structures around Γ for various light intensities and photon energies of the CPL are shown in Fig. <ref>(d).
Similar to the case of the BBH model, multiple band inversions occur as expected, allowing the Chern number to be tuned continuously from the trivial state 𝒞=0 to a nontrivial QAH state with 𝒞=3.
Figure <ref>(e) illustrates the edge states of CPL-irradiated graphdiyne along the zigzag direction. One can see that the number of chiral edge states varies with light intensity and photon energy, further supporting the fact that QAH states can be obtained by irradiating CPL on a 2D higher-order topological insulator. Moreover, by manipulating the light intensity and photon energy, one can realize high-Chern-number QAH states up to 𝒞=4 in graphdiyne. The phase diagram summarizes the parameter regimes of the different phases and gives deep insight into the topological phase transitions. As shown in Fig. <ref>(f), there are five distinct phase regimes corresponding to the continuously changing Chern number (ranging from 0 to 4).
§.§ Light-manipulated Weyl nodes in topological semimetallic materials
Distinct from gapped topological phases such as Chern insulators, topological semimetals possessing gapless nodal points (or nodal lines) near the Fermi level are particularly tied to symmetries.
The nontrivial band topology of topological semimetals often leads to attractive phenomena, such as ultrahigh carrier mobility <cit.>, half-integer quantum Hall effects <cit.>, large diamagnetism <cit.>, and electromagnetic duality <cit.>, and they are thus considered to have a wide range of applications in future devices and technologies.
Benefiting from the efforts devoted to uncovering mappings between symmetries and band topology <cit.>, it is now possible to actively manipulate transitions between different topological states and thereby design desired topological semimetals under the irradiation of light fields.
For instance, it has been shown that Floquet WSMs can be generated in materials with QSH state, Dirac fermion, triple fermion, and nodal-line fermion subjected to periodic driving light fields <cit.>.
Here, we briefly review material proposals for light-induced WSMs from triple fermions in TiO and from nodal-line fermions in the carbon allotrope bct-C_16.
TiO, crystallizing in the tungsten carbide (WC-type) structure with space group P-6m2 (D_3h, No. 187), has been demonstrated to be an ideal candidate with triple fermions near the Fermi level.
As schematically illustrated in Fig. <ref>(a), the triply degenerate nodal points in WC-type TiO without light irradiation are protected by the C_3 symmetry and the vertical mirror symmetry combined with 𝒯-symmetry, σ_v⊕𝒯. Under a time-periodic and space-homogeneous CPL, the light irradiation breaks σ_v⊕𝒯 and thereby splits the triply degenerate nodal points into twofold degenerate Weyl nodes.
The orbital-resolved band structures along high-symmetry paths are shown in Fig. <ref>(b).
There is a band crossing point along the high-symmetry Γ-A path, which is mainly contributed by the e_g orbital (d_xy and d_x^2-y^2) and the d_z^2 orbital.
In fact, the enlarged view of the bands along Γ-A exhibits three sets of bands with distinct band degeneracy [Fig. <ref>(c)], i.e., the non-degenerate Λ _4 and Λ _5 bands, and the doubly degenerate Λ _6 band. The Λ _4(Λ _5) band crosses with the doubly degenerate Λ _6 band, forming the triply degenerate nodal points that are protected by C_3 and σ_v⊕𝒯.
The light irradiation lifts this double degeneracy, as shown in Figs. <ref>(d) and <ref>(e), forming two non-degenerate bands characterized by the two C_3 eigenvalues e^iπ/3 and e^-iπ/3, respectively. Consequently, the triply degenerate nodal points along Γ-A are absent. The four Weyl points located along the rotation-invariant high-symmetry line Γ-A are constrained by the rotation symmetry C_3, indicating that CPL irradiation breaks σ_v⊕𝒯 but preserves C_3. The obtained LDOS projected on the semi-infinite (010) surface is plotted in Figs. <ref>(f) and <ref>(g), where the characteristic surface states terminate at the projections of the bulk Weyl nodes.
The application of lattice strain and its coupling with CPL offers an effective way to manipulate the electronic and topological properties of WC-type TiO. Figure <ref>(h) shows that the surface states connecting the projections of the bulk Weyl nodes are clearly visible under 4% tensile strain. For various intensities of CPL, the strained WC-type TiO exhibits distinct Weyl semimetallic phases with different numbers of Weyl nodes and Fermi arcs. To be specific, as the light intensity increases, the photon-dressed band structures of strained WC-type TiO show that the two lowest conduction bands and the two highest valence bands simultaneously move away from the Fermi level. Consequently, the Weyl nodes W_1 and W_2 approach each other and then annihilate at a specific light intensity, as described in Fig. <ref>(i).
The carbon allotrope bct-C_16 crystallizes in a body-centered tetragonal (bct) structure with space group I4_1/amd [Fig. <ref>(a)], which can be obtained from the famous T-carbon through a temperature-driven structural transition <cit.>. Because the SOC of the carbon element is extremely tiny, the interplay between the SOC and light irradiation can be ignored. Therefore, bct-C_16 can be considered an ideal platform to study photon-dressed topological states. As shown in Fig. <ref>(b), bct-C_16 is an NLSM protected by the 𝒫𝒯 symmetry.
The Dirac nodal line is located in the mirror-reflection-invariant k_x-k_y plane with k_z = 0. Under a periodic field of a linearly polarized laser (LPL), the NLSM phase transitions to a mixed-WSM phase with two pairs of tunable Weyl nodes.
As shown in Fig. <ref>(c), the light irradiation markedly modifies the electronic band structure of bct-C_16. The previous band crossings in the Γ-X and Γ-M directions are both gapped, indicating that the nodal-line fermions in bct-C_16 disappear. With increasing light intensity, the band gaps are further enlarged [inset of Fig. <ref>(c)]. The band profiles around one pair of Weyl nodes (i.e., W_1^+ and W_1^-) evolve with increasing amplitude of the LPL, as shown in Figs. <ref>(d)-(l); the other pair of Weyl nodes shows the same behavior with respect to the 𝒯-symmetry. The band dispersion around W_1^- for light intensities eA_z/ħ of 0.03, 0.059, and 0.066 Å^-1 is illustrated in Figs. <ref>(d), <ref>(e), and <ref>(f), respectively. The right-handed Weyl node W_1^- remains type-I although it becomes more tilted with increasing light intensity. In contrast, the light-dependent change around W_1^+ is more remarkable. When the light intensity eA_z/ħ increases from 0.03 to 0.066 Å^-1, the left-handed Weyl node W_1^+ undergoes a transition from type-I [Fig. <ref>(g)] to type-II [Fig. <ref>(i)]. In this transition process, a critical type-III Weyl node appears between the type-I and type-II states at eA_z/ħ = 0.059 Å^-1 [Fig. <ref>(h)]. The 3D plots of the band profiles around W_1^+ with eA_z/ħ = 0.03, 0.059, and 0.066 Å^-1 are respectively shown in Figs. <ref>(j), <ref>(k), and <ref>(l), consistent with the topologically nontrivial features of type-I, type-II, and type-III Weyl fermions. These results demonstrate that unconventional Weyl pairs composed of distinct types of Weyl nodes are realized in bct-C_16.
In addition to the materials mentioned above, other realistic material systems with controllable Floquet topological states have been proposed along analogous lines, such as graphene <cit.>, FeSe <cit.>, black phosphorus <cit.>, Na_3Bi <cit.>, MnBi_2Te_4 films <cit.>, and so on. These experimentally synthesized materials offer excellent platforms for controlling electronic properties to achieve new and desired topological states.
§ SUMMARY AND PERSPECTIVE
In summary, we have provided a brief review of recent progress in theoretical investigations of Floquet engineering topological states in condensed-matter systems under light irradiation. The study of various Floquet topological states, such as light control of type-I, type-II, and type-III Weyl fermions and their topological phase transitions, photoinduced QAH effects with tunable Chern numbers, and higher-order topological insulators arising from periodic driving and disorder, paves a fascinating path towards realizing desired topological states with high tunability. The strong dependence of Floquet topological phases on light-induced symmetry breaking provides a valuable platform for investigating the interplay between symmetry and topology, as well as for generating exotic non-equilibrium topological phases unattainable in equilibrium conditions. Moreover, we have shown that the combination of first-principles calculations and the Floquet theorem offers a reliable avenue to map momentum- and spin-resolved Floquet-Bloch bands in the whole Brillouin zone of solids; most of the predicted material candidates have been synthesized or well studied at the experimental level, facilitating measurements of Floquet engineering topological states and their device design in experiments.
Besides, although the Floquet theorem combined with effective models and first-principles calculations can effectively connect theoretical investigations of Floquet engineering topological states to experiments, the complex and entangled band manifolds on ultrafast timescales remain unclear. In particular, a thorough understanding of Floquet topological states in periodically driven systems, such as the interplay between non-equilibrium topological states and electron-electron correlations, is a long-term challenge. We believe that further study of Floquet engineering topological states will uncover even more exotic phenomena yet to be imagined, establishing this field as a promising area with attractive physical phenomena in the condensed-matter community.
Acknowledgements
Not applicable.
Funding
This work was supported by the National Natural Science Foundation of China (NSFC, Grants No. 12204074, No. 12222402, No. 92365101, No. 12347101, No. 12304195, No. 12304191, and No. 12074108) and the Natural Science Foundation of Chongqing (Grants No. 2023NSCQ-JQX0024, and No. CSTB2022NSCQ-MSX0568).
Availability of data and materials
Not applicable.
Declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Author contributions
RW and DHX supervised the work. FZ and RC contribute to the work equally. All authors read and edited the full manuscript. Introduction (RW and DHX); basic method of Floquet engineering (RW); light-driven topological semimetallic states from effective models (RC and ZW); Topological states induced by the interplay of periodic driving and disorder (ZN); Floquet engineering of topological states in realistic materials (FZ and DSM); summary (RW). Overview of the review (RW and DHX). All authors have approved the manuscript.
arXiv:2409.03484v1 [astro-ph.SR], 5 September 2024
Exploring the magnetic and thermal evolution of a coronal jet
Sushree S Nayak, Samrat Sen, Arpit Kumar Shrivastav, R. Bhattacharyya, P. S. Athiray
Categories: astro-ph.SR, physics.plasm-ph, physics.space-ph
http://arxiv.org/abs/2409.03484v1
Sushree S. Nayak (ORCID 0000-0002-4241-627X)
Center for Space Plasma & Aeronomic Research, The University of Alabama in Huntsville, Huntsville, Alabama 35899, USA

Samrat Sen (ORCID 0000-0003-1546-381X)
Instituto de Astrofísica de Canarias, 38205 La Laguna, Tenerife, Spain

Arpit Kumar Shrivastav (ORCID 0000-0001-9035-3245)
Aryabhatta Research Institute of Observational Sciences, Nainital, India-263002
Joint Astronomy Programme and Department of Physics, Indian Institute of Science, Bangalore 560012, India

R. Bhattacharyya (ORCID 0000-0003-4522-5070)
Udaipur Solar Observatory, Physical Research Laboratory, Dewali, Bari Road, Udaipur 313001, India

P. S. Athiray (ORCID 0000-0002-4454-147X)
Center for Space Plasma & Aeronomic Research, The University of Alabama in Huntsville, Huntsville, Alabama 35899, USA
NASA Marshall Space Flight Center, ST13, Huntsville, AL 35812, USA

Corresponding authors: Sushree S. Nayak and Samrat Sen ([email protected], [email protected])
§ ABSTRACT
Coronal jets are captivating eruptions frequently observed in the solar atmosphere, formed primarily through magnetic reconnection. Despite their short-lived nature and lower energy compared to many other eruptive events, e.g. flares and coronal mass ejections, they play an important role in heating the corona and accelerating charged particles. However, their generation in the ambience of the non-standard flare regime is not fully understood and warrants a deeper investigation, in terms of their onset, growth, and eruption processes and their thermodynamic evolution. Toward this goal, this paper reports the results of a data-constrained three-dimensional (3D) magnetohydrodynamics (MHD) simulation of an eruptive jet, initialized with a Non-Force-Free-Field (NFFF) extrapolation and carried out in the spirit of Implicit Large Eddy Simulation (ILES). The simulation focuses on the magnetic and dynamical properties of the jet during its onset and eruption phases; the jet occurred on February 5, 2015 in the active region NOAA AR12280 and was associated with a seemingly three-ribbon structure. In order to correlate its thermal evolution with the computed energetics, the simulation results are compared with a differential emission measure (DEM) analysis in the vicinity of the jet. Importantly, this combined approach provides an insight into the onset of reconnection in transients in terms of the emission and the corresponding electric current profiles from the MHD evolution. The presented study captures the intricate topological dynamics and finds a close correspondence between the magnetic and thermal evolution in and around the jet location. Overall, it enriches the understanding of the thermal evolution due to MHD processes, which is one of the broader aspects of the coronal heating problem.
§ INTRODUCTION
Solar coronal jets are among the fascinating eruptions from the solar atmosphere, alongside flares and coronal mass ejections (CMEs). They are collimated plasma flows that are morphologically inverted-Y in shape, short-lived, and energetically 4-5 orders of magnitude weaker than flares <cit.>. The study of jets is essential as they play an important role in heating the corona and accelerating the solar wind <cit.>.
The generation mechanism behind these jets is primarily ascribed to magnetic reconnection, which involves the reconfiguration of the magnetic field topology while releasing stored magnetic energy into Ohmic heating and kinetic energy and accelerating charged particles from the site of reconnection. Fully or partially open magnetic field lines near coronal holes or at the boundaries of active regions are observed to be favorable sites for ejecting jets, while the triggering may occur either through flux cancellation <cit.> or flux emergence <cit.>. In one of the important studies, <cit.> reported a plausible scenario for the eruption of coronal jets, which is now also extensively applied to many similar bursts on different energetic scales. According to these studies, a pre-existing minifilament escapes the canopy of surrounding open and closed field lines while reconnecting both at that site and at its own footpoints. The plasma material ejects out along that collimated channel and is dubbed a spire, carrying a swirling motion. Jets of this type are popularly known as "blowout jets" <cit.>. In addition, jets are often associated with an initial bright point, also known as a jet bright point (JBP) <cit.>, before achieving their fully grown structure.
Flare ribbons are inherent to solar eruptions. The well-known standard flare model, a.k.a. the CSHKP model <cit.>, pictures a solar eruption in two dimensions (2D) as a consequence of the release of a magnetic flux rope above the polarity inversion line (PIL) from the solar atmosphere. When the flux rope overcomes the tension force of the overarching magnetic loops, a current sheet is created below it, ensuing reconnection and producing two ribbons at the footpoints of the reconnected field lines. While this is the case for standard flares irrespective of their eruptive nature, there exist non-standard flares which produce different kinds of ribbons, namely circular/quasi-circular flare ribbons <cit.>, X-shaped ribbons <cit.>, and three-ribbon types <cit.>.
These non-standard ribbons or flares do not fit the standard flare model's explanation of the flaring or eruptive process. In fact, in the state-of-the-art scenario, their onset, growth, and eruption processes are not fully understood, and they merit attention. Besides, if they are associated with a jet or CME, it is again challenging to comprehend the eruption process, the formation of current sheets, or the reconnection affair broadly enough to capture in three-dimensional (3D) models. Nonetheless, several attempts combining multiple observations with theoretical/numerical modeling are being made to understand the aforementioned criticalities involved in the whole eruption process in these types of atypical events. In particular, when the responsible magnetic configuration in these events is analyzed to detect a potential reconnection-triggering location, several candidate configurations have been reported as initiation mechanisms. For example, <cit.> observed the formation of current sheets "due to stressing of spine" of a magnetic null point (ideally, |𝐁| = 0) skeleton in their simulation, which was associated with a quasi-circular flare ribbon. Likewise, <cit.> explained the role of a magnetic null point and quasi-separatrix layers (QSLs; locations bearing a sharp change in magnetic connectivities; <cit.>) in triggering a circular flare ribbon in the active region AR 12192 via a magnetohydrodynamics (MHD) simulation initialized with a non-force-free-field (NFFF) extrapolation. Similarly, <cit.> discussed a three-ribbon flare produced by reconnection along a coronal null line. In a different scenario of an X-shaped ribbon in <cit.>, a hyperbolic flux tube (HFT; the intersection of two QSLs; <cit.>) was found in the non-linear-force-free-field (NLFFF) constructed magnetic topology, where the dissipation of current sheets leads to the ribbons.
With the above backdrop, in this paper we focus on understanding the onset of an eruptive jet associated with an atypical flare-ribbon type, and more broadly the reconnection process, by analyzing the magnetic and thermal evolutions side by side with the observed dynamics.
The novelty of this approach lies in its effectiveness in emphasizing the triggering mechanism, the dissipation of magnetic energy, and the decay of the current, together with their implications for the emission near the jet region.
We study a particular jet eruption that occurred on February 5, 2015, in AR12280, as reported in <cit.>. That study mainly concentrated on investigating the oscillation of a filament that was hit by the erupting jet; the magnetic field topology and other magnetic properties (e.g. field-line twist, field strength, Ohmic heating, etc.) were not explored owing to the constraints of spectroscopic imaging. The role of the magnetic properties becomes significant at the low plasma-β of the solar corona, and hence it is important to study the magnetic properties in and around the jet during the eruption using MHD simulations. It also motivates us to enrich the general understanding of the onset process of transients.
The above discussions emphasize the role of magnetic topology and of the sites of reconnection in 3D for any transient. However, direct and routine measurements of the coronal magnetic field are still unavailable. Therefore, magnetic field extrapolation has become an alternative means to obtain the coronal magnetic field topology using the available photospheric magnetic fields. Myriad numerical attempts have been made to perform realistic modeling of the coronal magnetic field, such as potential source surface models <cit.>, several force-free models <cit.>, non-force-free models <cit.>, magneto-frictional approaches <cit.>, and magnetohydrostatic models <cit.>, which include both static and time-dependent solutions. For further details on the features of different coronal models, we direct the readers to <cit.> and references therein. In this work, we have adopted the non-force-free-field extrapolation model devised by <cit.> to construct the coronal magnetic field topology. We have then initiated a data-constrained MHD simulation with the NFFF-extrapolated magnetic field using the EULAG (Eulerian/semi-Lagrangian) MHD model <cit.> to track the eruption process. In our study, we have endeavored to deliver a collective analysis of the event using magnetic field extrapolation, MHD modeling, and a comparison of the modeled parameters with the derived observables near the jet and the ribbons.
The EULAG-MHD code is based on an incompressible regime and is not capable of estimating the thermodynamic evolution directly. This motivates us to use the differential emission measure (DEM) technique to estimate the thermal evolution of the jet region and its surroundings from observational data. This method is extensively used to evaluate thermodynamic information for various solar observations in general; for coronal jet studies in particular, it has been used to explore the thermodynamic evolution of the spires, footprints, and source regions <cit.>.
The remainder of the paper is organized as follows. Section <ref> outlines the observed active-region jet eruption. In Section <ref>, we report the results obtained from the DEM analysis of the event. Section <ref> explains the rationale behind the NFFF extrapolation model and discusses the modeled extrapolated field of AR12280. Section <ref> details the EULAG-MHD model, the required set-up, and the simulated results. Lastly, Section <ref> summarizes the key findings of the work, highlights the novelty of the study, and concludes with how this work can be useful in a broader context for understanding the solar atmosphere in future studies.
§ THE JET OBSERVED ON 2015 FEBRUARY 5 IN THE ACTIVE REGION AR12280
We have studied the jet in the active region AR12280 on 2015 February 5, which peaked at ≈ 20:42 UT. In Figure <ref>, we have plotted the jet covering the whole active region in the field of view (FOV) of the 304 Å pass-band of the Atmospheric Imaging Assembly (AIA; <cit.>) onboard the Solar Dynamics Observatory (SDO; <cit.>). Panel (a) corresponds to a time instance where no sign of jet activity is detected. Around ≈ 20:30 UT, we spotted the appearance of a bright point on the east side of the AR, marked by the white box in panel (b), which is referred to as the JBP. This is a signature of the onset of the jet, which developed afterward, as shown in panel (c) around ≈ 20:46 UT. The jet base region is highlighted by the white box there. A legible three-ribbon structure can also be noticed. Panel (d) depicts a zoomed view of the jet, highlighting its base region with the dotted white circle and the direction of the spire with the white arrow. An animation is provided showing the evolution of the jet in the 304 Å channel; the animation starts at 20:24:07 UT and ends at 21:03:19 UT. We have plotted the B_z component of AR12280 at 20:24 UT, obtained from the Helioseismic and Magnetic Imager (HMI; <cit.>), in Figure <ref>. Similarly, we have marked the jet base region with the black box and the bright-point location with the white box on the B_z map. The blue and red arrows indicate the directions of the transverse components on the B_z map, plotted in gray scale. The green contours mark the locations of the polarity inversion lines (PILs) in the active region. The PILs are plotted using the routine developed by <cit.>. According to this routine, all of the pixels in the magnetogram are first convolved with a Gaussian kernel. Then, centering on each pixel of the B_z component, five consecutive pixels in both the horizontal and vertical directions are scanned, and the maximum and minimum values among those five pixels are compared in each direction. If they are found to be of opposite sign, and the magnitudes of both values exceed a noise level of 60 Gauss (set by trial and error) on either side of the two arrays, the pixel between them is flagged as a PIL pixel. In the right panel, we have plotted the magnetogram closest in time to the onset of the jet, i.e. 20:36 UT, as marked in Figure <ref>.
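For concreteness, the PIL scan just described can be written as a short Python sketch. This is an illustrative reconstruction rather than the authors' original routine; the Gaussian smoothing width sigma is our assumption, since only the five-pixel window and the 60 G threshold are quoted above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def find_pil(bz, window=5, noise=60.0, sigma=2.0):
    """Flag polarity-inversion-line pixels in a line-of-sight magnetogram."""
    bs = gaussian_filter(bz, sigma=sigma)          # smooth Bz first
    half = window // 2
    pil = np.zeros(bs.shape, dtype=bool)
    ny, nx = bs.shape
    for j in range(half, ny - half):
        for i in range(half, nx - half):
            # scan `window` consecutive pixels horizontally and vertically
            for seg in (bs[j, i - half:i + half + 1],
                        bs[j - half:j + half + 1, i]):
                # opposite polarities, both above the noise level -> PIL pixel
                if seg.max() > noise and seg.min() < -noise:
                    pil[j, i] = True
    return pil
```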
§ DEM ANALYSIS NEAR THE JET REGION
To understand the thermodynamic changes in the regions of interest (ROIs) (see Figure <ref>: the bright-point region, white box in panel (b), and the jet base region, bigger white box in panel (c)), we determine differential emission measure (DEM) distributions using six optically thin emission channels in the AIA pass-bands. These channels have contributions from ionized states of iron at wavelengths 94 Å (Fe X, Fe XVIII), 131 Å (Fe VIII, Fe XXI), 171 Å (Fe IX), 193 Å (Fe XII, Fe XXIV), 211 Å (Fe XIV), and 335 Å (Fe XVI). Their temperature response functions peak at log T [K] = (6.05, 6.85), (5.6, 7.05), 5.85, (6.2, 7.25), 6.3, and 6.45, respectively <cit.>. The AIA channels are sensitive to a wide range of plasma temperatures. It is also established that the bi-modal temperature responses of the hot AIA channels (94 Å, 131 Å, 193 Å) can introduce a systematic overestimation of the emission measure at high temperatures <cit.>. We perform the DEM analysis using 16 temperature bins over 5.4 ≤ log T ≤ 7.0, with δ log T = 0.1.
We have used the latest version of the sparse inversion code <cit.> to obtain the DEM, which accounts for the amount of plasma emission in particular temperature intervals integrated along the line of sight.
DEM(T) = n_e^2 dl/dT ,
where l is the path length along the line of sight and n_e denotes the electron density.
The total emission measure can be calculated from DEM(T),
EM = ∫ DEM(T) × dT .
The DEM(T) in every temperature bin can be utilized to obtain emission measure weighted temperature as,
T_EM = ∫ T DEM(T) dT/∫ DEM(T) dT .
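In practice, the two integrals above are evaluated as discrete sums over the temperature bins. A minimal sketch, assuming the DEM is supplied per bin on the log T grid quoted above:

```python
import numpy as np

def em_and_weighted_temperature(dem, logt_edges):
    """Discrete forms of the integrals: EM = sum(DEM dT) and
    T_EM = sum(T DEM dT) / EM, with T in kelvin."""
    t_edges = 10.0 ** np.asarray(logt_edges)
    dT = np.diff(t_edges)                          # linear-T bin widths
    t_mid = 0.5 * (t_edges[:-1] + t_edges[1:])     # bin-centre temperatures
    em = np.sum(dem * dT)
    t_em = np.sum(t_mid * dem * dT) / em
    return em, t_em

# grid used in this work: 16 bins over 5.4 <= log T <= 7.0, dlogT = 0.1
logt_edges = np.arange(5.4, 7.05, 0.1)             # 17 bin edges
```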
We have plotted the evolution of the EM-weighted temperature extracted through the DEM analysis in Figure <ref> for the whole active region. Panel (a) shows the temperature distribution over the whole region. The bright-point location is marked in panel (b). Later, in panel (c), hotter regions can be seen appearing near the location of the filament eruption. For further inspection, in Figure <ref> we plot the emission measure map on a logarithmic scale in panel (a), highlighting the ROIs with black and red boxes.
In panel (b), we plot the average EM-weighted temperature inside those two boxes up to the decay phase of the transients.
Importantly, the peak temperature near the bright-point region is higher, with a sharper ascent, than that at the jet base.
§ INITIAL MAGNETIC CONFIGURATION OF AR12280
§.§ The Non-Force-Free-Field Extrapolation Model
To investigate the onset of the jet eruption and the formation of the multiple-ribbon structure, we first obtained the initial magnetic field topology of AR12280 using the Non-Force-Free-Field (NFFF) extrapolation model <cit.>. The NFFF model is based on the principle of minimum dissipation rate (MDR), in which the total dissipation rate is minimized while the generalized helicity is held constant for a two-fluid description of the plasma <cit.>. The relaxed state carries a non-zero Lorentz force, which is used to drive the plasma in the MHD simulation later in our study. The extrapolation solves an inhomogeneous double-curl
Beltrami equation for the magnetic field 𝐁 (<cit.>, and references therein),
∇×∇×𝐁 + a_1∇×𝐁 + b_1𝐁 = ∇ψ,
where a_1 and b_1 are constants. The solenoidality of 𝐁 enforces
the scalar function ψ to obey Laplace's equation. The modified
vector 𝐁' = 𝐁 - ∇ψ satisfies the corresponding homogeneous
equation, which represents a two-fluid MHD steady state <cit.> having a solution <cit.>
𝐁' = ∑_i=1,2𝐁_i,
where each 𝐁_i is a linear force-free field satisfying
∇×𝐁_i = α_i𝐁_i,
in usual notations. The two sets of constants are related by
a_1 = - (α_1+α_2) and b_1 = α_1α_2. The magnetic field is then
𝐁 = 𝐁_3 + ∑_i=1,2𝐁_i,
where 𝐁_3 = ∇ψ is a potential field.
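As a quick numerical illustration of the constant-α condition ∇×𝐁_i = α_i𝐁_i, the sketch below checks a textbook linear force-free field; this analytic field is only an example and is not one of the component fields used in this work.

```python
import numpy as np

# B = (sin(alpha z), cos(alpha z), 0) satisfies curl(B) = alpha * B
alpha = 2.0
z = np.linspace(0.0, 1.0, 101)
bx, by = np.sin(alpha * z), np.cos(alpha * z)

# for a field depending on z only: (curl B)_x = -dBy/dz, (curl B)_y = dBx/dz
curl_x = -np.gradient(by, z)
curl_y = np.gradient(bx, z)
assert np.allclose(curl_x[1:-1], alpha * bx[1:-1], atol=1e-3)
assert np.allclose(curl_y[1:-1], alpha * by[1:-1], atol=1e-3)
```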
We apply the technique described in <cit.>. Briefly, an optimal pair of α is
computed for 𝐁_3 = 0 by minimizing the average normalized
deviation of the magnetogram transverse field B_t from its
extrapolated value b_t, given by
E_n = ( ∑_i=1^M |𝐁_t,i - 𝐛_t,i| × |𝐁_t,i|) /(∑_i=1^M |𝐁_t,i|^2 ),
where M = N × N is the total number of grid points on the
transverse plane. Additional minimization of E_n is done by using 𝐁_3 = ∇ψ as a corrector field for the obtained pair of α's. Noting that a superposition of potential fields results in a potential field only, the above procedure is iteratively repeated until E_n, plotted against the number of iterations, asymptotically saturates to a minimum.
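A minimal sketch of the E_n diagnostic follows, with a (2, ny, nx) array layout for the (B_x, B_y) transverse components assumed:

```python
import numpy as np

def normalized_transverse_deviation(bt_obs, bt_fit):
    """E_n: deviation of the extrapolated transverse field from the
    magnetogram, weighted by the observed transverse field strength."""
    diff = np.linalg.norm(bt_obs - bt_fit, axis=0)   # |B_t,i - b_t,i|
    mag = np.linalg.norm(bt_obs, axis=0)             # |B_t,i|
    return np.sum(diff * mag) / np.sum(mag ** 2)
```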
Also important is to recognize that α_1 and α_2, alone or combined, are not the only parameters that determine the field-line twist <cit.>. With the
twist τ being related to the field-aligned current density
τ = 𝐉·𝐁/|𝐁|^2 = (α_1𝐁_1 + α_2𝐁_2) · (𝐁_1 + 𝐁_2 + 𝐁_3)/|𝐁|^2,
additional to any modifications in α_1 and α_2, τ can also vary
because of changes in the component fields— including the
potential field 𝐁_3, an advantage of the NFFF extrapolations.
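The local twist of the preceding equation can be evaluated on the extrapolated grid with centred differences. A sketch in code units (constant factors such as 4π absorbed into 𝐉), with the (x, y, z) axis ordering of the arrays assumed:

```python
import numpy as np

def field_aligned_twist(bx, by, bz, dx, dy, dz):
    """tau = (J . B) / |B|^2 with J = curl(B) in code units."""
    jx = np.gradient(bz, dy, axis=1) - np.gradient(by, dz, axis=2)
    jy = np.gradient(bx, dz, axis=2) - np.gradient(bz, dx, axis=0)
    jz = np.gradient(by, dx, axis=0) - np.gradient(bx, dy, axis=1)
    b2 = bx**2 + by**2 + bz**2
    return (jx * bx + jy * by + jz * bz) / np.maximum(b2, 1e-30)
```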
We note that the values of α_1 and α_2 must be bounded above to
ensure a monotonic decay of 𝐁 with height <cit.>. If required, extra twist can be accommodated in the
extrapolated 𝐁 by varying 𝐁_1, 𝐁_2 and 𝐁_3 to better match the
magnetogram while not exceeding the maximal α_i's.
The justification for using the NFFF model lies in the following analysis. As per the study of the plasma beta, β = p/p_mag (where p and p_mag represent the plasma and magnetic pressure, respectively), over active regions in <cit.>, the force-free approximation does not hold true on the photosphere. There, in Figure 3, only the mid-corona, a region sandwiched between the chromosphere and the upper corona/solar wind acceleration region, is shown to have a β value of less than unity. One of the important arguments for the non-force-free-field approximation lies in the use of the magnetic field. <cit.> have shown that the magnetic field is not force-free at the photosphere, while it becomes so at a height of around 400 km above the photosphere. <cit.> studied the nature of the photosphere in 12 flare-producing active regions and found it not to deviate much from a force-free nature; however, this may not always hold true if there are twisted structures. <cit.> found similar arguments, but in umbral or inner-penumbral regions, using high-spatial-resolution magnetograms from the Solar Optical Telescope/Spectro-Polarimeter on board Hinode. Besides, for the available force-free extrapolation models based on optimization and MHD relaxation approaches, a finite residual Lorentz force can remain during the minimization toward an equilibrium state. As a remedy to this issue, a preprocessing technique is adopted in those algorithms, whereby the photospheric magnetic field is adjusted. The NFFF algorithm, in contrast, operates on observed magnetograms without any a priori preprocessing, keeping the original nature of the force at the photosphere. Again, the NFFF model does not claim that the whole active region is non-force-free. Furthermore, one rationale can be provided in terms of the strength of the Lorentz force, which depends on the magnetic field strength as well as the associated current density (which in turn depends on the spatial gradients of the magnetic field components). The orientation between 𝐁 and 𝐉 also contributes to the strength of the Lorentz force. Therefore, we cannot always conclude the force-freeness or non-force-freeness of a magnetic field configuration based only on the magnetic field strength.
§.§ The NFFF Extrapolated Magnetic Field Topology
To extrapolate the magnetic field over AR12280, we have used the 720 s magnetogram series of HMI. The data in this series consist of the magnetic field strength (B), azimuth, and inclination every 720 seconds with a full-disk FOV. We have reproduced an HMI SHARP-like active region patch from B, azimuth, and inclination using the routine available in the package <cit.>, with a slight modification to its coordinate system. The objective is to cover a field of view that encloses the base part of the jet sufficiently for the extrapolation while maintaining a flux balance over the cutout. The computational box has 768 uniform grid points, or ≈ 276 Mm of physical extent, in the x-direction and 512 grid points, or ≈ 184 Mm, in both the y- and z-directions. The E_n has a value of ≈ 0.36, which amounts to ≈ 36% error in the reconstruction of the transverse field, whereas B_Z is the same observed magnetic field as stated in Section <ref>. Figure <ref> depicts the variation of the magnetic field (|B|), current density (|J|), and Lorentz force (|J×B|) over geometrical height, normalized to their maximum values, in logarithmic scale over the whole computational domain. We have also calculated β inside the jet base region: for a pixel having the maximum total B of value ≈ 2790 G, it is ≈ 0.44 at the photospheric level, and for a field strength of ≈ 500 G, we found it to be 14. In conclusion, the active region does have a range of β and may not satisfy the force-freeness condition everywhere. Importantly, the structures in our extrapolated field lie over the PILs, where a sharp gradient of the magnetic field connectivity is observed.
In Figure <ref>, we have plotted the magnetic field lines using the Visualisation and Analysis Platform for Ocean, Atmosphere, and Solar Researchers (VAPOR) software <cit.>. The field lines are plotted employing the highly accurate field-line integration function of VAPOR, which relies on adaptive line integration with a fourth-order Runge–Kutta scheme and tri-linear interpolation of field values over cells in the grid, as described in Section 2.3.3 of <cit.>. In the extrapolated field, we find a flux rope near the jet base region, plotted in cyan in panel (a) of Figure <ref> and also highlighted in the chartreuse box. To confirm the structure, we have plotted a direct volume rendering of the twist parameter (T_w) near the flux rope. For this purpose, we have used the code by <cit.>, also available at (http://staff.ustc.edu.cn/ rliu/qfactor.html). The twist value is a measure of the winding of two infinitesimally close field lines about each other <cit.> and can be cast, according to <cit.>, in the form
T_w = ∫_L𝐉·𝐁/|𝐁|^2 dl.
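The twist computation amounts to tracing a field line and accumulating the integrand along it. The sketch below, assuming callables B(x) and J(x) that return the (e.g., tri-linearly) interpolated field and current density vectors at a point, combines a fourth-order Runge–Kutta field-line step with the integral above; the step size and step count are illustrative.

import numpy as np

def unit_tangent(B, p):
    """Unit vector along the magnetic field at point p."""
    v = B(p)
    return v / np.linalg.norm(v)

def rk4_step(B, x, h):
    """One fourth-order Runge-Kutta step along the field line."""
    k1 = unit_tangent(B, x)
    k2 = unit_tangent(B, x + 0.5 * h * k1)
    k3 = unit_tangent(B, x + 0.5 * h * k2)
    k4 = unit_tangent(B, x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def twist(B, J, x0, h=0.01, n_steps=2000):
    """Accumulate T_w = integral of J.B/|B|^2 dl along the traced line."""
    Tw, x = 0.0, np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        b = B(x)
        Tw += np.dot(J(x), b) / np.dot(b, b) * h   # integrand * dl
        x = rk4_step(B, x, h)
    return Tw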
This flux rope is a realization of the mini-filament found in the case of jets, as reported in <cit.>. We will discuss further dynamics in the MHD evolution in Section <ref>. Near the flux rope, we find the maximum twist to be ≈ 1.33, shown in panel (c). The flux rope is found to lie close to the bottom boundary. We have provided an inset there highlighting the location of the PIL, indicated by a magenta arrow, where the flux rope is detected. We find a bald-patch-type topology to the east of the flux rope, highlighted in the black box in panel (a). In panel (b), we have presented the 171 Å channel for the same field of view and at the same time instance as panel (a) to mark the resemblance to the overall topology of the active region. We then compute the squashing factor, or Q-value, near these regions utilizing the routine of <cit.>. We find the presence of QSLs in the vicinity of the flux rope and the bald patch. The footpoints of the bald patch are found to trace the high-Q-value regions, as is evident from panel (d). The inset in panel (d) shows a flipped side view of the orientation of field lines in the bald patch.
§ FRAMEWORK OF THE EULAG-MHD MODEL AND THE NUMERICAL SET-UP FOR MHD EVOLUTION
To track the evolution of the field lines near the jet base region and their effect on the eruption process, we have performed an MHD simulation using the EULAG-MHD model. The model solves the incompressible Navier-Stokes MHD equations under the assumption of thermal homogeneity (i.e., a thermally inactive medium) and perfect electrical conductivity <cit.>. The governing equations are as follows:
∂𝐯/∂ t + (𝐯·∇) 𝐯 = -∇ p + (∇×𝐁) ×𝐁 + τ_a/τ_ν∇^2𝐯,
∇·𝐯=0,
∂𝐁/∂ t=∇×(𝐯×𝐁),
∇·𝐁=0,
written in usual notations. The variables in the MHD equations are normalized as follows
𝐁⟶𝐁/B_0, 𝐯⟶𝐯/v_a,
L ⟶L/L_0, t ⟶t/τ_a,
p ⟶p/ρv_a^2.
The constants B_0 and L_0 are generally arbitrary, but they can be fixed using the average magnetic field strength and the size of the system. Here, v_a≡ B_0/√(4πρ_0) is the Alfvén speed and ρ_0 is the constant mass density. The constants τ_a and τ_ν represent the Alfvénic transit time (τ_a=L_0/v_a) and the viscous dissipation time scale (τ_ν= L_0^2/ν), respectively, with ν being the kinematic viscosity. Utilizing the discretized incompressibility constraint, the pressure perturbation, denoted by p, satisfies an elliptic boundary-value problem on the discrete integral form of the momentum equation (Equation <ref>); see <cit.> and references therein.
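As a small worked example of this normalization, the sketch below computes the Alfvén speed and the two time scales from illustrative cgs values (not the values used in this work); only the defining relations v_a = B_0/√(4πρ_0), τ_a = L_0/v_a, and τ_ν = L_0^2/ν are taken from the text.

import numpy as np

# Illustrative cgs values only; B0 and L0 would be fixed from the average
# field strength and the size of the system, rho0 is the constant density.
B0   = 100.0      # G
L0   = 2.76e10    # cm (~276 Mm, the x-extent of the box)
rho0 = 1.0e-15    # g cm^-3
nu   = 4.0e12     # cm^2 s^-1, kinematic viscosity

v_a    = B0 / np.sqrt(4.0 * np.pi * rho0)  # Alfven speed
tau_a  = L0 / v_a                          # Alfvenic transit time
tau_nu = L0 ** 2 / nu                      # viscous dissipation time
print(f"v_a = {v_a:.3e} cm/s, tau_a/tau_nu = {tau_a / tau_nu:.2e}")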
Here we discuss only essential features of the EULAG-MHD and refer readers to <cit.> and references therein. The model is based on the spatio-temporally second-order accurate non-oscillatory forward-in-time multidimensional positive definite advection transport algorithm MPDATA <cit.>.
Importantly, MPDATA has proven the dissipative property which, intermittently and adaptively, regularizes the under-resolved scales by simulating magnetic reconnections
and mimicking the action of explicit subgrid-scale turbulence models <cit.> in the spirit of
Implicit Large Eddy Simulations (ILES)
<cit.> scheme. Arguably, the residual numerical dissipation is then negligible
everywhere but at the sites of MRs. Moreover, this dissipation being intermittent in time and space, a quantification of it is meaningful only in
the spectral space where, analogous to the eddy viscosity of
explicit subgrid-scale models for turbulent flows, it only acts on the shortest modes admissible on the grid; in particular, in the vicinity of steep gradients in
simulated fields. Such ILESs conducted with the model have already been successfully utilized to simulate reconnections to understand their role in the coronal dynamics <cit.>. In this work, the presented computations continue to rely on the effectiveness of ILES in regularizing the under-resolved scales by the commencement of magnetic reconnections.
The initial magnetic field is supplied by the NFFF extrapolation, and the initial velocity field is set to 𝐯=0. The lateral boundaries (x and y) are kept open so that the net magnetic flux is conserved. At the bottom boundary, the z-components of 𝐁 and 𝐯 are fixed to their initial values (also termed a line-tied boundary condition), as the flux change during the transient activities is found to be minimal. The top boundary follows the same condition as the bottom, with the only exception that it is not kept fixed throughout the evolution. Also, except at the bottom boundary, all variables are calculated by linearly extrapolating from the immediate neighborhood cell values. Notably, the field and the corresponding Lorentz force values at such heights become extremely small compared to their counterparts at the lower boundaries <cit.>. As stated earlier in Section <ref>, the simulation is initially driven by the non-zero Lorentz force associated with the extrapolated magnetic field, and the flow is primarily generated by it. The resulting flow is, however, made incompressible following Equation <ref>, an assumption also adopted by <cit.>. Since our focus is to understand the onset of the jet through the topological changes, the assumption seems justifiable in the tenuous coronal medium. We remind the reader that the density is set to unity and is constant over both space and time. The computational box extent is the same as that of the extrapolation, but with δx coarsened by a factor of ≈ 2. The spatial unit length δx is 0.0052 and the time step δt is set to 1× 10^-3, while satisfying the CFL condition <cit.>. The dimensionless coefficient, or kinematic viscosity, τ_a/τ_ν≈ 2 × 10^-4 in the simulation is roughly 15 times larger than its coronal value <cit.>. To note, the parameter τ_a/τ_ν is controlled by the spatial resolution and the time step whilst satisfying the von Neumann stability criteria <cit.>. A larger τ_a/τ_ν, however, only expedites the evolution without affecting the corresponding changes in the magnetic topology <cit.>. The total simulation time is 5400 δt. To compare with the observational time, we multiplied the total simulation time by 15; it then corresponds to ≈ 40.5 minutes of the observational period.
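The CFL constraint quoted above can be checked directly from the stated grid parameters; the sketch below uses only δx = 0.0052 and δt = 10^-3 from the text, with velocities measured in units of v_a.

dx = 0.0052   # dimensionless spatial unit length
dt = 1.0e-3   # dimensionless time step

# CFL condition: no signal may cross more than one cell per step,
# i.e. |v|_max * dt / dx <= 1.
v_max_admissible = dx / dt   # = 5.2 in units of the Alfven speed
print(f"max CFL-admissible speed: {v_max_admissible:.1f} v_a")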
§ RESULTS AND DISCUSSIONS
§.§ Dynamics in the field line topologies near all ROIs
We plotted the overall evolution of the magnetic topologies near the periphery of the jet and ribbons in Figure <ref>, vis-à-vis the observed dynamics extracted from the 131 Å channel of AIA, for the whole simulation period. The animation associated with Figure <ref> is available online and covers t = 20:24 to 21:03 UT (rendered as 13 s of video). The timestamps mentioned on the 131 Å channel are near-cotemporal with the simulation snapshots. The first column, with panels (a), (c), (e), (g), and (i), represents the evolution of field lines neighboring the jet and ribbons, whereas the second column, with panels (b), (d), (f), (h), and (j), shows snapshots of the progress of the transients in the 131 Å channel. Initially, the flux rope (yellow) starts to untwist due to the action of the Lorentz force. The ambient loops in red close to the flux rope are seen to rise toward the eastern part of the active region along with the flux rope. A close correspondence between the loop dynamics and the 131 Å channel can be noticed in panels (e) and (f), where the flux rope opens up, ejecting the materials outward, similar to the 131 Å channel. Afterward, the nearly potential yellow loops in panels (g) and (h) show the post-reconnected loops. According to the standard picture of a jet in <cit.>, two types of reconnection take place in a jetting process: external reconnection, which creates a passage for the flux rope to erupt, and internal reconnection, which occurs at the flux rope footpoints. Here, the escape of the materials is captured, while the internal reconnection is captured less clearly, possibly owing to the lack of spatial resolution and the location of the flux rope close to the boundary. We do, however, see some kinks in the flux rope, as marked by a circle in panel (f) of Figure <ref>. It is important to note that the simulation is magnetically driven, whereas in reality the jets could be due to the combined effect of both magnetic and thermal changes in the surroundings.
Elaborating further on the untwisting of the flux rope and the surrounding arcade, an exclusive evolution of the flux rope is shown in Figure <ref>, compared with the filament evolution and overplotted on the 304 Å channel in the background, until the formation of the ribbons. The untwisting of the flux rope (pink) forms subsequently less twisted loops, which is also evident from the variation of the mean total current density (|𝐉|) over the simulation period, plotted in Figure <ref>. The footpoints of these loops are also seen to follow the ribbons in the later phase of the simulation, as shown in panel (c) of Figure <ref>. The directions of the Lorentz force and the flow vector are plotted in panels (b), (d), (f) of Figure <ref>, denoted by blue and yellow arrows, respectively. The absence of plasma flow in the first panel is due to the velocity being set to zero in the first time step of the MHD model, as the evolution is purely driven by the Lorentz force. The initially downward and inclined Lorentz force pushes the toroidal fluxes, leading to untwisting of the flux rope. To examine this more closely, we investigate the orientation of the Lorentz force and flow vector at a fixed location within the flux rope, as shown in Figure <ref>, for different stages of the flux rope evolution. Similar to Figure <ref>, the blue arrow depicts the Lorentz force and the pink arrow the flow vector. The evolution of the directionality of the Lorentz force (at a fixed spatial location) is seen more clearly in Figure <ref>(a) to (d). This shows that the force remains nearly parallel to the flux rope axis during the untwisting phase of the field lines. We also notice that the flow vector remains nearly parallel to the flux rope axis and normal to the magnetic field lines at that fixed location during the untwisting phase. These consistent directionalities indicate that the force and the velocity flows in the vicinity of the flux rope drive the untwisting of the flux rope field lines. We also estimate the spatial average of the twist density (|T_w|) over a selected region containing the flux rope, as shown in Figure <ref>. This also supports the trend of the twist parameter evolution in Figure <ref> in Section <ref>.
Further supplementing the argument quantitatively, we have plotted in Figure <ref> the angles between the flow vector (𝐯) and the magnetic field (𝐁) (solid line), and between 𝐯 and the Lorentz force (𝐉 × 𝐁) (dashed line). The angles are computed as cos^-1[(𝐯·𝐁)/(|𝐯||𝐁|)] and cos^-1[(𝐯·(𝐉×𝐁))/(|𝐯||𝐉×𝐁|)]. In both cases, the angles approach 90^∘ toward the end of the evolution, which is congruent with the topological evolution seen in Figure <ref>. This simultaneously explains the loss of twist and the apparent rise of the field lines in the flux rope as well as of the overarching loops there.
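These angles are straightforward to evaluate pointwise over the simulation grid; a minimal numpy sketch, assuming v, B, and J are arrays of shape (..., 3):

import numpy as np

def angle_deg(a, b):
    """Angle between two vector fields, element-wise; inputs (..., 3)."""
    dot = np.sum(a * b, axis=-1)
    norms = np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1)
    return np.degrees(np.arccos(np.clip(dot / norms, -1.0, 1.0)))

# theta_vB = angle_deg(v, B)               # flow vs magnetic field
# theta_vF = angle_deg(v, np.cross(J, B))  # flow vs Lorentz force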
In Figure <ref>, we have plotted the evolution of the bald patch with the 304 Å channel in the background in panels (a)-(d). There, yellow and blue contours in every panel represent positive and negative polarities, respectively, and highlight the footpoint connectivities of the loops of the bald patch. In the background, we have plotted the squashing factor, or Q-value, to trace the QSL dynamics in panels (a1)-(d1) for the same time stamps as the first row. In panel (b), we have marked the initial bright point location with the green box. The continuous reconnections near this bald patch, along with the flux rope eruption in the simulation, are suspected to produce the three apparent ribbons later. Also, the footpoint evolutions, in a slipping fashion, follow the bright point as well as the ribbons as they expand later. Interestingly, similar to our case, <cit.>, in their work on a non-typical flare, found co-spatiality of the flare ribbons and a bald patch near the PIL from observations. In their numerical modeling, they found a remarkable agreement between the footpoints of the bald patches and the ribbons. In our simulation, notwithstanding the initial set-up of the evolution, the reconnection near the bald patch facilitates the formation of the multiple ribbons, which eventually contributed to the jet base region alongside the untwisting of the flux rope. However, attributing the eruption of this flux rope from the jet base region to the consequent oscillation in the filament on the east side of the active region, as reported in <cit.>, is beyond the scope of the simulation presented here. Together, Figure <ref> and Figure <ref> show the onset of the jet eruption and the formation of the ribbons.
§.§ More energetics during the evolution near all the ROIs
Topological evolution aside, the energetics near the transients provide other hints of the energy release mechanism. Hence, in panel (a) of Figure <ref>, we have plotted the time variation of the volume-averaged magnetic energy density (MED; B^2/8π) and kinetic energy density (KED; v^2/2, with ρ =1) throughout the simulation period for the whole computational box. Important is the decreasing trend of the magnetic energy while the kinetic energy sharply increases at its expense during the first ≈ 4 minutes of the evolution, decreases between ≈ 5-8 minutes, and then increases until ≈ 10 minutes. Later, both evolve toward an almost quasi-steady state. Also notable is that the initial value of the KED is zero, as dictated by the numerical setup of the model in Section <ref>. In panel (b), we have focused on the base of the jet. A sharp change in the magnetic energy density can be noticed there: a decline of ≈ 13% from its original strength within the 15 minutes of evolution. Within the same period of evolution, however, the kinetic energy density increases at a rate of ≈ 16% from a motionless state, while showing a fall between ≈ 4-8 minutes due to viscous dissipation. Further, in panel (c), we noticed an interesting variation in the neighborhood of the bright point, where the transients are triggered, and observed remarkable changes in both MED and KED. The overall decrease in MED during the first ≈ 10 minutes is due to the continuous reconnections near the bald patch. This dissipation may be contributing to the increase in KED. Noteworthy is the peak in the KED after ≈ 12 minutes from the start time. The early peak in KED might be responsible for enhancing the twist in the field lines near the bright point as well as in the neighborhood of the flux rope, adding more flux to it, which can be seen from panel (b) of Figure <ref>. Subsequently, the increasing twist in the field lines assists in increasing the magnetic energy near those locations, which is discernible from the peak of magnetic energy at t ≈ 17 minutes in Figure <ref>(c), ≈ 5 minutes after that of the kinetic energy. Also important is the continuous reconnection near the bald patch, which facilitates the dissipation of magnetic energy along with the triggering of the jet. The KED over every region is dissipated by the action of viscous relaxation. The feedback sharing between the MED and KED is significant in the jet region and more so in the bright point region. This suggests a higher rate of dissipation of magnetic energy and conversion to kinetic energy near the jet base region and the bright point location. Here, we also draw the reader's attention to the efficacy of the ILES scheme of the EULAG-MHD model, which acts mainly at the primary reconnection sites while not contributing over the rest of the computational box.
Next, in Figure <ref>, we have plotted the profile of the average Lorentz force density (|𝐉× 𝐁|) for all ROIs. The profiles are denoted by a solid line for the whole AR, a dotted line for the jet region, and a dashed line for the bright point. We observe an initial fall over the first couple of minutes as |𝐉× 𝐁| is expended on the generation of the plasma flow, which can be seen in the trend of KED in panel (a) of Figure <ref>. A striking rise in |𝐉× 𝐁| is seen near the jet region. This might be due to the effect of the prior increase of KED, which pushes the field lines into the jet region while injecting more twist into the flux rope as well as the surrounding arcades. It then decreases rapidly due to the unwinding of the flux rope and dissipation near the bright point. The trend of |𝐉× 𝐁| is, however, relatively similar to that of the complete AR till ≈ 17 minutes of the evolution. The continuous reconnection near the bald patch may inhibit the sharp development of the Lorentz force near the bright point. Furthermore, in Figure <ref>, we have plotted the variation of the mean twist parameter for the three ROIs. Similar to Figure <ref>, we have maintained the same legends for the three ROIs. The twist in the whole region decreases throughout the simulation period, whereas in the case of the jet base region, it increases till ≈ 8 minutes of the total time, then decreases and remains almost constant till the end of the evolution.
Next, Figure <ref> shows the evolution of the mean free energy, again for the same ROIs, keeping the same legends as in the above figure. We have estimated the free energy in the following way:
E_free = 1/V∫_V{(B^2/8π)_Sim|_t - (B^2/8π)_Pot|_t=0} dV,
where V is the total volume of our simulation box. The first term on the R.H.S. of Equation <ref> refers to the non-potential energy density at each time step. The second term refers to the potential energy density calculated at the initial time step, as the potential magnetic field is the minimum energy state <cit.>. The profiles for the whole AR and the jet region are seen to decrease till the end of the evolution. However, for the bright point it shows an increase between ≈ 7-12 minutes of the simulation period.
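On a uniform grid the volume integral reduces to a mean over cells; a minimal sketch of the equation above, assuming B_sim and B_pot are the simulated and initial potential fields on the same grid:

import numpy as np

def free_energy(B_sim, B_pot):
    """Volume-averaged free magnetic energy density.
    B_sim: field at time t, B_pot: initial potential field;
    both of shape (nx, ny, nz, 3), in Gauss."""
    e_sim = np.sum(B_sim ** 2, axis=-1) / (8.0 * np.pi)
    e_pot = np.sum(B_pot ** 2, axis=-1) / (8.0 * np.pi)
    return np.mean(e_sim - e_pot)   # uniform grid: mean == (1/V) * integral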
To investigate the energetics further near these locations during the onset and afterward, in Figure <ref> we have provided the evolution of the volume-averaged total current density (|𝐉|), the z-component of 𝐉, |𝐉|/|𝐁|, and J_trans, or √(J_x^2+J_y^2), for all the locations stated in the above paragraphs. In panel (a), all the parameters show a monotonic decrease. Near the jet base region and the bright point location, in panels (b) and (c), J and J_z show a fall during the initial few minutes, then start picking up due to twisting of the field lines. Interestingly, J_z shows an increase in the bright point neighborhood during the peak of the eruption phase, which was also the case for the magnetic energy density in panel (c) of Figure <ref>. We observe similar growth in the average temperature over the complete activity period in the DEM-averaged temperature map, in panel (c) of Figure <ref>. This survives till the start of the decay phase of the eruption. After ≈ 30 minutes of evolution, both decrease in a monotonic fashion. However, we fail to capture any abrupt changes in |𝐉|/|𝐁| during the evolution in any of the regions, particularly near the jet base region. The reason may partially be the lower spatial resolution of the computation. However, current sheet locations are not found at the extrapolation resolution either, which was nearly twice that of the simulation. Another evident reason can be attributed to the line-tied boundary condition employed at the bottom boundary.
§ SUMMARY
In this work, we investigate the triggering of an active region jet using magnetic field extrapolation, and its subsequent magnetohydrodynamic evolution through numerical simulation. We have also analyzed the temperature evolution based on the emission measurements from observations near the transients. We then compare the temporal evolution of temperature (derived from observation) with the heating implications based on the MHD simulation. Besides, we have explored the formation of three ribbons at the base of the jet, which are atypical.
First, we used the non-force-free-field magnetic field extrapolation to obtain the initial magnetic field topology of the active region. The extrapolated field provides a non-zero Lorentz force in the region at least to a certain height in the lower atmosphere, afterward achieving a nearly force-free state in the upper atmosphere of the computational box. The top boundary of the computational box reaches ≈ 184 Mm height of the atmosphere. The overall magnetic topology agrees well with the loop morphology from the observation. Near the jet base region, we found a bald patch triggering the jetting process and multiple ribbons. Additionally, a flux rope, though low-lying, was found in the vicinity of the bright point.
To understand the onset of the jet and the formation of the three ribbons, we utilized the NFFF extrapolated field as an initial state to a data-constrained EULAG-MHD simulation with a line-tying boundary condition on the bottom boundary covering the period of jet eruption. The findings from the simulation are summarized below.
* The study provides the topological evolution of the magnetic field in the neighborhood of the jet, particularly the field line evolution near the flux rope and the bald patch. The untwisting of field lines releases the jet materials along the direction of the plasma flows and ultimately contributes toward the eruption of the jet. The onset of the jet is due to reconnection at the bald patch, which differs from the traditional jet onset mechanism.
* The simulation simultaneously focuses on the formation of multiple flare ribbons, where the same reconnection near the bald patch and the eruption of the flux rope generate multiple ribbons due to the transfer of energetic particles to the lower atmosphere. The footpoint evolution of these topologies is in good agreement with the observed ribbon locations.
* Further discussion of the variation of the magnetic and kinetic energies in different parts of the jet not only elucidates the triggering of the jetting process but also validates the credibility of the simulation performed. A detailed focus on the variation of energy densities, the active driving by the Lorentz force, and the modulation by the plasma flow over time near both the jet base and bright point regions explains the roles of magnetic and kinetic energy relaxation in facilitating the triggering and eruption process. These parametric studies highlight the importance of the onset region for understanding magnetic reconnection.
* Additional highlights are the profiles of the mean total current and the current components, which shed light on the current dissipation near those sites. A qualitative comparison with the dynamics from the DEM analysis also falls in line with the argument for the onset of the jet. We observe a hotter base and a cooler spire in the DEM analysis for the jet. The temperature and emission profiles near the bright point from the observation, the topological evolution, and the profiles of the simulated energetics comply remarkably with the onset process of the jet. This further motivates a focus on both the magnetic and thermal structure of a jet in order to understand the energy transfer through the jet as well as in the atmosphere globally. A key aspect we noticed from the evolution of the current profiles is that the onset point is a site carrying more current than the overall transient-affected area. However, observational and/or numerical modeling artifacts cannot be ignored. Thus, directly quantifying the amount of dissipation into ohmic heating at these locations may not be very accurate.
Yet, the presented simulation could not (a) capture the formation of the current sheet similar to the 2D model proposed by <cit.>, and as discussed in <cit.>, which may be due to the resolution limit and the boundary conditions adopted in our simulation, or (b) delineate the complete thermodynamics of the jet region, owing to the use of an incompressible and thermally inactive model. Jets are ubiquitous and impact not only the local atmosphere but also influence solar wind generation, depending on their regions of origin. Hence, in our future work, we aim to attempt a more realistic simulation covering these shortcomings. We will also adopt a data-driven boundary condition in the simulation to understand its impact on regularizing both the onset and the eruption process of such events, particularly focusing on the genesis of these multiple ribbons.
1cm
AIA/SDO, HMI/SDO
Solarsoft <cit.>, VAPOR/NCAR
We thank the anonymous referee for the insightful comments and suggestions which improved the manuscript considerably. The extrapolation and simulations are performed in the Bladerunner cluster located in the Center for Space Plasma and Aeronomic Research department of the University of Alabama in Huntsville. We acknowledge the use of the visualization software VAPOR (www.vapor.ucar.edu) for generating relevant graphics. Data and images are courtesy of NASA/SDO and the HMI and AIA science teams. SDO/HMI is a joint effort of many teams and individuals to whom we are greatly indebted for providing the data. We thank Ranadeep Sarkar for sharing the PIL calculation code. S.S.N. acknowledges NSF-AGS-1954503, NASA-LWS-80NSSC21K0003, and 80NSSC21K1671 grants. S.S. acknowledges support by the European Research Council through the Synergy Grant #810218 (“The Whole Sun”, ERC-2018-SyG). A.K.S. is supported by funds of the Council of Scientific & Industrial Research (CSIR), India, under file no. 09/079(2872)/2021-EMR-I. We are grateful to Qiang Hu for his valuable discussions on development of the manuscript.
RTFM: How hard are IoT platform providers making it for their developers?

Andrew Baldrian, Joseph Hallett

September 4, 2024
===========================================================================
§ INTRODUCTION
IoT devices are everywhere, but have a reputation for being insecure <cit.>.
Providing usable documentation and example code is known to help developers implement security features <cit.>; so to explore why IoT devices might be insecure we looked at the resources IoT platforms provide to help developers implement security features.
Internet connected devices are expected to exceed 29 billion units by the end of 2030 <cit.>. These IoT devices are used for a multitude of applications: for example, roadside weather monitoring, video doorbells, and internet-enabled fridge-freezers <cit.>.
Concerns have been raised around the security of IoT devices. In 2016, the Mirai botnet infected over 600 thousand devices simply by exploiting common default passwords used by IoT manufacturers <cit.>. Increasingly these devices are seen as being vulnerable <cit.>.
Multiple organisations have published recommendations or standards to minimise these vulnerabilities <cit.>.
As the use of this technology grows, the world of cyber-physical systems and IoT is also expanding with the introduction of the IIoT <cit.>, further increasing the reach of these devices.
The explosion in the number of IoT devices and the concerns around their possible security vulnerabilities has fueled the need for regulation. For example, the UK Product Security and Telecommunications Infrastructure (Product Security) regime <cit.>, or the EU Cyber Resilience Act (EU-CRA)<cit.>.
These regulations attempt to address these security and privacy concerns by defining a set of minimal requirements that an IoT device should conform to <cit.>.
While the regulations may be welcomed by industry bodies and governments <cit.>, they may also increase the cost of developing these devices, with a possible knock-on effect of increasing costs to the consumer <cit.>.
Engineers building these devices must address these requirements or IoT device manufacturers may lose access to geographical markets.
IoT devices are built using existing processors, communication peripherals, sensors and actuators.
Chip manufacturers have recognised the need for an IoT platform that can be used by IoT device manufacturers to construct these IoT devices.
These platform manufacturers deliver IoT platforms with three core components: processors, memory and communications.
To achieve the regulatory goals, device engineers need IoT platforms to also provide a set of security features and the means to develop solutions that meet the requirements of the regulations.
This paper reviews some of the leading IoT platform manufacturers, looking at the security features they provide, the documentation available to engineers, and the supporting material, such as code examples, that helps them meet these goals,
focusing on three security features necessary to implement many of the regulatory goals:
* Secure boot
* Device identity key
* Unique per device password.
This paper asks the following research questions:
RQ1. To what extent are IoT platforms providing hardware functionality for device engineers to implement basic IoT security tasks?
RQ2. How are these platform manufacturers supporting device engineers to use these features correctly?
RQ3. What additional support is provided to help device engineers take advantage of security features to meet their goals?
We find that whilst secure boot support is relatively common, functionality for other features is hidden (RQ1), and that in all cases documentation and code examples (if any are given) are relatively poor (RQ2). Cloud services or third party software may be used to provide basic security features but do not meet many of the additional regulatory requirements (RQ3).
This suggests that more needs to be done to support developers if we want to increase the adoption of security features.
§ STANDARDS AND SECURITY RECOMMENDATIONS
The diversity of IoT devices and their development across multiple markets results in multiple standards <cit.>, each covering a different market: for example, for medical devices, the Hippocratic Oath for Connected Medical Devices <cit.>, or for telecommunications, the GSMA CLP.13 - IoT Security Guidelines Endpoint Ecosystem <cit.>.
Standards such as ETSI EN 303 645 Cyber Security for Consumer Internet of Things: Baseline Requirements <cit.>, or NIST IR 8259A Core Device Cybersecurity Capability Baseline <cit.> provide a technical baseline set of requirements for device engineers to follow.
Often these standards will reference related standards that focus on information processing or privacy, for example ISO/IEC 27002 Information security, cybersecurity and privacy protection — Information security controls <cit.>.
While these standards may not be directly focused on IoT devices, they do provide requirements for data processing and data privacy.
This results in a complex set of security and privacy standards for device engineers to navigate.
To help device engineers manage these different sets of requirements, a number of organisations have published security frameworks.
For example, the IoT Security Foundation provide Secure Design Best Practice Guides and an IoT Security Assurance Framework <cit.>, providing a process that device engineers can follow to enable them to build compliant devices.
PSA Certified provides a certification program and security framework for device engineers to follow <cit.>.
The GSMA has an IoT Security Assessment with a checklist <cit.> of different requirements that device engineers should consider when designing an IoT device.
From a review of 13 IoT standards, we have selected three security features mentioned in multiple standards (Table <ref>). These security features are not sufficient for an IoT deployment, but serve as an indication of the abilities of the IoT platform.
The security features were selected to highlight the differing types of feature that device engineers may face when implementing an IoT device.
A security feature may also depend on specific hardware functionality that must be available in the IoT platform to implement it.
§.§ Secure boot
The boot process is the act of initialising the hardware and loading the firmware into memory when the IoT device is turned on.
The notion of a secure boot process requires that the firmware can be cryptographically validated as part of a root of trust <cit.> process.
This is normally implemented using a TEE, such as the Arm TrustZone <cit.>.
The boot sequence can be a single or two-stage process:
§.§.§ Load the boot loader
The processor has a built-in immutable boot loader; its job is to validate the signature of the second-stage boot loader and load it into memory.
§.§.§ Load the firmware
The second stage boot loader will then validate the signature on the firmware application, load it into memory and start the application.
The signature validation and the public keys needed for the validation are all managed by the TEE. The public keys or certificates are written into the TEE in the IoT device manufacturing process.
There may also be the possibility to securely update a public key or revoke a key.
The IoT device manufacturer retains the private key that is used to digitally sign the firmware.
As long as the private signing key remains secure, this process should guarantee that only the device manufacturer can update the firmware and therefore the firmware can be trusted.
However, some issues still remain: for example, if the device manufacturer has not removed the debug interface, it may be possible to use this interface to change or add new keys <cit.>.
The secure boot process is part of the root of trust <cit.>, proving that the software running on the device is as expected and the keys being used are certified by the PKI.
Without a secure boot process, the software on the IoT device could have been modified, could contain malware, or could be using modified keys.
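To make the verification step concrete, the sketch below shows the core check a second-stage boot loader performs. It is illustrative only: a real implementation runs in immutable ROM or the TEE, typically in C, with the public key held in tamper-resistant storage. Ed25519 from the Python cryptography package is used purely to demonstrate the pattern, and halt_or_enter_recovery is a hypothetical recovery path.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_firmware(image: bytes, signature: bytes, pubkey: bytes) -> bool:
    """Accept the firmware image only if it carries a valid signature
    from the device manufacturer's signing key."""
    try:
        Ed25519PublicKey.from_public_bytes(pubkey).verify(signature, image)
        return True
    except InvalidSignature:
        return False

# Boot flow: never hand control to unverified firmware.
# if not verify_firmware(firmware, sig, MANUFACTURER_PUBKEY):
#     halt_or_enter_recovery()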
§.§ Device identity key
In addition to each device having a unique identification number or string, a device should be able to authenticate its identity as part of an attestation process.
This requires the IoT device to have a private key that it can use to digitally sign a message.
Any entity communicating with the IoT device can use the public key of that IoT device to validate the digital signature and so can confirm the identity of the IoT device as the IoT device must be in possession of the private key.
This private key is known as the device identity key.
The key can be written into the IoT device when it is manufactured or at a later stage, known as device provisioning.
A public-private key pair is generated, with the private key being stored within the TEE, and the public key shared with multiple third parties.
A certificate for the public key can also be generated and signed by a trusted CA.
The private key is then used to generate a signature to attest ownership of the identity, for example using the PSA Attestation API <cit.>.
The process of generating the keys may be done as part of the manufacturing process, as part of device provisioning, or when a device enlists in a service.
Platform manufacturers that provide an IoT cloud platform may have a provisioning process, in which a public-private key pair is generated (possibly on the device) and the private key is stored in the TEE.
The public key certificate is then stored in the cloud service as part of the device description and is later used in the authentication process.
Other platform manufacturers provide tools that allow the IoT device manufacturer to provision a device themselves.
Platform manufacturers may also ship with a private key embedded in the microprocessor or TEE, with the public key included as part of the documentation.
The IoT device manufacturer can then register the public key for the device with cloud services.
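The underlying cryptographic pattern of provisioning and attestation can be sketched as follows. On real hardware this would go through the TEE (for example, the PSA Attestation API mentioned above) rather than application code; the Ed25519 calls here are purely illustrative.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Provisioning: generate the key pair; the private key would be stored in
# the TEE, the public key (or its certificate) registered with the service.
identity_key = Ed25519PrivateKey.generate()
public_key = identity_key.public_key()

# Attestation: sign a verifier-supplied challenge (a fresh nonce) to
# prove possession of the private key.
challenge = b"server-nonce-1234"
token = identity_key.sign(challenge)

# The verifier checks the signature against the registered public key;
# verify() raises InvalidSignature on failure.
public_key.verify(token, challenge)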
§.§ Unique per device password
An IoT device may require the user to log in to an administration interface to configure certain features.
This administration account should not use a default or well known device password, but should be secured with a unique per device password that is affixed to the device.
Users should be able to change this default password. The new password should be stored securely, adhering to standards such as NIST SP 800-63 <cit.>. The majority of users may not change their default password <cit.>, making it more important that any unique per device password follows good password practice, for example NIST SP 800-63B <cit.>, or three random words <cit.>.
Factory resetting the device should also reset the password back to the unique per device password.
The unique per device password should be generated at manufacturing time and stored in the IoT device, possibly in the TEE depending on the hardware functionality.
The unique per device password is not a cryptographic key; it should be something that a user can remember, but that an attacker would find hard to guess.
The unique per device password should be generated randomly and not be influenced by known information such as the device ID or MAC address, or follow a pattern for each device.
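A manufacturing-line generator for such passwords might look like the following sketch, which follows the 'three random words' recommendation; the short word list is a placeholder, and a real deployment would draw from a large curated list to obtain adequate entropy.

import secrets

# Placeholder list; use a large curated wordlist in production so that
# three words provide adequate entropy.
WORDLIST = ["apple", "breeze", "canyon", "dune", "ember", "fjord",
            "glade", "harbor", "isle", "juniper"]

def factory_password(n_words: int = 3) -> str:
    """Random 'three random words' password, generated at manufacturing
    time and independent of the device ID, MAC address, or any pattern."""
    return "-".join(secrets.choice(WORDLIST) for _ in range(n_words))

print(factory_password())   # e.g. "ember-canyon-isle"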
§ METHOD
To generate a list of manufacturers and IoT platforms, the initial list of manufacturers was derived from the authors' knowledge of the IoT industry.
For each platform in the initial list, an internet search was made to find competitors or similar IoT platforms from other manufacturers.
We focused on devices that may be used within the consumer market; these devices tend to be initially configured by a user connecting to the IoT device with a phone via Bluetooth.
Once the initial configuration is done, the IoT device will then connect to the Wi-Fi network.
Therefore, we selected devices that have both Wi-Fi and Bluetooth functionality.
We also limited the selection to IoT platforms that use a SoC 32bit microprocessor.
We only included devices where the platform manufacturer has provided a development or prototyping board.
These prototyping boards give IoT device engineers the ability to explore the functionality of the IoT platform and build the application without needing to design and build the IoT device PCB.
The prototyping boards come with debug and programming interfaces allowing them to be connected to a PC. This make the platform more accessible and allows the platform manufacturer to target a wider user base.
For the review we limited the platform selection to the following criteria:
* Platform manufacturers targeted a specific product for the IoT device market.
* The platform was a SoC with a 32bit microprocessor.
* The platform included both Wi-Fi and Bluetooth connectivity.
* A development or prototyping board was available to evaluate the functionality of the SoC IoT platform.
* The documentation was available without needing to be a member of a trade organisation or signing any form of NDA.
The list of IoT platforms reviewed is shown in Table <ref>. For each platform we reviewed the following:
* The prototyping board data-sheet, the SoC data-sheet and any additional or peripheral data-sheets.
* The IDE.
* Programming guides for the following:
* General security coding guidance or good practice.
* Secure boot, root of trust and cryptographical operations.
* Device identity keys and key management.
* Generating unique per device passwords, password management or storage.
* Guidance on storage of sensitive information such as passwords.
* Code examples involving any aspect of the three security features.
Having completed the initial review, we constructed the evaluation criteria and used them to evaluate each set of material.
§.§ Evaluation criteria
The review includes documentation, code examples, libraries and tools provided by the platform manufacturers.
Where the platform manufacturers have included direct links to the documentation provided by the manufacturer of a microprocessor or secure element used in the IoT platform, we have included them in the documentation set for the IoT platform manufacturer.
We have not included code examples from other third parties, blog posts, or posts on community sites (including community sites owned by the platform manufacturers).
We placed each review item into the following categories based on the criteria below:
§.§.§ Supporting Documentation
The documentation provides an introduction to the security feature, providing the reader with an understanding of the security feature and why it is important for the IoT developer to include the security feature in their implementation.
§.§.§ Standards or regulations
The documentation refers to the relevant recommendation, standard, regulation or certification for the security feature.
The documentation explains why the standard is relevant, how the standard should be applied and any additional consideration or actions that the device engineer should perform.
§.§.§ Technical detail
The documentation provides a detailed technical description of the security feature. The documentation includes implementation detail and explains the internal operation of the security feature. Technical detail include data-sheets describing how the hardware supports the security feature.
§.§.§ Developer support
The documentation provides detail of how the security feature can be implemented using the IoT platform hardware or software, including API documentation, examples of code, and configuration settings.
The documentation has a step-by-step guide to using the platform to achieve the security feature.
§.§.§ Code examples
The platform manufacturer has provided example source code of how the security feature can be implemented using the hardware.
This should also include instructions for any device configuration, building and running the example code.
This example source code should also follow security best practice.
§.§.§ Library
The platform manufacturer provides a library that supports the implementation of the security feature.
This library may include an API that the device engineer can use to gain access to the security feature.
§.§.§ Ancillary code
The platform manufacturer provides additional libraries, code or tools that can be used to support the implementation of the security feature. For example, additional key management, generation of strong passwords, or certification management.
§.§ Threats to validity
There are many manufacturers providing platforms for the IoT market, often with multiple variations of each product, some targeting niche markets or very specific requirements.
Manufacturers may also provide a more componentised approach where different peripherals are added to a base computing platform, for example adding a Wi-Fi co-processor to an existing product to add Wi-Fi connectivity.
IoT device manufacturers may choose to take this componentised approach if they do not plan to use all the features of the SoC or have other constraints such as power consumption, cost, etc.
Given the size and complexity of this market and the number of different variations, this review has only taken a thin slice of the possible set of all IoT device components.
We have used a selection criterion that only includes SoC with both Wi-Fi and Bluetooth, and where the manufacturer provides prototyping or development boards.
These criteria limited the number of devices in the review and may skew the findings as we may have missed some interesting examples.
Acar et al. (2019) <cit.> have shown that developers often use question-and-answer sites to find solutions for security questions; the study showed that developers using the documentation have more secure implementations but take longer writing the feature.
Following from this work, we focused on the platform manufacturers' documentation and example code and did not review corresponding developer community sites. Platform manufacturers such as Arduino or Raspberry Pi focus more on the educational or hobbyist market and may also rely more on the user community creating content. For this paper we have not reviewed the community content.
We focused the review on three example security recommendations from the standards and regulations.
We did not look at issues such as data security and privacy or other hardware features such as flash encryption; this may limit the generalisation of this work.
§ RESULTS
Table <ref> contains the evaluation of IoT platforms.
The platform manufacturers take different approaches to helping IoT device manufacturers and their engineers; we categorise the approaches as:
* Cloud services deployment
* OS and security features support
* Data-sheets
* Limited security features
These approaches are discussed below.
§.§ Development support approaches
§.§.§ Cloud services deployment
In this category the platform manufacturers attempt to remove some of the security burden by providing a platform that includes: the SoC hardware, a device OS, security features and cloud infrastructure that manages the devices and provides features for the IoT device engineers. For example the Particle Platform-as-a-Service in Figure <ref>.
The platform manufacturers provide a root of trust via the cloud infrastructure, using a provisioning process to generate and securely store keys.
The IoT device engineers use the integrated development environment to build and sign the application firmware. The focus from the platform manufacturers is on their secure development process and providing a secure platform that device engineers can simply use.
Third party software vendors are also providing IoT cloud services for example, AWS IoT Core <cit.>. These services aim to allow any IoT device to use the services, with some platform manufacturers providing examples of how to integrate their device with the third party service.
For some IoT device engineers there are clear advantages for this approach and it can be useful for prototype development and small volume production.
Platform manufacturers such as Arduino and Particle have offerings in this space, but we can see other platform manufacturers moving in this direction.
Using these services introduces supply chain issues: consider an IoT device collecting sensitive personal information. This information may be processed on the IoT cloud platform and stored using a cloud service provider; both actions may take place in different geo-political locations.
This can have both security and data protection ramifications that IoT device manufacturers will need to consider <cit.>.
The cost to a IoT device manufacturer could be significant for a large-scale deployment. IoT device manufacturers may also need to consider issues such as vendor lock-in or over reliance on a single cloud service provider.
§.§.§ OS and security features support
The platform provides the SoC, a real time OS, such as FreeRTOS <cit.> or zephyr <cit.> and a TEE.
The security functionality is implemented within the TEE using an environment such as TrustedFirmware-M <cit.>.
The aim is to provide an ecosystem that has all the parts that a IoT device engineer requires to build secure IoT devices.
This strategy relies on good documentation for the device engineers and access to example code that demonstrates the correct use of the security functionality.
All of the platform manufacturers have some technical detail normally delivered as data-sheets describing at least one of the security features.
The level of detail contained within the technical documentation can vary significantly.
This documentation is intended to describe the functionality of the IoT platform. Additional material such as developer support and example code is needed to support a device engineer using the hardware functionality to deliver a security feature.
Figure <ref> has some examples taken from technical documentation.
These examples show different levels of detail that can be found within the technical documentation; Figure <ref> has an example of an overview of the security functionality. More information or example code may be needed for device engineers to take advantage of this functionality. In contrast, Figure <ref> is the beginning of a detailed explanation of the key management unit and how a device engineer can use the security functionality.
Four of the platform manufacturers provided developer support documentation and two had code examples for the secure boot process. Figure <ref> is an example of the documentation explaining how the hardware is used to achieve a security feature, while Figure <ref> is the start of the documentation for example code containing a security application template.
The results were similar for device identity keys: five platform manufacturers providing developer support documentation and four having code examples.
We found no developer support documentation or code examples for unique per device passwords.
We also found that some of the code examples used hard-coded credentials or defaulted to a non-secure option.
§.§.§ Data-sheets
Some platform manufacturers provide the hardware for secure development but leave it to the IoT device engineers to implement the real time OS and other features.
This provides the device engineers with the flexibility to implement what is needed for their specific requirements.
The device engineers are reliant on the technical documentation provided by the platform manufacturers, this is usually in a data-sheet.
These data-sheets do not normally provide developer guides or example code. Eight of the platform manufacturers had data-sheets that described the operation and configuration of the secure boot process. Five of them had data-sheets describing device identity keys, with nothing covering unique per device passwords.
§.§.§ Limited security features
Platform manufacturers such as Raspberry Pi may utilise existing development platforms for the IoT market. These existing platforms may not contain a TEE and so cannot directly support some of the security requirements for this review.
§.§ Secure boot
The secure boot process is supported by seven out of the nine platform manufacturers.
The documentation varied widely from one manufacturer having almost no detail, to three providing developer support documentation and example code. One of these also supplied a port of a code library to support the secure boot process. Three provided an introduction to the secure boot process and why it was important. Three manufacturers introduced a standard, in all cases this was PSA Certified <cit.> as the secure boot process is part of the level one platform certification. Platform manufacturers can use PSA Certified to demonstrate that the hardware meets an industry security certification. We did not see any reference to regulations such as the EU Cyber Resilience Act (EU-CRA) <cit.>.
§.§ Device identity key
Device identity keys are supported by seven of the platform manufacturers; again we see a wide difference in the documentation for this security feature and how this security feature is presented to device engineers.
§.§.§ Cloud services
Platform manufacturers that supply cloud services demonstrate the use of a device identity key as part of the cloud provisioning process.
For example, Particle implements the integration with the IoT device and the Particle cloud platform <cit.>.
Arduino delivers a simple sketch that is used to provision the IoT device with the Arduino cloud platform <cit.>.
Microchip generates a key using a PUF as part of the manufacturing process. They include multiple examples of the use of device identity keys with third party IoT cloud services such as AWS IoT Core <cit.> or Google cloud <cit.>.
§.§.§ Data sheets
Five platform manufacturers supply technical details of secure boot or device identity key, without providing additional developer support.
This technical documentation focuses on the hardware functionality, not how to implement services like provisioning or attestation. See Figure <ref> for an example of a data-sheet explaining the use of digital signatures using the private key to sign a message.
Figure <ref> also has the overview of the device attestation process;
the device engineer must bridge the gap between the functional definition of the hardware and software support provided by the IoT platform and the security feature to be implemented, in this case device attestation using a device identity key.
§.§ Unique per device password
Some of the platform manufacturers did provide more general security-related development guidelines or certification programs such as PSA Certified.
These certification programs provide a framework and checklist for the IoT device engineers.
If the IoT platform is also certified, that can then support the certification of the IoT device.
Implementing the unique per device password recommendations necessitates a number of requirements:
* A mechanism to generate a unique per device password, at manufacturing time, that can be stored in the IoT device firmware and affixed to the outside of the device.
* A mechanism to securely store the unique per device password. The password stored in a protected or tamper-proof zone, possibly in the TEE.
* A mechanism to override the device password with a user defined password. This also requires that the user password is stored securely and meets good password recommendations such as NIST SP 800-63B <cit.>.
* A mechanism to reset the device password back to the original unique per device password as part of the factory reset process.
Implementing these requirements entails understanding the IoT platform security functionality, the recommendations or regulations covering the IoT device and how the customer will use the device.
IoT device engineers may wish to take advantage of the IoT platform security functionality, such as the storage of sensitive information or the use of cryptographic functions, to implement some of the features above.
We did not find any evidence from the platform documentation for recommendations or support for providing unique per device passwords.
Some platform manufacturers do supply tools that run at manufacturing time to write keys or other data; these could be used as a template for writing a unique per device password at manufacturing time, as sketched below.
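As a rough sketch of what such a tool could look like, the fragment below generates and records a unique per device password at manufacturing time. The protected store is mocked with a dictionary standing in for the TEE-backed or one-time-programmable storage a real tool would target; all names here are ours for illustration, not a vendor API.

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

# Mock of the platform's protected storage; in practice this would be a
# TEE-backed key/value store or an OTP region written on the production line.
_protected_store: dict[str, str] = {}

def generate_device_password(length: int = 16) -> str:
    """Draw a unique per device password from a CSPRNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def provision_device(device_serial: str) -> str:
    """Generate and persist the factory password; the same value is printed on the label."""
    password = generate_device_password()
    _protected_store[f"{device_serial}/factory_password"] = password
    return password

def factory_reset(device_serial: str) -> str:
    """Restore the original unique per device password, as a factory reset requires."""
    return _protected_store[f"{device_serial}/factory_password"]

print(provision_device("SN-000123"))
```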
Some of the platforms reviewed do present hardware support for storing sensitive data, but we found no discussion around how this could be used other than for key storage.
Naiakshina et al. (2019) <cit.> asked 42 freelance developers to store a password; only 17 (40%) provided a secure solution.
Acar et al. (2017) <cit.> looked at the use of cryptographic library APIs. A simple interface was helpful, but libraries should also include ancillary functions to help developers use them correctly.
We may conclude that if we want IoT device engineers to securely store and manage passwords, we should add these ancillary functions to existing cryptographic libraries that the TEE devices already provide.
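As a concrete illustration of such an ancillary function, the following is a minimal sketch of a password storage helper that a platform library could ship. The scrypt parameters follow commonly published guidance rather than any reviewed platform's documentation, and on a real device the salt and digest would live in the TEE-backed store rather than in application memory.

```python
import hashlib
import hmac
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) for storage instead of the plaintext password."""
    salt = secrets.token_bytes(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong password", salt, digest)
```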
§.§ Device engineer support
Eight of the nine platform manufacturers are providing hardware support for some basic IoT security tasks. Five platforms include a TEE, with three others having mechanisms to support security functionality such as the secure boot process or device identity keys. Platform manufacturers are also providing cryptographic libraries or hardware acceleration, and other features such as encrypted firmware. The security features provided do vary across manufacturers. Certification programs such as PSA Certified are driving some level of conformity, but this standard may not be suitable for all IoT platform manufacturers.
The technical detail, developer support and code examples provided by different platform manufacturers varies significantly across manufacturers. Five of the nine platform manufacturers provide developer support or code examples.
Platform manufacturers focus on the hardware that they deliver and the security functionality of that hardware.
Platform manufacturers do not address the needs of device engineers that must implement regulatory requirements such as unique per device password.
§.§.§ Additional support provided
Platform manufacturers take different approaches to providing additional support for device engineers to take advantage of the security functionality that they deliver.
Three platform manufacturers provide cloud solutions that attempt to shield the device engineers from the implementation details of the security features.
These solutions offer features like secure boot, device identity, and provisioning within the operations of the cloud service. Two platform manufacturers provide additional functionality using libraries or APIs that make use of the hardware. For example, the TrustedFirmware-M <cit.> Initial Attestation Service Integration Guide is implemented on the Arm TrustZone <cit.>. The API delivers a set of security features that can be used by device engineers without the need to deal directly with the low level hardware. Some manufacturers take a componentised approach, providing discrete hardware components that an IoT device manufacturer would select to best meet their individual market or technical needs.
This additional support does not extend past the security primitives to cover regulatory requirements or IoT standards recommendations.
§ RELATED WORK
IoT security has been a research focus area for some time; Alqassem & Svetinovic (2014) <cit.> proposed a taxonomy for IoT security and privacy requirements.
As part of a systematic review, Mohanty et al. (2021) <cit.> categorised the IoT security challenges, focusing on IoT architecture and protocols.
Mishra & Pandya (2021) <cit.> reviewed the security challenges and solutions including intrusion detection systems (IDS), while Mohanta et al. (2020) considered security challenges and solutions using machine learning and blockchain technology.
Much of the research looks at the technical security challenges for IoT devices; Schiller et al. (2022) <cit.> take a wide view of the overall IoT security landscape, concluding that IoT security remains a concern given issues like limited resources or the need for fast time to market, though there are now devices on the market that can make the use of IoT devices more secure.
Pinto & Santos (2019) <cit.> reviewed the introduction of the Arm TrustZone <cit.> as a TEE within the IoT market. Ling et al. (2021) <cit.> looked at the use of TrustZone for IoT devices and demonstrated how it can be used for trusted boot and remote attestation.
Chowdhury et al. (2021) <cit.> considered the issues being faced by developers when writing security-related functionality; they defined a number of developer challenges, behaviors, and interventions as well as a set of tropes (something that is considered true, but is not).
These tropes include the notion that the developer is an expert.
They may be an expert in software engineering, but that does not necessarily make them an expert in security, cryptography or privacy.
Yskout et al. (2015) <cit.> showed that development teams did not seem to perform any better when following security design patterns and that both teams suffered from a lack of detail, for example needing to manage keys when writing encrypted storage features, suggesting the need for supporting features or ancillary code.
When using privacy by design, Senarath & Arachchilage (2018) <cit.> noted that developers struggled to map privacy requirements to engineering practice.
Naiakshina et al. (2019) <cit.> asked freelance developers to store a password; even when prompted to do this securely, 38% failed to provide a secure solution.
Hallett et al. (2021) went on to show that when the participants were asked to write a specification for the function to securely store a password before writing the code, only 3% (two participants) followed the current best practice guidelines.
Acar et al. (2017) <cit.> concluded that for cryptography libraries to reduce developer errors, it was not sufficient for the library to be simple to use.
The libraries also needed to include ancillary supporting functions, documentation and example code.
Acar et al. (2016) <cit.> demonstrated that where developers get their security solutions from can affect the security of the resulting code.
Using sources such as StackOverflow can result in less secure solutions.
§ DISCUSSION
The regulatory and standards landscape for IoT device engineers is complicated.
Streamlining these standards is made difficult given the breadth of application areas that IoT devices are deployed in, resulting in multiple standard bodies and differing requirements.
Organizations such as PSA Certified <cit.> provide a level of guidance to IoT device manufacturers and a certification checklist to demonstrate that the IoT device and development process meets a minimal standard.
Platform manufacturers are now including a TEE within the IoT device.
These TEEs deliver a set of security primitives such as enabling a secure boot root of trust process, secure key and certificate management, and cryptographic functions.
Platform manufacturers are taking a number of different paths to address the needs of IoT device manufacturers:
* Delivering cloud-hosted services that integrate with the IoT platform providing device management, provisioning and attestation.
* Integration with third-party IoT cloud services.
* Integrating with existing libraries for security functionality, such as Mcuboot <cit.> or TrustedFirmware-M <cit.>.
* Providing development tools for signing images or writing keys or certificates into the IoT device.
There remains a significant gap between the hardware security features provided by a TEE and the functionality needed to follow the standards recommendations or to meet the requirements of the regulations.
Beyond the use of these security primitives, there is even less guidance for the IoT device engineers to implement regulatory requirements such as password management.
§.§ Standards landscape
The regulatory landscape for IoT device engineers is complex, with potentially multiple different regulations to consider, depending on the market, location of deployment, and the data being collected.
Regulators are addressing the security concerns around IoT devices by laying down a set of legal requirements for IoT device engineers to follow (Figure <ref>).
It is the responsibility of the IoT device engineer to convert these regulatory requirements into solutions that can be deployed in IoT devices within a given set of constraints including cost, power, and size.
Mapping the regulatory requirements to existing standards or recommendations is also challenging.
The ENISA Cyber Resilience Act Requirements Standards Mapping report <cit.> attempts to do this mapping for the EU Cyber Resilience Act (EU-CRA) <cit.>.
They conclude that there is a need for a single unified set of IoT recommendations, and that the mapping of regulatory requirements to existing standards either results in gaps or the need to refer to multiple standards.
Looking at one example:
“(3) On the basis of the risk assessment referred to in Article 10(2) and where applicable, products with digital elements shall:
(a) be delivered with a secure by default configuration, including the possibility to reset the product to its original state;” <cit.>
ENISA defines sub-requirements <cit.>:
* In case default configurations foresee an initial/default credential, the same should use a complex and randomly chosen password, different for each product
* In case default configurations cover cybersecurity items, they should adopt a reasonable level of security for each item
* The default configuration should be placed in a non-erasable memory
* A function to reset the product configuration to the default one should be implemented
The sub-requirements are mapped to other existing standards:
* ISO/IEC 27002 Information security, cybersecurity and privacy protection — Information security controls <cit.>
* ETSI EN 303 645 Cyber Security for Consumer Internet of Things: Baseline Requirements <cit.>
* ISO/IEC 18031:2011 Information technology — Security techniques — Random bit generation <cit.>.
From the example above, we can trace the requirement for a unique per device password from the regulation through the standards recommendations. Multiple standards would need to be consulted to implement this security feature.
Platform manufacturers provide no support for implementing this security feature.
The PSA <cit.> is an initiative led by Arm to define a security architecture for IoT devices.
The security model provides a set of industry best practices for IoT platforms, defining a set of minimum security features that an IoT device should conform to.
The PSA Certified program <cit.> enables IoT device manufacturers to certify their product as following this standard.
Three of the platform manufacturers reviewed are part of the PSA Certified program <cit.>.
The PSA Certified level 1 questionnaire <cit.> also includes additional requirements to map the certification to the EU Cyber Resilience Act (EU-CRA) <cit.>.
Additionally, the level 1 requirements are also mapped to ETSI EN 303 645 <cit.>, as can be seen in Figure <ref>.
This again demonstrates the complexity of mapping regulations to standards and certifications.
This makes it difficult for both consumers and manufacturers to determine if a given regulation or standard is achieved by an IoT device.
§.§ Effect of regulation
When reviewing the security functionality for these IoT platforms, we see that the platform manufacturers are providing the security primitives: TEE, root of trust, device identity keys, and cryptography functions.
The regulations take a broad view, looking at the security needs for the life cycle of the IoT device, including the development process, user data, vulnerability disclosure, and security updates, etc.
These regulations define a set of processes that should be followed by the IoT device manufacturing organisations, as well as security features that will need to be implemented on the IoT device by device engineers.
Regulations may also place additional requirements on the IoT hardware, and increase the need for more processing or storage.
For example, the need to log security events (such as a failed login attempt), or to interact with authorisation protocols.
The implementation of a regulatory requirement may need the use of these security primitives, for example encrypting data at rest or storing passwords in the TEE.
We have seen from the example of unique per device passwords that minimal support is provided above the use of the security primitives to deliver these regulatory requirements.
There is a gap between the security provision from the platform manufacturers and the regulatory requirements.
Device engineers will need to implement these regulatory requirements themselves with the possibility that this will introduce other vulnerabilities.
§.§.§ Cloud deployment
The IoT cloud deployments have the ability to remove much of the security detail from device engineers.
These platforms provide security services for root of trust, device identity, provisioning, and encrypted communication, as well as application features such as storage, message queues, and events.
We have seen from the review that the cloud services are delivering these security primitives.
There remains a gap between these security primitives and the regulatory requirements.
The cloud service providers have an opportunity to extend their offering to add additional regulatory functionality.
§.§ Recommendations
§.§.§ Standards harmonisation
Harmonising the current complexity of existing standards and recommendations would reduce confusion and misunderstanding.
Standard harmonisation could provide a single set of functional requirements that IoT manufacturers could address, and encourage a unified certification program.
Harmonisation could also minimise the mapping from government regulations to multiple different IoT standards.
Ultimately this may be beneficial to platform manufacturers, IoT device manufacturers, and to the consumer, potentially providing a single certification mark that a consumer would recognize as indicating a certified secure device.
ETSI EN 303 645 <cit.> has been identified as a possible starting point for a single harmonised standard across the EU <cit.>.
The PSA Certified program <cit.> is attempting to take multiple standards and regulations into account.
Continuing this work could make progress towards a harmonised standard, but work would be needed from multiple other regulators and standard bodies.
§.§.§ Secure example code
We find that developer support remains inconsistent across different platform manufacturers, with minimal step-by-step guides and code examples to implement a security feature.
We would encourage platform manufacturers to include more code examples.
These code examples should follow security best practices, and should not default to low or non-secure implementations.
Platform manufacturers could also include code examples to implement specific standards recommendations.
For example, platform manufacturers that are PSA Certified could also include code examples for PSA Certified requirements, such as ID storage, password best practices, or security configuration.
This would aid device engineers in achieving PSA certification for their IoT device. It would also encourage the device engineers to follow the platform manufacturers' security coding practices and potentially provide a library of standards-based security functionality.
This would make it easier for device engineers to gain certification, which is also in the best interests of the platform manufacturers, as this will help to drive the selection of the IoT platform.
§.§.§ Software, and auxiliary code
Platform manufacturers are already taking advantage of third-party or open source software such as FreeRTOS <cit.> or TrustedFirmware-M <cit.> to provide functionality for IoT devices.
This model could be extended to deliver functionality for regulatory requirements, for example by providing a library for password management and storage.
The IoT platform manufacturer would be responsible for porting a set of libraries to their platform and keeping them up to date.
While this is additional work for the platform manufacturer, there is an advantage: it would reduce the development cost of a new IoT device, making the IoT platform commercially more attractive and potentially increasing their market share.
From a security perspective, if these libraries are open to security scrutiny or certification, they may reduce the opportunity for additional vulnerabilities when implementing the regulatory requirements, and so have a positive impact on IoT device security.
§.§ Future work
Further research is needed to map the existing IoT platform security features to the relevant regulations, to highlight the remaining areas that will need to be addressed.
This could also include a mapping of regulations to any existing software solutions. These existing solutions could be adopted by platform manufacturers to support device engineers when implementing specific regulations.
This paper has only looked at a small selection of security features from the standards. Further research could look to extend this beyond the baseline security recommendations and also consider how device engineers are supported when needing to implement privacy features.
Research has already looked at the usability of cryptographic APIs <cit.>; this research could be extended to the usability of the cryptographic APIs provided by platform manufacturers and TEEs.
A further study could be conducted with device engineers to understand how they translate the technical documentation found in data sheets to security regulation requirements. Potential research questions could be:
* What issues do device engineers find when translating technical data sheets to security features?
* Where do device engineers go to get examples of good security practices?
* What could be done to make this translation easier?
Researchers have previously looked at the software development process and testing frameworks. This research could be extended to consider the following questions:
* How well does IoT device development integrate into existing software development processes?
* Do existing testing frameworks extend to IoT development?
§ CONCLUSIONS
We find that platform manufacturers are providing features for device engineers to implement basic IoT security features. These security primitives include: TEE, the secure boot process, device identity keys, and cryptography functions (RQ1).
The platform manufacturers do not go beyond the security primitives to support device engineers to deliver regulatory requirements such as unique per device passwords.
The level of detail found in the documentation for these features varies depending on the manufacturer. Most manufacturers focus on the hardware functionality, and not how a device engineer can use the functionality to deliver a specific security feature.
Six of the nine IoT platforms reviewed provided some additional support to help device engineers to use these security features correctly (RQ2). These manufacturers provided developer documentation, step-by-step guides or example code.
Some of the sample code included hard-coded credentials or defaulted to a less secure or non-secure implementation.
Three of the platform manufacturers have a PSA Certification <cit.> for the device we reviewed.
We find no evidence of discussion of IoT standards such as ETSI EN 303 645 <cit.> or regulations such as the EU Cyber Resilience Act (EU-CRA) <cit.> within the platform manufacturers' technical documentation or developer support material.
The platform manufacturers take different approaches to support device engineers to take advantage of basic security features (RQ3):
* Cloud services deployment
* OS and security features support
* Data-sheets
* Limited security features
We find that even when the hardware supports a security feature, there is limited guidance for IoT device engineers. Device engineers are required either to use the cloud implementation or to become security experts.
We find a complex standards and regulatory landscape for platform manufacturers and IoT device engineers to navigate.
Platform manufacturers provide no support for other regulatory requirements, such as unique per device passwords.
We conclude that the platform manufacturers, regulators and standards bodies will need to do considerably more to support IoT device engineers if we wish to improve the security outcomes of IoT devices.
Resilient Two-Time-Scale Local Stochastic Gradient Descent for Byzantine Federated Learning
Amit Dutta and Thinh T. Doan. Amit Dutta is with the Electrical and Computer Engineering Department at Virginia Tech, email: [email protected]. Thinh T. Doan is with the Aerospace Engineering and Engineering Mechanics Department at the University of Texas at Austin, email: [email protected]. This work was partially supported by NSF-CAREER Grant No. 2339509 and AFOSR YIP Grant No. 420525.
September 9, 2024
===================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
We study local stochastic gradient descent methods for solving federated optimization over a network of agents communicating indirectly through a centralized coordinator. We are interested in the Byzantine setting where there is a subset of f malicious agents that can observe the entire network and send arbitrary values to the coordinator to disrupt the performance of the non-faulty agents. The objective of the non-faulty agents is to collaboratively compute the optimizer of their respective local functions under the presence of Byzantine agents. In this setting, prior works show that the local stochastic gradient descent method can only return an approximation of the desired solution due to the impacts of Byzantine agents. Whether this method can find an exact solution remains an open question. In this paper, we address this open question by proposing a new variant of the local stochastic gradient descent method. Under conditions similar to those considered in the existing works, we show that the proposed method converges exactly to the desired solutions. We provide theoretical results to characterize the convergence properties of our method; in particular, the proposed method converges at an optimal rate 𝒪(1/k) in both strongly convex and non-convex settings, where k is the number of iterations. Finally, we present a number of simulations to illustrate our theoretical results.
Federated optimization, Byzantine fault-tolerance, two-time-scale methods.
§ INTRODUCTION
We consider a distributed optimization framework where there are N agents communicating with a single coordinator. This framework is also popularly known as federated optimization <cit.>. Associated with each agent i is a function q^i:ℝ^d→ℝ. The goal of the agents is to find a point x^⋆ that optimizes their aggregate local functions.
Besides traditional machine learning applications <cit.>, federated optimization now also finds application in networked systems, e.g., the internet of vehicles <cit.>, industrial control systems <cit.>, and wireless systems <cit.>.
One of the key advantages of the federated optimization framework is its ability to implement optimization algorithm updates locally at the agents without necessitating the transmission of raw data to a centralized coordinator.
This localized data processing not only reduces the communication overhead between agents and the central server but also introduces an element of privacy preservation.
One of the main challenges in federated learning is the vulnerability of the system to malicious attacks, where some agents in the network may fail or have their updates manipulated by an external entity. Such malicious agents can have detrimental impacts on the performance of other agents and, if not addressed, can lead to catastrophic failures of the entire network. For example, malicious attacks have been identified as the most critical problem in wireless spectrum sensing <cit.>.
In this paper, we are interested in studying the so-called distributed local stochastic gradient descent (SGD) method for solving federated optimization. Our focus is to characterize the performance of this method when there is a (small) number of Byzantine malicious agents in the network. In this setting, Byzantine agents can observe the entire network and send any information to the centralized coordinator to corrupt the output of local SGD. Due to the impact of Byzantine agents, our goal now is to solve the optimization problem that only involves the honest agents. In particular, we consider the setting where there are up to f faulty Byzantine agents with unknown identities. We then seek to solve an exact fault-tolerance problem defined as follows.
Exact fault-tolerance problem:
Let ℋ be the set of honest agents with |ℋ| ≥ N-f. Then an algorithm is said to have exact fault-tolerance if it allows all the non-faulty agents to compute
x_ℋ^⋆∈arg min_x∈ℝ^d∑_i ∈ℋq^i(x).
Note that if the number of Byzantine agents is large, i.e., f > |ℋ|, it is impossible to solve problem (<ref>). We therefore consider the following 2f-redundancy condition, which is necessary and sufficient for solving problem (<ref>) exactly <cit.>.
The set ℋ, with |ℋ| ≥ N-f, is said to have 2f-redundancy if for any subset 𝒮⊂ℋ with |𝒮| ≥ N -2f
arg min_x∈ℝ^d∑_i ∈𝒮q^i(x) = arg min_x∈ℝ^d∑_i ∈ℋq^i(x).
We note that the 2f-redundancy condition arises naturally in many problems, including hypothesis testing <cit.>, and distributed learning <cit.>. For example, in distributed learning this condition is satisfied when all agents have identical objective functions, the so-called homogeneous setting.
In this paper, we will investigate the convergence of local SGD in solving problem (<ref>) under this 2f-redundancy condition. In <cit.>, the authors show that local deterministic gradient descent can find x_ℋ^⋆ when using a comparative elimination (CE) filter in its update to address the impacts of Byzantine agents. However, their approach cannot be extended to the case of local SGD, i.e., when each agent only has access to stochastic samples of ∇ q^i(·). In this stochastic setting, the work in <cit.> can only return a point within a ball around x_ℋ^⋆ whose size depends on the ratio f/|ℋ|. The high-level idea behind this issue is that the CE filter cannot simultaneously address the impacts of Byzantine agents and the local stochastic errors due to gradient sampling. Our focus is therefore to address this open question. Specifically, we will propose a new variant of local SGD that allows each agent to find x_ℋ^⋆ exactly in both strongly convex and non-convex settings. Our main contribution is summarized as follows.
Main contribution. We propose a new two-time-scale variant of local SGD for solving problem (<ref>) in the Byzantine setting under the 2f-redundancy condition. We will show that the proposed algorithm can return an exact solution x_ℋ^⋆ of problem (<ref>). In addition, we will establish theoretical results characterizing the convergence rate of our algorithm when the underlying objective function satisfies either strong convexity or the non-convex Polyak-Łojasiewicz (PŁ) condition. In both cases, our algorithm converges to the optimal solutions at an optimal rate 𝒪(1/k), where k is the number of iterations. Finally, we will provide a few numerical simulations to illustrate the correctness of our theoretical results.
§.§ Related work
According to the existing literature, there are various Byzantine fault-tolerant aggregation schemes for distributed optimization and learning. These include multi-KRUM <cit.>, coordinate-wise trimmed mean (CWTM) <cit.>, geometric median-of-means (GMoM) <cit.>, minimum-diameter averaging (MDA) <cit.>, and Byzantine-robust stochastic aggregation (RSA) <cit.> filters. However, it is important to note that these schemes do not guarantee exact fault-tolerance even in a deterministic setting with 2f-redundancy, unless additional assumptions are made regarding the objectives of the honest agents. The work by <cit.> shows that it is possible to achieve exact fault-tolerance in the deterministic setting and approximate fault-tolerance in the stochastic setting under 2f-redundancy. <cit.> proposed RESilient Averaging of Momentum (RESAM), which presents a unified Byzantine fault-tolerant framework with accelerated gradient descent based on the previously mentioned methods. They also established finite-time convergence under some additional assumptions. Although their results hold for non-convex objectives, they clearly proved that such generalization and acceleration cannot be applied to the CE aggregation scheme. Recently, <cit.> explored the impact of Byzantine agents and stragglers, i.e., agents that experience significant delays in their updates, on solving distributed optimization problems under redundancy of the cost functions. Furthermore, the authors examined Byzantine fault-tolerant min-max distributed optimization problems under similar redundancy conditions in <cit.>. Both works demonstrate that the authors' approach can only achieve approximate solutions. We also want to note the relevant work in <cit.>, where the authors study the approximate fault-tolerance problem under more relaxed conditions on the Byzantine agents. Our work in this paper builds on the works of <cit.>: based on the 2f-redundancy condition, we propose a two-time-scale variant of local SGD for both the strongly convex and non-convex settings. To address the approximate convergence to the optimal solution in the Byzantine-free case, mini-batch SGD has been studied in <cit.>, <cit.> in order to reduce the dependency on the variance of the stochastic gradients. There, the authors further proposed an accelerated version of mini-batch SGD to further reduce the impact of gradient noise in both IID and non-IID sampling cases. However, no such improvements have been established in the literature when a given network is under attack from Byzantine agents.
Another relevant line of literature is the recent work on the complexity of two-time-scale stochastic approximation; see, for example, <cit.>. It has been observed that the two-time-scale approach can be used to either analyze or design better distributed algorithms in different settings, e.g., delays <cit.>, quantization <cit.>, and cluster networks <cit.>. In this paper, we will leverage this idea to design a new resilient local SGD method for the Byzantine setting.
§ RESILIENT TWO-TIME-SCALE LOCAL SGD
In this section, we present the proposed algorithm, namely, resilient two-time-scale local SGD, for solving problem (<ref>) when there are up to f Byzantine agents. Our algorithm is formally stated in Algorithm <ref>, where each honest agent i∈ℋ maintains two local variables x^i,y^i to estimate the optimal value x_ℋ^⋆ and its local gradient ∇ q^i(·), respectively. On the other hand, the server maintains a global variable x̅ to estimate the average of the iterates sent by the agents. At any global iteration k, each agent i∈ℋ implements 𝒯 two-time-scale SGD steps to update its variables (cf. Eqs. (<ref>) and (<ref>)), where ∇ q^i(·;Δ^i) is a stochastic sample of ∇ q^i(·). These two updates are implemented using two different step sizes α_k≥β_k, i.e., the update of y^i is implemented at a "faster" time scale than that of x^i, explaining the name two-time-scale SGD. In particular, each agent first estimates its local gradient from the samples, which is then used to update its local variable toward the optimal solution x_ℋ^⋆. After 𝒯 steps, each honest agent sends its last iterate x_k,𝒯^i to the server. We note that Byzantine agents can send arbitrary vectors to the server. To address this issue, the server implements a comparative elimination (CE) filter, studied in <cit.>. In particular, the server first sorts the distances between the agents' estimates and its average in ascending order (Step 8). The server then eliminates the f estimates with the largest distances (Step 9), i.e., since it does not know the identity of the Byzantine agents, the server can only eliminate any "suspiciously large values". The server then computes a new average based on the estimates of the remaining N-f agents in Eq. (<ref>).
In Eq. (<ref>), when α_k = 1 the proposed algorithm reduces to the local SGD with the CE filter studied in <cit.>. However, as we will show in this paper, by properly choosing α_k and β_k our algorithm will guarantee exact convergence to x_ℋ^⋆ even under the Byzantine setting, which cannot be achieved by the method in <cit.>. In particular, we choose the step sizes to satisfy β_k≤α_k≤ 1 as follows
α_k = C_α/1+h+k, β_k = C_β/1+h+k,
where C_β≤ C_α s.t. α_k≤ 1 and L𝒯β_k≤ 1, ∀ k≥0.
We will demonstrate in Theorems <ref> and <ref> that, with an appropriate selection of the parameter h > 1, this choice of step sizes is essential for the algorithm to achieve an optimal convergence rate of 𝒪(1/k).
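To make the round structure concrete, the following NumPy sketch implements one reading of Algorithm <ref>: 𝒯 local two-time-scale SGD steps per agent, followed by the CE filter at the server. The constants C_α, C_β, h and the quadratic local losses are illustrative only and do not satisfy the conditions required by our theorems; the Byzantine agents are modeled as protocol-following agents with corrupted samples, as in our simulations, although in general they may send arbitrary vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
N, f, d, T = 50, 8, 10, 3            # agents, Byzantine agents, dimension, local steps
C_alpha, C_beta, h = 1.0, 0.5, 10.0  # illustrative step-size constants (C_beta <= C_alpha)
x_star = rng.normal(size=d)

def grad_sample(i, x):
    """Stochastic gradient of q^i(x) = 0.5*||x - X^i||^2 for one fresh sample X^i."""
    center = x_star if i < N - f else 2.0 * x_star  # Byzantine data centered at 2*x_star
    return x - (center + rng.normal(size=d))

x_bar = np.zeros(d)   # server iterate
y = np.zeros((N, d))  # fast-time-scale gradient estimates, kept across rounds
for k in range(500):
    alpha = C_alpha / (1.0 + h + k)
    beta = min(alpha, C_beta / (1.0 + h + k))
    x_local = np.tile(x_bar, (N, 1))  # every agent restarts from the server average
    for t in range(T):
        for i in range(N):
            g = grad_sample(i, x_local[i])
            x_next = x_local[i] - beta * y[i]        # slow update uses the current tracker
            y[i] = (1.0 - alpha) * y[i] + alpha * g  # fast update of the gradient tracker
            x_local[i] = x_next
    # Comparative elimination: keep the N - f estimates closest to the current average.
    dist = np.linalg.norm(x_local - x_bar, axis=1)
    x_bar = x_local[np.argsort(dist)[: N - f]].mean(axis=0)

print(np.linalg.norm(x_bar - x_star))  # should be small after 500 rounds
```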
§ TECHNICAL ASSUMPTIONS AND PRELIMINARIES
We present here the main technical assumptions and some preliminaries that will facilitate the development of our main results. First, we consider the following two main assumptions, where 𝒫_k,t is the filtration that includes all the random variables generated by Algorithm <ref> up to the t-th local step of iteration k.
The random variables Δ^i_k, ∀ i and k≥ 0, are i.i.d. and there exists a positive constant σ such that we have ∀ x ∈ℝ^d
𝔼[∇ q^i(x,Δ^i_k,t)|𝒫_k,t] = ∇ q^i(x),
𝔼[‖∇ q^i(x,Δ^i_k,t)-∇ q^i(x)‖^2|𝒫_k,t] ≤σ^2.
For each i∈ℋ, q^i has Lipschitz continuous gradients, i.e., there exists a constant L>0 such that
‖∇ q^i(y) - ∇ q^i(x)‖≤ L‖y-x‖, ∀ x,y ∈ℝ^d.
In the sequel, we will assume that these two assumptions always hold. For notational convenience, we define
q_ℋ(x) = 1/|ℋ|∑_i ∈ℋq^i(x).
Let e_k,t^i be the local gradient estimate error at client i defined as
e_k,t^i = y_k,t^i - ∇ q^i (x̅_k),
and W_k be the average of the local gradient estimate errors defined as
W_k = 1/|ℋ|∑_i ∈ℋ‖e^i_k,0‖^2.
We denote by ℬ_k the set of potential Byzantine clients and ℋ_k the set of non-faulty clients in ℱ_k in step 9 in Algorithm <ref>. Finally, we denote by 𝒳^⋆_ℋ and 𝒳^⋆_i the sets of minimizers of q_ℋ and q^i, respectively. Then, under Assumption <ref> and the 2f-redundancy property, one can show that <cit.>
⋂_i∈ℋ𝒳^⋆_i = 𝒳^⋆_ℋ.
Using the notation above, we rewrite (<ref>) as follows: ∀ i∈ℋ
x^i_k,𝒯 = x_k -β_k∑_t=0^𝒯-1e^i_k,t -𝒯β_k∇ q^i(x̅_k),
which since |ℱ_k| = |ℋ| and |ℬ_k| = |ℋ\ℋ_k| yields
x_k+1 =1/|ℱ_k|∑_i ∈ℱ_kx^i_k,𝒯
=1/|ℋ|[∑_i∈ℋx^i_k,𝒯 + ∑_i∈ℬ_kx^i_k,𝒯 - ∑_i∈ℋ\ℋ_kx^i_k,𝒯],
= x̅_k - 𝒯β_k∇ q_ℋ(x̅_k) - β_k/|ℋ|∑_i ∈ℋ∑_t=0^𝒯-1e^i_k,t
+ 1/|ℋ|[∑_i∈ℬ_k(x^i_k,𝒯 -x_k) - ∑_i∈ℋ\ℋ_k(x^i_k,𝒯-x_k)].
We next consider the following result on W_k, where for ease of exposition its proof is presented in the Appendix.
For all k≥ 0 we have
𝔼[W_k+1]
≤(1-α_k/2 + 126(L+1)^2𝒯^2β_k)𝔼[W_k]
+(14L^3𝒯^2α_kβ_k + 300L^4𝒯^2β^2_k/α_k)𝔼[x̅_k-x^⋆_ℋ^2]
+ 2σ^2𝒯α^2_k + 96L^2𝒯^2σ^2β^2_k
+ 6L^2𝒯^2σ^2α^2_kβ^2_k + 96L^2𝒯^2σ^2fα_kβ^2_k/|ℋ|·
§ MAIN RESULTS
In this section, we present the main results of this paper, where we will study the convergence properties of Algorithm <ref> in two settings, namely, strong convexity and PL conditions.
§.§ Strongly convex condition
We consider the following assumption on q_ℋ(x).
The objective function q_ℋ(.) is strongly convex, i.e., there exists a constant μ∈ (0,L] s.t.
(y-x)^T(∇ q_ℋ(y) - ∇ q_ℋ(x)) ≥μ‖y-x‖^2, ∀ x,y ∈ℝ^d.
Assumption <ref> guarantees a unique solution x_ℋ^⋆ for problem (<ref>). However, it does not require each local function q^i to be strongly convex, meaning each q^i may have multiple minimizers. Note that under the 2f-redundancy condition, x_ℋ^⋆ lies in the intersection of the minimizer sets 𝒳_i^⋆ of the q^i.
Our main result in this section is based on the following lemma, whose proof is presented in the Appendix.
For all k ≥ 0 we have
𝔼[x̅_k+1-x^⋆_ℋ^2]
≤(1-35μ𝒯β_k/18 + 17L𝒯β_k|ℬ_k|/3|ℋ|+ 103L^2𝒯^2β^2_k)𝔼[x̅_k-x^⋆_ℋ^2]
+ 72L^2𝒯α_kβ_k/μ𝔼[x̅_k-x^⋆_ℋ^2] + 78𝒯β_k/μ𝔼[W_k]
+ 4𝒯^2σ^2α_kβ^2_k +
48𝒯σ^2α^2_kβ_k/μ + 32𝒯^2σ^2fα^2_kβ^2_k/|ℋ|·
To study the convergence of Algorithm <ref> under Assumption <ref>, we consider the following Lyapunov function V_k
V_k = x̅_k - x^⋆_ℋ^2 + W_k.
Let {x^i_k} and {y^i_k} be generated by Algorithm <ref> for 𝒯>1.
Suppose the step sizes α_k and β_k in (<ref>) satisfy
α_k≤μ/(8)^4(L+1)^4𝒯, β_k≤μ/(12)^4L^2𝒯,
β_k/α_k≤μ/(14)^4(L+1)^4𝒯·
Then we have
𝔼[V_k+1]
≤(1-23μ𝒯β_k/12 + 17L𝒯β_k|ℬ_k|/3|ℋ|)𝔼[V_k]
+ 150(L+1)^3𝒯^2σ^2α^2_k/μ + 128(L+1)^2𝒯^2σ^2fα^2_k/|ℋ|·
In addition, if the following condition holds
|ℬ_k|/|ℋ| = f/(N-f) ≤μ/(3L),
and C_α, C_β and h are chosen as
C_α≥(84)^4(L+1)^4/6μ^2, C_β = 72/μ𝒯
h ≥max{(8)^4(L+1)^4𝒯C_α/μ; (72)^4L^2/18μ^2},
then we obtain the following.
𝔼[V_k+1] ≤h^2𝔼[V_0]/(1+h+k)^2 + 150(L+1)^3𝒯^2σ^2C^2_α/μ(1+h+k)
+ 128(L+1)^2𝒯^2σ^2fC^2_α/(1+h+k)|ℋ|·
Our result in (<ref>) implies that the sequence {x̅_k} generated by Algorithm <ref> converges to x^⋆_ℋ in the mean-square sense at a rate 𝒪(1/k), which is the same rate as in the Byzantine-free setting. In addition, our convergence complexity bound depends on the ratio f/|ℋ|, which is similar to the result in <cit.>.
By adding (<ref>) into (<ref>) we have
𝔼[V_k+1]
≤(1-35μ𝒯β_k/18 + 17L𝒯β_k|ℬ_k|/3|ℋ| + 103L^2𝒯^2β^2_k)𝔼[x̅_k-x^⋆_ℋ^2]
+ (300L^4𝒯^2β^2_k/α_k + 72L^2𝒯α_kβ_k/μ)𝔼[x̅_k-x^⋆_ℋ^2]
+ 14L^3𝒯^2α_kβ_k𝔼[x̅_k-x^⋆_ℋ^2]
+(1-α_k/2 + 126(L+1)^2𝒯^2β_k + 78𝒯β_k/μ)𝔼[W_k]
+ 2𝒯σ^2α^2_k + 4𝒯^2σ^2α_kβ^2_k + 42𝒯^2σ^2α_kβ_k/μ
+ 6L^2𝒯^2σ^2α^2_kβ^2_k + 96L^2𝒯^2σ^2β^2_k
+ 32𝒯^2σ^2fα^2_kβ^2_k/|ℋ| + 96L^2𝒯^2σ^2fα_kβ^2_k/|ℋ|
≤(1-35μ𝒯β_k/18 + 17L𝒯β_k|ℬ_k|/3|ℋ| + 103L^2𝒯^2β^2_k)𝔼[x̅_k-x^⋆_ℋ^2]
+ (300L^4𝒯^2β^2_k/α_k + 72L^2𝒯α_kβ_k/μ)𝔼[x̅_k-x^⋆_ℋ^2]
+ 14L^3𝒯^2α_kβ_k𝔼[x̅_k-x^⋆_ℋ^2]
+(1-α_k/2 + 126(L+1)^2𝒯^2β_k + 78𝒯β_k/μ)𝔼[W_k]
+ 150(L+1)^3𝒯^2σ^2α^2_k/μ + 128(L+1)^2𝒯^2σ^2fα^2_k/|ℋ|
≤(1-69μ𝒯β_k/36 + 17L𝒯β_k|ℬ_k|/3|ℋ|)𝔼[V_k]
+ (-μ𝒯β_k/36 + 103L^2𝒯^2β^2_k + 14L^3𝒯^2α_kβ_k)𝔼[x̅_k - x^⋆_ℋ^2]
+(72L^2𝒯α_kβ_k/μ + 300L^4𝒯^2β^2_k/α_k)𝔼[x̅_k - x^⋆_ℋ^2]
+(-α_k/2 + 69μ𝒯β_k/36 + 126(L+1)^2𝒯β_k + 78𝒯β_k/μ)𝔼[W_k]
+ 150(L+1)^3𝒯^2σ^2α^2_k/μ + 128(L+1)^2𝒯^2σ^2fα^2_k/|ℋ|,
where the second inequality is obtained using β_k≤α_k≤ 1 and μ≤ L.
Using μ≤ L we express the above inequality as
𝔼[V_k+1]
≤(1-69μ𝒯β_k/36 + 17L𝒯β_k|ℬ_k|/3|ℋ|)𝔼[V_k]
+(-μ𝒯β_k/36 +103L^2𝒯^2β^2_k)𝔼[x̅_k-x^⋆_ℋ^2]
+ (86(L+1)^4𝒯^2α_kβ_k/μ + 300(L+1)^4𝒯β_k/μ)𝔼[x̅_k-x^⋆_ℋ^2]
+(-α_k/2 +206(L+1)^4𝒯β_k/μ)𝔼[W_k]
+ 150(L+1)^3𝒯^2σ^2α^2_k/μ + 128(L+1)^2𝒯^2σ^2fα^2_k/|ℋ|
≤(1-69μ𝒯β_k/36 + 17L𝒯β_k|ℬ_k|/3|ℋ|)𝔼[V_k]
+ 150(L+1)^3𝒯^2σ^2α^2_k/μ + 128(L+1)^2𝒯^2σ^2fα^2_k/|ℋ|,
where in the last inequality we use (<ref>) to have
0 ≤μ𝒯β_k/36 -103L^2𝒯^2β^2_k - 86(L+1)^4𝒯^2α_kβ_k/μ
-300(L+1)^4𝒯^2β^2_k/α_k,
0≤α_k/2 - 206(L+1)^4𝒯β_k/μ·
Next, to show (<ref>) we observe that the conditions (<ref>) satisfy those in (<ref>). Thus, we have
𝔼[V_k+1]
≤(1-𝒯β_k(23μ/12 - 17L|ℬ_k|/3|ℋ|))𝔼[V_k]
+ 150(L+1)^3𝒯^2σ^2α^2_k/μ + 128(L+1)^2𝒯^2σ^2fα^2_k/|ℋ|.
≤(1-μ𝒯β_k/36)𝔼[V_k]
+ 150(L+1)^3𝒯^2σ^2α^2_k/μ + 128(L+1)^2𝒯^2σ^2fα^2_k/|ℋ|,
where the last inequality is due to (<ref>)
23μ/12 - 17L|ℬ_k|/3|ℋ|≥μ/36·
Using β_k = 72/μ𝒯(1+h+k) we obtain from above
𝔼[V_k+1] ≤(1-2/1+h+k)𝔼[V_k]
+ 150(L+1)^3𝒯^2σ^2C^2_α/μ(1+h+k)^2 + 128(L+1)^2𝒯^2σ^2fC^2_α/(1+h+k)^2|ℋ|,
which by multiplying both sides by (1+h+k)^2 gives
(1+h+k)^2𝔼[V_k+1]
≤ (h+k)^2𝔼[V_k]
+ 150(L+1)^3𝒯^2σ^2C^2_α/μ + 128(L+1)^2𝒯^2σ^2fC^2_α/|ℋ|
≤ h^2𝔼[V_0]+ 150(L+1)^3𝒯^2σ^2C^2_α(k+1)/μ
+ 128(L+1)^2𝒯^2σ^2fC^2_α(k+1)/|ℋ|·
By dividing both sides of the above inequality by (1+h+k)^2 we immediately obtain (<ref>), which concludes our proof.
§.§ Non-convex functions satisfying the PŁ condition
In this section, we present the results for the case where q_ℋ(x) satisfies the so-called PŁ condition presented below.
There exists a constant μ>0 s.t.
1/2‖∇ q_ℋ(x_k)‖^2≥μ (q_ℋ(x_k) - q_ℋ(x^⋆_ℋ)) ≥μ^2/2‖x_k - x^⋆_ℋ‖^2.
Next, we consider the following lemma, whose proof is presented in the Appendix.
We have for all k≥ 0
𝔼[ q_ℋ(x_k+1) - q_ℋ(x_k)]
≤(-5𝒯β_k/6 + 2L𝒯β_k|ℬ_k|/|ℋ| + 110L^3𝒯^2β^2_k/μ^2)𝔼[∇ q_ℋ(x̅_k)^2]
+50𝒯β_k𝔼[W_k] + 30𝒯σ^2α^2_k + 16L𝒯^2σ^2α_kβ^2_k + 16𝒯^2σ^2fα^2_k/|ℋ|·
For our result, we consider the following Lyapunov function
V_k = (q_ℋ(x̅_k)-q_ℋ(x^⋆_ℋ)) + W_k.
Let Assumption <ref> hold. Let α_k and β_k be given in (<ref>) and satisfy
α_k ≤μ^2/(6)^4(L+1)^3𝒯, β_k≤μ^2/(12)^4L^3𝒯,
β_k/α_k≤μ^2/(12)^4(L+1)^4𝒯^2·
Then for all k ≥ 0 we have
𝔼[V_k+1]
≤(1 -9μ𝒯β_k/6 + 4L𝒯β_k|ℬ_k|/|ℋ|)𝔼[V_k]
+150(L+1)^2𝒯^2σ^2α^2_k + 112(L+1)^2𝒯^2σ^2fα_k^2/|ℋ|·
Further, let C_α, C_β and h satisfy
C_α≥(12)^5(L+1)^4𝒯/μ^3, C_β = 12/μ𝒯
h ≥max{(6)^4(L+1)^3𝒯C_α/μ^2; (12)^5L^3/μ^3},
and the condition (<ref>) hold. Then we obtain
𝔼[V_k+1]
≤h^2𝔼[V_0]/(1+h+k)^2 + 150𝒯σ^2C_α^2/(1+h+k) + 112𝒯^2σ^2fC_α^2/(1+h+k)|ℋ|·
Our result in (<ref>) implies that q_ℋ(x̅_k) converges to the optimal function value q_ℋ(x^⋆_ℋ) at a rate 𝒪(1/k), which is the same rate as in the Byzantine-free setting. In addition, our convergence complexity bound depends on the ratio f/|ℋ|, which is similar to the result in <cit.>.
Using Lemmas <ref> and <ref> we obtain
𝔼[ q_ℋ(x_k+1)- q_ℋ(x_k)] + 𝔼[W_k+1]
≤(-5𝒯β_k/6 + 2L𝒯β_k|ℬ_k|/μ|ℋ| + 110L^3𝒯^2β^2_k/μ^2) 𝔼[∇ q_ℋ(x̅_k)^2]
+ (14L^3𝒯^2α_kβ_k/μ^2+ 300L^4𝒯^2β^2_k/μ^2α_k)𝔼[∇ q_ℋ(x̅_k)^2]
+(1-α_k/2 + 78β_k𝒯/μ + 126(L+1)^2𝒯^2β_k)𝔼[W_k]
+30𝒯σ^2α^2_k + 16L𝒯^2σ^2α_kβ^2_k + 2𝒯σ^2α_k^2
+ 96L^2𝒯^2σ^2β^2_k + 6L^2𝒯^2σ^2α^2_kβ^2_k
+ 16𝒯^2σ^2fα^2_k/|ℋ| + 96L^2𝒯^2σ^2fα^2_k/|ℋ|,
which, since β_k≤α_k≤ 1, gives
𝔼[ q_ℋ(x_k+1)- q_ℋ(x^⋆_ℋ)] + 𝔼[W_k+1]
-𝔼[ q_ℋ(x_k)- q_ℋ(x^⋆_ℋ)]
≤(-5𝒯β_k/6 + 2L𝒯β_k|ℬ_k|/μ|ℋ| + 110L^3𝒯^2β^2_k/μ^2) 𝔼[∇ q_ℋ(x̅_k)^2]
+ (14L^3𝒯^2α_kβ_k/μ^2+ 300L^4𝒯^2β^2_k/μ^2α_k)𝔼[∇ q_ℋ(x̅_k)^2]
+(1-α_k/2 + 78β_k𝒯/μ + 126(L+1)^2𝒯^2β_k)𝔼[W_k]
+150(L+1)^2𝒯^2σ^2α^2_k + 112(L+1)^2𝒯^2σ^2fα_k^2/|ℋ|·
By the definition of V_k in (<ref>) we have from the relation above
𝔼[V_k+1]
≤(1-9μ𝒯β_k/6+4L𝒯β_k|ℬ_k|/|ℋ|)𝔼[V_k]
+(9μ𝒯β_k/6-4L𝒯β_k|ℬ_k|/|ℋ|)𝔼[q_ℋ(x̅_k)-q_ℋ(x^⋆_ℋ)]
+(-5𝒯β_k/6 + 2L𝒯β_k|ℬ_k|/μ|ℋ| + 110L^3𝒯^2β^2_k/μ^2) 𝔼[∇ q_ℋ(x̅_k)^2]
+ (14L^3𝒯^2α_kβ_k/μ^2+ 300L^4𝒯^2β^2_k/μ^2α_k)𝔼[∇ q_ℋ(x̅_k)^2]
+(-α_k/2 + 78β_k𝒯/μ + 126(L+1)^2𝒯^2β_k)𝔼[W_k]
+(9μ𝒯β_k/6-4L𝒯β_k|ℬ_k|/|ℋ|)𝔼[W_k]
+150(L+1)^2𝒯^2σ^2α^2_k + 112(L+1)^2𝒯^2σ^2fα_k^2/|ℋ|,
which by Assumption <ref> gives
𝔼[V_k+1]
≤(1-9μ𝒯β_k/6 + 4L𝒯β_k|ℬ_k|/|ℋ|)𝔼[V_k]
+(-𝒯β_k/12+ 110L^3𝒯^2β^2_k/μ^2)𝔼[∇ q_ℋ(x̅_k)^2]
+ (14L^3𝒯^2α_kβ_k/μ^2+ 300L^4𝒯^2β^2_k/μ^2α_k)𝔼[∇ q_ℋ(x̅_k)^2]
+(-α_k/2 + 9μ𝒯β_k/6 + 204(L+1)^4𝒯^2β_k/μ^2)𝔼[W_k]
+150(L+1)^2𝒯^2σ^2α^2_k + 112(L+1)^2𝒯^2σ^2fα_k^2/|ℋ|
≤(1-9μ𝒯β_k/6 + 4L𝒯β_k|ℬ_k|/|ℋ|)𝔼[V_k]
+(-𝒯β_k/12+ 110L^3𝒯^2β^2_k/μ^2)𝔼[∇ q_ℋ(x̅_k)^2]
+ (14L^3𝒯^2α_kβ_k/μ^2+ 300L^4𝒯^2β^2_k/μ^2α_k)𝔼[∇ q_ℋ(x̅_k)^2]
+(-α_k/2 +206(L+1)^4𝒯^2β_k/μ^2)𝔼[W_k]
+150(L+1)^2𝒯^2σ^2α^2_k + 112(L+1)^2𝒯^2σ^2fα_k^2/|ℋ|
≤(1-9μ𝒯β_k/6 + 4L𝒯β_k|ℬ_k|/|ℋ|)𝔼[V_k]
+150(L+1)^2𝒯^2σ^2α^2_k + 112(L+1)^2𝒯^2σ^2fα_k^2/|ℋ|,
where in the last inequality we use (<ref>) to have
0≤𝒯β_k/12- 110L^3𝒯^2β^2_k/μ^2 - 14L^3𝒯^2α_kβ_k/μ^2 - 300L^4𝒯^2β^2_k/μ^2α_k,
0 ≤α_k/2 - 206(L+1)^4𝒯^2β_k/μ^2·
To show (<ref>), we use (<ref>) into the relation above to obtain
𝔼[V_k+1] ≤(1-μβ_k𝒯/6)𝔼[V_k]
+150(L+1)^2𝒯^2σ^2α^2_k + 112(L+1)^2𝒯^2σ^2fα_k^2/|ℋ|,
which by using β_k = 12/μ𝒯(1+h+k) gives
𝔼[V_k+1] ≤(1-2/1+h+k)𝔼[V_k]
+ 150𝒯σ^2C_α^2/(1+h+k)^2 + 112𝒯^2σ^2fC_α^2/(1+h+k)^2|ℋ|.
Multiplying both sides of the above inequality by (1+h+k)^2 gives
(1+h+k)^2𝔼[V_k+1]
≤ (h+k)^2𝔼[V_k] + 40𝒯σ^2C_α^2 + 112𝒯^2σ^2fC_α^2/|ℋ|
≤ h^2𝔼[V_0]+ 150𝒯σ^2C_α^2(k+1) + 112𝒯^2σ^2fC_α^2(k+1)/|ℋ|,
which, when dividing both sides by (k + 1 + h)^2, gives (<ref>).
§ SIMULATIONS
In this section, we present a few simulations to illustrate the convergence of Algorithm <ref> and the correctness of our theoretical results. For our simulations, we consider a network of N = 50 agents. Each non-faulty agent i has access to 100 noisy observations of a 10-dimensional vector x^⋆. Specifically, the sample set X^i comprises 100 samples distributed as X^i_j = x^⋆ + Z_j, where Z_j ∼𝒩(0, I_d). On the other hand, a Byzantine faulty agent j mimics the behavior of an honest agent but with different samples. Each sample of a Byzantine agent j is given by X^j_l = 2 x^⋆ + Z_l, where Z_l ∼𝒩(0, I_d), similar to the honest agents. This implies that while honest agents send information corresponding to Gaussian noisy observations of x^⋆, the Byzantine agents send information corresponding to Gaussian noisy observations with the same variance but centered at 2 x^⋆.
We will simulate Algorithm <ref> under both the strongly convex and PŁ conditions, where we set the number of local steps 𝒯 = 3. In each case, we vary the number of Byzantine agents, f = 4, 8, 10, to study the convergence of our algorithm as this number changes. For the strongly convex setting, we consider the local cost function of the i-th agent as
q^i(x; X^i) = 1/2‖x - X^i‖^2.
For the PŁ condition, we consider the following local cost function
q^i(x; X^i) = 1/2‖x - X^i‖^2 + 1/2sin^2(‖x - X^i‖).
In this case, the global function ∑_i = 1^50 q^i(x; X^i) is a non-convex function that satisfies the Polyak-Łojasiewicz (PŁ) condition.
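For reference, the per-sample gradients used in these two experiments can be written in a few lines. The PŁ gradient below assumes the norm placement shown above, with sin^2 applied to ‖x - X^i‖; either function can replace the quadratic loss in the algorithm sketch given earlier.

```python
import numpy as np

def grad_quadratic(x, sample):
    """Per-sample gradient of q(x) = 0.5*||x - X||^2."""
    return x - sample

def grad_pl(x, sample):
    """Per-sample gradient of q(x) = 0.5*||x - X||^2 + 0.5*sin(||x - X||)^2."""
    u = x - sample
    r = np.linalg.norm(u)
    if r == 0.0:
        return u
    # d/dr [0.5*sin(r)^2] = sin(r)*cos(r) = 0.5*sin(2r); the chain rule adds u/r.
    return u * (1.0 + 0.5 * np.sin(2.0 * r) / r)
```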
Our simulation results are shown in Figs. <ref> and <ref> for strongly convex and PŁ conditions, respectively.
First, our simulations show that the optimization and gradient estimation errors converge to zero, as expected. Second, the rates of convergence appear to be 𝒪(1/k) in both cases. Finally, the algorithm converges more slowly as the number of faulty agents increases, agreeing with our theoretical bounds in Theorems <ref> and <ref>.
§ CONCLUSION
In this paper, we propose a new two-time-scale variant of the local SGD method to solve the exact Byzantine fault-tolerance problem under the 2f-redundancy condition.
Our theoretical analysis demonstrates that our approach effectively mitigates the impact of noise from stochastic gradients and the interference of Byzantine agents. Notably, our algorithm achieves an optimal rate 𝒪(1/k) when the underlying objective function satisfies either strong convexity or the PŁ condition, similar to that of the Byzantine-free setting.
§ APPENDIX: PROOFS OF LEMMAS <ref>–<ref>
We now proceed to present the analysis of Lemmas <ref>–<ref>. First, we rewrite (<ref>) as follows
x̅_k+1 = x̅_k - 𝒯β_k∇ q_ℋ(x̅_k) - β_k/|ℋ|∑_i ∈ℋ∑_t=0^𝒯-1e^i_k,t + ℰ_x.
where
ℰ_x = 1/|ℋ|[∑_i∈ℬ_k(x^i_k,𝒯 -x_k) - ∑_i∈ℋ\ℋ_k(x^i_k,𝒯-x_k)].
Next, we will consider the following results that will be used in our analysis later.
For each honest client i ∈ℋ we have
x^i_k,t+1-x_k^2≤ 2L^2t^2β_k^2x_k-x^⋆_ℋ^2 + 2tβ_k^2∑_l=0^te^i_k,l^2.
By (<ref>) and (<ref>), we have
x^i_k,t+1-x_k^2 = ∑_l=0^t(x^i_k,l+1-x^i_k,l)^2
≤ t∑_l=0^tx^i_k,l+1-x^i_k,l^2 = β_k^2t∑_l=0^ty^i_k,l^2
= β_k^2t∑_l=0^te_k,t^i + ∇ q^i(x̅_k)^2
≤ 2β_k^2t∑_l=0^t(e_k,t^i^2 + ∇ q^i(x̅_k)^2)
= 2β_k^2t∑_l=0^t(e_k,t^i^2 + ∇ q^i(x̅_k -∇ q^i(x_ℋ^⋆)^2)
≤ 2L^2t^2β_k^2x_k-x^⋆_ℋ^2 + 2tβ_k^2∑_l=0^te^i_k,l^2,
where in the last equality we use (<ref>) to have ∇ q^i(x_ℋ^⋆) = 0 and the last inequality is due to Assumption <ref>. This concludes our proof.
For each honest client i ∈ℋ we have
𝔼[e_k,t+1^i^2]
≤ (1-α_k)𝔼[e^i_k,t^2] + α_kL^2𝔼[x^i_k,t-x̅_k^2] + α^2_kσ^2.
Using (<ref>) and (<ref>) we consider
e^i_k,t+1 = (1-α_k)y^i_k,t + α_k∇ q^i(x^i_k,t;Δ^i_k,t) - ∇ q^i(x_k)
= (1-α_k)e^i_k,t + α_k(∇ q^i(x^i_k,t;Δ^i_k,t) - ∇ q^i(x^i_k,t))
+α_k(∇ q^i(x^i_k,t)-∇ q^i(x_k)),
which by using Assumptions <ref> and <ref> gives
𝔼[e^i_k,t+1^2|𝒫_k,t]
= (1-α_k)^2e^i_k,t^2 + α^2_k∇ q^i(x^i_k,t)-∇ q^i(x_k)^2
+ α^2_k𝔼[∇ q^i(x^i_k,t;Δ^i_k,t) - ∇ q^i(x^i_k,t)^2|𝒫_k,t]
+ 2α_k(1-α_k)(∇ q^i(x^i_k,t)-∇ q^i(x_k))^Te^i_k,t
+ 2 α_k(1-α_k)𝔼[(∇ q^i(x^i_k,t,Δ^i_k,t) - ∇ q^i(x^i_k,t))|𝒫_k,t]^Te^i_k,t
+2α^2_k𝔼[(∇ q^i(x^i_k,t,Δ^i_k,t) - ∇ q^i(x^i_k,t))|𝒫_k,t]^T
× (∇ q^i(x^i_k,t)-∇ q^i(x_k)
= (1-α_k)^2e^i_k,t^2 + α^2_k∇ q^i(x^i_k,t)-∇ q^i(x_k)^2
+ α^2_k𝔼[∇ q^i(x^i_k,t;Δ^i_k,t) - ∇ q^i(x^i_k,t)^2|𝒫_k,t]
+ 2α_k(1-α_k)(∇ q^i(x^i_k,t)-∇ q^i(x_k))^Te^i_k,t
≤ (1-α_k)^2e^i_k,t^2 + L^2α^2_kx^i_k,t - x̅_k^2 + α^2_kσ^2
+ 2α_k(1-α_k)(∇ q^i(x^i_k,t) - ∇ q^i(x_k))^Te^i_k,t.
Taking the expectation on both sides of the preceding equation and using the Cauchy-Schwarz inequality we obtain (<ref>), i.e.,
𝔼[e^i_k,t+1^2]
≤ (1-α_k)^2𝔼[e^i_k,t^2] + L^2α^2_k𝔼[x^i_k,t - x̅_k^2^2] + α^2_kσ^2
+ α_k(1-α_k)𝔼[e^i_k,t^2]+ L^2α_k(1-α_k)𝔼[x^i_k,t-x_k^2]
= (1-α_k)𝔼[e^i_k,t^2] + L^2α_k𝔼[x^i_k,t - x̅_k^2] + α^2_kσ^2.
Let α_k satisfy for all k≥ 0
α_k≤1/2L𝒯·
Then the following holds
1/|ℋ|∑_i ∈ℋ∑_t=0^𝒯-1𝔼[e^i_k,t^2]
≤ 2𝒯𝔼[W_k] + 2𝒯σ^2α_k + 4L^4𝒯^3β_k^2𝔼[x̅_k-x_ℋ^⋆^2].
By applying (<ref>) recursively and since α_k < 1 we have
𝔼[e^i_k,t^2]
≤ (1-α_k)^t𝔼[e^i_k,0^2] + α_k^2σ^2∑_l=0^t-1(1-α_k)^t-1-ℓ
+ α_kL^2∑_ℓ = 0^t-1(1-α_k)^t-l-1𝔼[x^i_k,l-x̅_k^2]
≤ (1-α_k)^t𝔼[e^i_k,0^2] + α_kσ^2
+ α_kL^2∑_ℓ = 0^t-1(1-α_k)^t-l-1𝔼[x^i_k,l-x̅_k^2],
which by using (<ref>) gives
𝔼[e^i_k,t^2]
≤ (1-α_k)^t𝔼[e^i_k,0^2] + α_kσ^2
+2L^4𝒯^2α_kβ_k^2∑_ℓ=0^t-1(1-α_k)^t-ℓ-1𝔼[x̅_k-x_ℋ^⋆^2]
+2L^2𝒯α_kβ_k^2∑_ℓ=0^t-1(1-α_k)^t-ℓ-1∑_m = 0^ℓ-1𝔼[e_k,m^i^2]
≤ (1-α_k)^t𝔼[e^i_k,0^2] + α_kσ^2+2L^4𝒯^2β_k^2𝔼[x̅_k-x_ℋ^⋆^2]
+2L^2𝒯α_kβ_k^2∑_ℓ=0^t-1(1-α_k)^t-ℓ-1∑_m = 0^𝒯-1𝔼[e_k,m^i^2]
≤ (1-α_k)^t𝔼[e^i_k,0^2] + α_kσ^2 + 2L^4𝒯^2β_k^2𝔼[x̅_k-x_ℋ^⋆^2]
+2L^2𝒯β_k^2∑_m = 0^𝒯-1𝔼[e_k,m^i^2].
Using the relation above and the definition of W_k in (<ref>) we have
1/|ℋ|∑_i ∈ℋ∑_t=0^𝒯-1𝔼[e^i_k,t^2]
≤1/|ℋ|∑_i ∈ℋ∑_t=0^𝒯-1(1-α_k)^t𝔼[e^i_k,0^2] + 𝒯σ^2α_k
+ 2L^4𝒯^3β_k^2𝔼[x̅_k-x_ℋ^⋆^2]
+ 2L^2𝒯^2β_k^21/|ℋ|∑_i ∈ℋ∑_m = 0^𝒯-1𝔼[e_k,m^i^2]
≤𝔼[W_k] + 𝒯σ^2α_k + 2L^4𝒯^3β_k^2𝔼[x̅_k-x_ℋ^⋆^2]
+ 2L^2𝒯^2β_k^21/|ℋ|∑_i ∈ℋ∑_m = 0^𝒯-1𝔼[e_k,m^i^2],
where in the last inequality we use the fact that 1-α_k≤ 1. Since β_k≤α_k≤ 1/(2L𝒯), implying 1-2L^2𝒯^2β_k^2 > 1/2, rearranging the preceding relation we obtain (<ref>).
Let α_k satisfy (<ref>). Then we have
𝔼[x_k+1 - x_k^2] ≤ 100L^2𝒯^2β^2_k𝔼[x̅_k-x^⋆_ℋ^2] + 40β^2_k𝒯^2𝔼[W_k]
+ 32𝒯^2σ^2α_kβ^2_k + 32𝒯^2σ^2fα^2_kβ^2_k/|ℋ|·
By (<ref>) and since ∇ q_ℋ(x^⋆_ℋ) = 0 we have
x_k+1-x_k^2
= ℰ_x-𝒯β_k∇ q_ℋ(x_k)-β_k/|ℋ|∑_i∈ℋ∑_l=0^𝒯-1e^i_k,l^2
≤ 2ℰ_x^2 + 2𝒯β_k(∇ q_ℋ(x_k)+β_k/|ℋ|∑_i∈ℋ∑_l=0^𝒯-1e^i_k,l^2
≤ 2ℰ_x^2 + 4𝒯^2β_k^2∇ q_ℋ(x_k)^2 + 4β^2_k𝒯/|ℋ|∑_i ∈ℋ∑_l=0^𝒯-1e^i_k,l^2
= 2ℰ_x^2 + 4𝒯^2β_k^2∇ q_ℋ(x_k)-∇ q_ℋ(x^⋆_ℋ))^2
+ 4β^2_k𝒯/|ℋ|∑_i ∈ℋ∑_l=0^𝒯-1e^i_k,l^2
≤ 2ℰ_x^2 + 4L^2𝒯^2β^2_kx_k - x^⋆_ℋ^2 + 4β^2_k𝒯/|ℋ|∑_i ∈ℋ∑_l=0^𝒯-1e^i_k,l^2.
Taking the expectation of the preceding relation and using (<ref>) give
𝔼[x̅_k+1-x̅_k^2]
≤ 2𝔼[ℰ_k^2] + (4L^2𝒯^2β^2_k + 16L^4𝒯^4β^4_k)𝔼[x̅_k-x^⋆_ℋ^2]
+ 8𝒯^2β^2_k𝔼[W_k].
Next, we analyze the term ℰ_x^2. For this using (<ref>), we have
ℰ_x^2 = 1/|ℋ|[∑_i∈ℬ_k(x^i_k,𝒯 -x_k) - ∑_i∈ℋ\ℋ_k(x^i_k,𝒯-x_k)]^2.
By (<ref>), we have x^i_k,𝒯-x_k≤x^j_k,𝒯-x_k for all i ∈ℬ_k and j ∈ℋ\ℋ_k. Thus, we obtain
ℰ_x^2 ≤2|ℬ_k|/|ℋ|^2∑_i ∈ℬ_kx^i_k,𝒯 - x_k^2 + 2|ℬ_k|/|ℋ|^2∑_i ∈ℋ\ℋ_kx^i_k,𝒯 - x_k^2
≤4|ℬ_k|/|ℋ|^2∑_i ∈ℋ\ℋ_kx^i_k,𝒯 - x_k^2,
which by (<ref>) yields
ℰ_x^2
≤4|ℬ_k|/|ℋ|^2∑_i ∈ℋ\ℋ_k(2L^2𝒯^2β_k^2x_k-x^⋆_ℋ^2+ 2𝒯β_k^2∑_l=0^𝒯-1e^i_k,l^2)
≤8L^2𝒯^2β_k^2|ℬ_k|^2/|ℋ|^2x_k-x^⋆_ℋ^2 + 8𝒯β_k^2|ℬ_k|/|ℋ|^2∑_i ∈ℋ\ℋ_k∑_l=0^𝒯-1e^i_k,l^2.
Using (<ref>) we obtain from the preceding relation
𝔼[ℰ_x^2]
≤(8L^2𝒯^2β^2_k|ℬ_k|^2/|ℋ|^2 + 32α_kβ^4_kL^4𝒯^4|ℬ_k|/|ℋ|)𝔼[x̅_k-x^⋆_ℋ^2]
+16β^2_k𝒯^2|ℬ_k|/|ℋ|𝔼[W_k] + 16α^2_kβ^2_k𝒯^2|ℬ_k|σ^2/|ℋ|,
which when using (<ref>) and β_kL𝒯≤ 1 gives (<ref>), i.e.,
𝔼[x̅_k+1-x̅_k^2]
≤(16L^2𝒯^2β^2_k|ℬ_k|^2/|ℋ|^2 + 64L^2𝒯^2β^2_k|ℬ_k|/|ℋ|)𝔼[x̅_k-x^⋆_ℋ^2]
+ (4L^2𝒯^2β^2_k + 16L^4𝒯^4β^4_k)𝔼[x̅_k-x^⋆_ℋ^2]
+(8𝒯^2β^2_k + 32𝒯^2β^2_k|ℬ_k|/|ℋ|)𝔼[W_k]
+ 32𝒯^2σ^2α_kβ^2_k + 32𝒯^2σ^2|ℬ_k|α^2_kβ^2_k/|ℋ|
≤ 100L^2𝒯^2β^2_k𝔼[x̅_k-x^⋆_ℋ^2] + 40β^2_k𝒯^2𝔼[W_k]
+ 32𝒯^2σ^2α_kβ^2_k + 32𝒯^2σ^2fα^2_kβ^2_k/|ℋ|,
where in the last inequality we use |ℬ_k| ≤ f and |ℬ_k|/|ℋ|≤ f/(N-f) ≤ 1.
§.§ Proof of Lemma <ref>
From (<ref>) using y^i_k+1,0 = y^i_k,𝒯 we have
e^i_k+1,0 = y^i_k,𝒯- ∇ q^i(x_k+1)
= e^i_k,𝒯 + ∇ q^i(x_k)-∇ q^i(x_k+1).
Using the Cauchy-Schwarz inequality and Assumption <ref> we obtain
e^i_k+1,0^2 ≤(1+α_k/2)e^i_k,𝒯^2 + (1+2/α_k)L^2x_k+1-x_k^2
≤(1+α_k/2)e^i_k,𝒯^2 + 3L^2/α_kx_k+1-x_k^2.
Thus, we have
𝔼[W_k+1] = 1/|ℋ|∑_i ∈ℋ𝔼[e^i_k+1,0^2]
≤(1+α_k/2)1/|ℋ|∑_i ∈ℋ𝔼[e^i_k,𝒯^2] + 3L^2/α_k𝔼[x_k+1 - x_k^2].
By (<ref>), we consider
𝔼[e^i_k,𝒯^2] ≤ (1-α_k)^𝒯𝔼[e^i_k,0^2] + α_k^2σ^2∑_t=0^𝒯-1(1-α_k)^𝒯-1-t
+ α_kL^2∑_t = 0^𝒯-1(1-α_k)^𝒯-t-1𝔼[x^i_k,t-x̅_k^2]
≤ (1-α_k)𝔼[e^i_k,0^2] + σ^2𝒯α^2_k
+ α_kL^2∑_t = 0^𝒯-1𝔼[x^i_k,t-x̅_k^2]
≤ (1-α_k)𝔼[e^i_k,0^2] + L^4T^3α_kβ_k^2𝔼[x_k-x^⋆_ℋ^2
+ 2L^2𝒯α_kβ_k^2∑_t=0^𝒯e^i_k,t^2]+ σ^2𝒯α^2_k,
where the last inequality is due to (<ref>). Using this equation and (1+α_k/2) ≤ 3/2 we obtain
(1+α_k/2)1/|ℋ|∑_i ∈ℋ𝔼[e^i_k,𝒯^2]
≤(1+α_k/2)1/|ℋ|∑_i ∈ℋ(1-α_k)𝔼[e^i_k,0^2] + 2σ^2𝒯α^2_k
+ 2 L^4T^3α_kβ_k^2𝔼[x_k-x^⋆_ℋ^2]
+ 3L^2𝒯α_kβ_k^21/|ℋ|∑_i ∈ℋ∑_t=0^𝒯𝔼[e^i_k,t^2]
≤(1-α_k/2)𝔼[W_k] + 2σ^2𝒯α^2_k
+ 2 L^4T^3α_kβ_k^2𝔼[x_k-x^⋆_ℋ^2]
+ 3L^2𝒯α_kβ_k^21/|ℋ|∑_i ∈ℋ∑_t=0^𝒯𝔼[e^i_k,t^2],
where in the last inequality we use
(1-α_k)(1+α_k/2) ≤ 1-α_k/2·
Thus, substituting the relation above into (<ref>) we have
𝔼[W_k+1] ≤(1-α_k/2)𝔼[W_k] + 2σ^2𝒯α^2_k
+ 2 L^4𝒯^3α_kβ_k^2𝔼[x_k-x^*_ℋ^2]
+ 3L^2𝒯α_kβ_k^21/|ℋ|∑_i ∈ℋ∑_t=0^𝒯𝔼[e^i_k,t^2]
+ 3L^2/α_k𝔼[x_k+1 - x_k^2]
≤(1-α_k/2)𝔼[W_k] + 2σ^2𝒯α^2_k
+ 2 L^4𝒯^3α_kβ_k^2𝔼[x_k-x^*_ℋ^2]
+ 6L^2𝒯^2α_kβ_k^2𝔼[W_k] + 6L^2𝒯^2σ^2α_k^2β_k^2
+ 12L^6𝒯^4α_kβ_k^4𝔼[x̅_k-x_ℋ^⋆^2]
+ 3L^2/α_k𝔼[x_k+1 - x_k^2]
= (1-α_k/2+ 6L^2𝒯^2α_kβ_k^2)𝔼[W_k]
+ 2σ^2𝒯α^2_k + 6L^2𝒯^2σ^2α_k^2β_k^2
+ (2 L^4𝒯^3α_kβ_k^2 + 12L^6𝒯^4α_kβ_k^4)𝔼[x̅_k-x_ℋ^⋆^2]
+ 3L^2/α_k𝔼[x_k+1 - x_k^2],
where we use (<ref>) to obtain the second inequality.
Next, applying Lemma <ref>, the above inequality becomes
𝔼[W_k+1] ≤(1-α_k/2)𝔼[W_k]
+(6L^2𝒯^2α_kβ^2_k + 120L^2𝒯^2β^2_k/α_k)𝔼[W_k]
+(2L^4𝒯^3α_kβ^2_k + 12L^6𝒯^4α_kβ^4_k)𝔼[x̅_k-x^⋆_ℋ^2]
+ 300L^4𝒯^2β^2_k/α_k𝔼[x̅_k-x^⋆_ℋ^2]+ 2σ^2𝒯α^2_k
+ 96L^2𝒯^2σ^2β^2_k+ 6L^2𝒯^2σ^2α^2_kβ^2_k
+ 96L^2𝒯^2σ^2fα_kβ^2_k/|ℋ|.
Using β_k ≤α_k and β_kL𝒯≤ 1 we obtain
𝔼[W_k+1] ≤(1-α_k/2)𝔼[W_k]
+ (6L𝒯β_k + 120L^2𝒯^2β_k)𝔼[W_k]
+(14L^3𝒯^2α_kβ_k + 300L^4𝒯^2β^2_k/α_k)𝔼[x̅_k-x^⋆_ℋ^2]
+ 2σ^2𝒯α^2_k + 96L^2𝒯^2σ^2β^2_k
+ 6L^2𝒯^2σ^2α^2_kβ^2_k + 96L^2𝒯^2σ^2fα_kβ^2_k/|ℋ|
≤(1-α_k/2 + 126(L+1)^2𝒯^2β_k)𝔼[W_k]
+(14L^3𝒯^2α_kβ_k + 300L^4𝒯^2β^2_k/α_k)𝔼[x̅_k-x^⋆_ℋ^2]
+ 2σ^2𝒯α^2_k + 96L^2𝒯^2σ^2β^2_k
+ 6L^2𝒯^2σ^2α^2_kβ^2_k + 96L^2𝒯^2σ^2fα_kβ^2_k/|ℋ|·
§.§ Proof of Lemma <ref>
Using (<ref>) we have
x_k+1 -x^*_ℋ^2
= x̅_k-x^*_ℋ - 𝒯β_k∇ q_ℋ(x̅_k)^2 + ℰ_x -β_k/|ℋ|∑_i ∈ℋ∑_l=0^𝒯-1e^i_k,l^2
-2(x̅_k-x^*_ℋ - 𝒯β_k∇ q_ℋ(x̅_k))^T(ℰ_x -β_k/|ℋ|∑_i ∈ℋ∑_l=0^𝒯-1e^i_k,l)
= P_1 + P_2 + P_3 ,
where P_i, for i = 1, 2, 3, are defined in that order. Firstly, using ∇ q_ℋ(x^⋆_ℋ) = 0 along with Assumptions <ref> and <ref>, we analyze the term P_1 as
x̅_k-x^⋆_ℋ - 𝒯β_k∇ q_ℋ(x̅_k)^2
= x̅_k-x^⋆_ℋ^2-2𝒯β_k∇ q_ℋ(x̅_k)^T(x̅_k-x^⋆_ℋ)
+ 𝒯^2β^2_K∇ q_ℋ(x̅_k)^2
= x̅_k-x^⋆_ℋ^2-2𝒯β_k(∇ q_ℋ(x̅_k)-∇ q_ℋ(x^⋆_ℋ))^T(x̅_k-x^⋆_ℋ)
+ 𝒯^2β^2_K∇ q_ℋ(x̅_k)-∇ q_ℋ(x^⋆_ℋ)^2
≤ (1-2μ𝒯β_k + L^2𝒯^2β^2_k)x̅_k-x^⋆_ℋ^2
Secondly, using the Cauchy-Schwarz inequality, term P_2 can be expressed as
ℰ_x -β_k/|ℋ|∑_i ∈ℋ∑_l=0^𝒯-1e^i_k,l^2
≤ 2ℰ_x^2+ 2β^2_k𝒯(1/|ℋ|∑_i ∈ℋ∑_l=0^𝒯-1e^i_k,l^2).
Taking the expectation on both sides of the above inequality and applying Lemma <ref> along with (<ref>) from Lemma <ref> with α_k≤ 1, we obtain the following
𝔼[ℰ_x -β_k/|ℋ|∑_i ∈ℋ∑_l=0^𝒯-1e^i_k,l^2]
≤(16L^2𝒯^2β^2_k|ℬ_k|/|ℋ| + 64L^3𝒯^3β^3_k|ℬ_k|/|ℋ|)𝔼[x̅_k -x^⋆_ℋ^2]
+ 8L^4𝒯^4β^4_k𝔼[x̅_k -x^⋆_ℋ^2]
+(32𝒯^2β^2_k|ℬ_k|/|ℋ| + 4𝒯^2β^2_k)𝔼[W_k]
+ 4𝒯^2σ^2α_kβ^2_k + 32𝒯^2σ^2fα^2_kβ^2_k/|ℋ|
≤ 88L^2𝒯^2β^2_k𝔼[x̅_k -x^⋆_ℋ^2] + 36𝒯β_k/μ𝔼[W_k]
+ 4𝒯^2σ^2α_kβ^2_k + 32𝒯^2σ^2fα^2_kβ^2_k/|ℋ|,
where the last inequality is obtained using β_kL𝒯≤ 1 and μ≤ L.
Thirdly we express term P_3 from (<ref>) as
-2(x̅_k-x^⋆_ℋ - 𝒯β_k∇ q_ℋ(x̅_k))^T(ℰ_x -β_k/|ℋ|∑_i ∈ℋ∑_l=0^𝒯-1e^i_k,l)
= -2(x̅_k-x^⋆_ℋ - 𝒯β_k∇ q_ℋ(x̅_k))^Tℰ_x
+2(x̅_k-x^⋆_ℋ - 𝒯β_k∇ q_ℋ(x̅_k))^T(β_k/|ℋ|∑_i ∈ℋ∑_l=0^𝒯-1e^i_k,l)
= P_3a + P_3b,
where P_3a and P_3b are defined in that order.
Next, we apply the Cauchy-Schwarz inequality 2a^Tb ≤ηa^2 + b^2/η for any η >0, and use Assumptions <ref> and <ref> to analyze term P_3a. Thus, we have
-2(x̅_k-x^⋆_ℋ - 𝒯β_k∇ q_ℋ(x̅_k))^Tℰ_x
≤3L𝒯β_k|ℬ_k|/|ℋ|x̅_k-x^⋆_ℋ - 𝒯β_k∇ q_ℋ(x̅_k)^2 + |ℋ|/3L𝒯β_k|ℬ_k|ℰ_x^2
≤3L𝒯β_k|ℬ_k|/|ℋ| (x̅_k-x^⋆_ℋ^2+L^2𝒯^2β_k^2x̅_k-x^⋆_ℋ^2)
+ |ℋ|/3L𝒯β_k|ℬ_k|ℰ_x^2.
Next, taking the expectation on both sides of the above inequality and using (<ref>) from Lemma <ref> along with α_k≤ 1, we have
-2𝔼[(x̅_k-x^⋆_ℋ - 𝒯β_k∇ q_ℋ(x̅_k))^Tℰ_x]
≤(17L𝒯β_k|ℬ_k|/3|ℋ|+32L^2𝒯^2β_k^2/3+3L^3𝒯^3β_k^3|ℬ_k|/|ℋ|)𝔼[x̅_k-x^⋆_ℋ^2]
+ 16β_k𝒯/3L𝔼[W_k] + 16𝒯σ^2α^2_kβ_k/3L
≤(17L𝒯β_k|ℬ_k|/3|ℋ|+ 14L^2𝒯^2β^2_k)𝔼[x̅_k-x^⋆_ℋ^2]
+ 16𝒯β_k/3μ𝔼[W_k] + 16𝒯σ^2α^2_kβ_k/3μ,
where the last inequality is obtained using β_kL𝒯≤ 1, |ℬ_k|/|ℋ|≤ 1 and μ≤ L. To analyze the term P_3b from (<ref>), we use the Cauchy-Schwarz inequality along with Assumptions <ref> and <ref> to obtain
2(x̅_k-x^⋆_ℋ - 𝒯β_k∇ q_ℋ(x̅_k))^T(β_k/|ℋ|∑_i ∈ℋ∑_l=0^𝒯-1e^i_k,l)
≤μ𝒯β_k/18x̅_k-x^⋆_ℋ -𝒯β_k∇ q_ℋ(x̅_k)^2
+ 18β_k/μ(1/|ℋ|∑_i ∈ℋ∑_l=0^𝒯-1e^i_k,l^2)
≤μ𝒯β_k/18(x̅_k-x^⋆_ℋ^2 + L^2𝒯^2β^2_kx̅_k-x^⋆_ℋ^2)
+ 18β_k/μ(1/|ℋ|∑_i ∈ℋ∑_l=0^𝒯-1e^i_k,l^2).
Next, taking the expectation on both sides of the above inequality and using the result from Lemma <ref>, we have
2𝔼[(x̅_k-x^⋆_ℋ - 𝒯β_k∇ q_ℋ(x̅_k))^T(β_k/|ℋ|∑_i ∈ℋ∑_l=0^𝒯-1e^i_k,l)]
≤(μ𝒯β_k/18 + μ L^2𝒯^3β^3_k/18+72L^4𝒯^3α_kβ^3_k/μ)𝔼[x̅_k-x^⋆_ℋ^2]
+36β_k𝒯/μ𝔼[W_k] + 36𝒯σ^2α^2_kβ_k/μ
≤(μ𝒯β_k/18 + L^2𝒯^2β^2_k/18+72L^2𝒯α_kβ_k/μ)𝔼[x̅_k-x^⋆_ℋ^2]
+36β_k𝒯/μ𝔼[W_k] + 36𝒯σ^2α^2_kβ_k/μ,
where the last inequality is obtained using μ≤ L and β_kL𝒯≤ 1. Putting the results from (<ref>) and (<ref>) back into (<ref>), we have
-2𝔼[(x̅_k-x^⋆_ℋ - 𝒯β_k∇ q_ℋ(x̅_k))^T(ℰ_x -β_k/|ℋ|∑_i ∈ℋ∑_l=0^𝒯-1e^i_k,l)]
≤(μ𝒯β_k/18+17L𝒯β_k|ℬ_k|/3|ℋ|)𝔼[x̅_k-x^⋆_ℋ^2]
+ (15L^2𝒯^2β^2_k + 72L^2𝒯α_kβ_k/μ)𝔼[x̅_k-x^⋆_ℋ^2]
+42𝒯β_k/μ𝔼[W_k] + 42𝒯σ^2α^2_kβ_k/μ.
Finally, by taking the expectation on both sides of (<ref>) and substituting the expressions for P_1, P_2, and P_3 from (<ref>), (<ref>), and (<ref>), respectively, we obtain
𝔼[x̅_k+1-x^⋆_ℋ^2]
≤(1-35μ𝒯β_k/18 + 17L𝒯β_k|ℬ_k|/3|ℋ|+ 103L^2𝒯^2β^2_k)𝔼[x̅_k-x^⋆_ℋ^2]
+ 72L^2𝒯α_kβ_k/μ𝔼[x̅_k-x^⋆_ℋ^2] + 78𝒯β_k/μ𝔼[W_k]
+4𝒯^2σ^2α_kβ^2_k + 48𝒯σ^2α^2_kβ_k/μ + 32𝒯^2σ^2fα^2_kβ^2_k/|ℋ|.
This concludes our proof.
§.§ Proof of Lemma <ref>
To prove Lemma <ref>, we require the following lemma.
ℰ_x≤2L𝒯β_k|ℬ_k|/μ|ℋ|∇ q_ℋ(x̅_k) + 2β_k(1/|ℋ|∑_i ∈ℋ∑_t=0^𝒯-1e^i_k,t).
Using the definition of ℰ_x from (<ref>), we have
ℰ_x≤1/|ℋ|∑_i ∈ℬ_kx^i_k,𝒯-x̅_k + 1/|ℋ|∑_i ∈ℋ\ℋ_kx^i_k,𝒯-x̅_k.
By (<ref>), there exists j ∈ℋ\ℋ_k such that x^i_k,t-x̅_k≤x^j_k,t-x̅_k for all agents i ∈ℬ_k, using which the above inequality becomes
ℰ_x≤|ℬ_k|/|ℋ|x^j_k,𝒯-x̅_k +1/|ℋ|∑_i ∈ℋ\ℋ_kx^j_k,𝒯-x̅_k.
Here, we consider that there exists an agent j ∈ℋ\ℋ_k such that the quantity x^j_k,t-x̅_k is maximum over all the agents in the set ℋ\ℋ_k. Further, since |ℋ\ℋ_k| = |ℬ_k|, we have
x^j_k,t-x̅_k^2 ≤1/|ℋ\ℋ_k|∑_i ∈ℋ\ℋ_kx^i_k,t-x̅_k^2
=1/|ℬ_k|∑_i ∈ℋ\ℋ_kx^i_k,t-x̅_k^2.
Next, using (<ref>), which implies ∇ q^i(x^⋆_ℋ) = 0, together with the result from (<ref>), we express (<ref>) as
ℰ_x ≤2/|ℋ|∑_i ∈ℋ\ℋ_kx^i_k,𝒯-x̅_k
≤2/|ℋ|∑_i ∈ℋ\ℋ_k(-𝒯β_k∇ q^i(x̅_k)-β_k∑_t=0^𝒯-1e^i_k,t)
≤2𝒯β_k/|ℋ|∑_i ∈ℋ\ℋ_k∇ q^i(x̅_k)-∇ q^i(x^⋆_ℋ)
+2β_k/|ℋ|∑_i ∈ℋ\ℋ_k∑_t=0^𝒯-1e^i_k,t
≤2L𝒯β_k|ℬ_k|/|ℋ|x̅_k-x^⋆_ℋ + 2β_k(1/|ℋ|∑_i ∈ℋ\ℋ_k∑_t=0^𝒯-1e^i_k,t)
≤2L𝒯β_k|ℬ_k|/μ|ℋ|∇ q_ℋ(x̅_k)+ 2β_k(1/|ℋ|∑_i ∈ℋ∑_t=0^𝒯-1e^i_k,t),
where the second-to-last inequality is due to Assumption <ref> and the last inequality is obtained using Assumption <ref>.
§.§ Proof of Lemma <ref>
Assumption <ref> implies that q_ℋ has a Lipschitz continuous gradient, using which we have
q_ℋ(x̅_k+1)- q_ℋ(x̅_k)
≤∇ q_ℋ(x̅_k)^T(x̅_k+1-x̅_k) + L/2x̅_k+1-x̅_k^2.
Using (<ref>), we analyze the first term on the right-hand side of the above inequality as follows
∇ q_ℋ(x̅_k)^T(x̅_k+1-x̅_k)
= -𝒯β_k∇ q_ℋ(x̅_k)^2 -β_k∇ q_ℋ(x̅_k)^T(1/|ℋ|∑_i ∈ℋ∑_t=0^𝒯-1e^i_k,t)
+ ℰ_x^T∇ q_ℋ(x̅_k)
= A_1 + A_2 +A_3,
where A_i with i=1,2,3 are defined in that order. To analyze A_2, we apply Assumption <ref> and utilize the Cauchy-Schwarz inequality. Thus, we have
-β_k∇ q_ℋ(x̅_k)^T(1/|ℋ|∑_i ∈ℋ∑_t=0^𝒯-1e^i_k,t)
≤β_k𝒯/12∇ q_ℋ(x̅_k)^2 + 3β_k/𝒯1/|ℋ|∑_i ∈ℋ∑_l=0^𝒯-1e^i_k,l^2
≤β_k𝒯/12∇ q_ℋ(x̅_k)^2 + 3β_k(1/|ℋ|∑_i ∈ℋ∑_l=0^𝒯-1e^i_k,l^2).
Taking the expectation on both sides of the above inequality and using Lemma <ref> along with Assumption <ref>, we have
-𝔼[β_k∇ q_ℋ(x̅_k)^T(1/|ℋ|∑_i ∈ℋ∑_t=0^𝒯-1e^i_k,t)]
≤(β_k𝒯/12 + 12L^4𝒯^3β^3_k/μ^2)𝔼[∇ q_ℋ(x̅_k)^2] + 6β_k𝒯𝔼[W_k]
+ 6σ^2𝒯α_kβ_k.
Next, to analyze the term A_3 from (<ref>), we use Lemma <ref> and the Cauchy-Schwarz inequality to obtain
ℰ_x^T∇ q_ℋ(x̅_k) ≤ℰ_x∇ q_ℋ(x̅_k)
≤2L𝒯β_k|ℬ_k|/μ|ℋ|∇ q_ℋ(x̅_k)^2
+ 2β_k∇ q_ℋ(x̅_k)(1/|ℋ|∑_i ∈ℋ∑_l=0^𝒯-1e^i_k,l)
≤(2L𝒯β_k|ℬ_k|/μ|ℋ| + 𝒯β_k/12)∇ q_ℋ(x̅_k)^2
+ 12β_k(1/|ℋ|∑_i ∈ℋ∑_t=0^𝒯-1e^i_k,t^2).
Taking the expectation on both sides of the above inequality and using Lemma <ref>, we have
𝔼[ℰ_x^T∇ q_ℋ(x̅_k)]
≤(2L𝒯β_k|ℬ_k|/μ|ℋ| + 𝒯β_k/12 + 48L^4𝒯^3β^3_k/μ^2)𝔼[∇ q_ℋ(x̅_k)^2]
+ 24𝒯β_k𝔼[W_k] + 24𝒯σ^2α_kβ_k.
Finally, putting the relations from (<ref>) and (<ref>) back into (<ref>) and using L𝒯β_k ≤ 1, we get
𝔼[∇ q_ℋ(x̅_k)^T(x̅_k+1-x̅_k)]
≤(-5β_k𝒯/6 + 2Lβ_k𝒯|ℬ_k|/μ|ℋ| + 60L^3𝒯^2β^2_k/μ^2)𝔼[∇ q_ℋ(x̅_k)^2]
+ 30β_k𝒯𝔼[W_k]+30σ^2𝒯α^2_k.
Next, using Lemma <ref>, we analyze the second term on the right-hand side of (<ref>). Thus, we have
L/2𝔼[x̅_k+1-x̅_k^2]
≤ 50L^3𝒯^2β^2_k𝔼[x̅_k-x^⋆_ℋ^2] + 20L𝒯^2β^2_k𝔼[W_k]
+ 16L𝒯^2σ^2α_kβ^2_k + 16L𝒯^2σ^2fα^2_kβ^2_k/|ℋ|
≤50L^3𝒯^2β^2_k/μ^2𝔼[∇ q_ℋ(x̅_k)^2] + 20𝒯β_k𝔼[W_k]
+ 16L𝒯^2σ^2α_kβ^2_k + 16L𝒯^2σ^2fα^2_kβ^2_k/|ℋ|,
where the last inequality is obtained using Assumption <ref> and the condition L𝒯β_k≤ 1.
Putting the results from (<ref>) and (<ref>) back into (<ref>), we immediately obtain (<ref>).
|
http://arxiv.org/abs/2409.03588v1 | 20240905144311 | Costs Estimation in Unit Commitment Problems using Simulation-Based Inference | [
"Matthias Pirlet",
"Adrien Bolland",
"Gilles Louppe",
"Damien Ernst"
] | cs.LG | [
"cs.LG"
] |
Costs Estimation in Unit Commitment Problems using Simulation-Based Inference
Matthias Pirlet, Adrien Bolland, Gilles Louppe, Damien Ernst
September 5, 2024
==============================================================================
§ ABSTRACT
The Unit Commitment (UC) problem is a key optimization task in power systems to forecast the generation schedules of power units over a finite time period by minimizing costs while meeting demand and technical constraints. However, many parameters required by the UC problem are unknown, such as the costs. In this work, we estimate these unknown costs using simulation-based inference on an illustrative UC problem, which provides an approximated posterior distribution of the parameters given observed generation schedules and demands. Our results highlight that the learned posterior distribution effectively captures the underlying distribution of the data, providing a range of possible values for the unknown parameters given a past observation. This posterior allows for the estimation of past costs using observed past generation schedules, enabling operators to better forecast future costs and make more robust generation scheduling forecasts. We present avenues for future research to address overconfidence in posterior estimation, enhance the scalability of the methodology and apply it to more complex UC problems modeling the network constraints and renewable energy sources.
§ INTRODUCTION
Forecasting the generation schedule of power generating units whose energy is sold on the electricity market is key for actors like electricity traders and transmission system operators. Accurate forecasts ensure efficient and secure power system operation, but market liberalization complicates forecasting due to the unknown behavior of multiple market participants. An approach to forecasting the generation schedule is to model the scheduling of the generation units of all agents through a centralized total-cost minimization problem. Economic theory supports this strategy, suggesting that well-designed competitive markets yield efficient outcomes, similar to those achieved by centralized decision-making <cit.>. However, the size of these models, with numerous variables and parameters, makes them difficult to solve, leading to the use of reduced models where multiple power plants are aggregated into fewer representative units. The reduced optimization problem is called the Unit Commitment (UC) problem. This UC problem provides a generation schedule over a fixed time interval as a function of extrinsic parameters including technical constraints of the units, energy demand, and costs related to fuel, start-up, and transmission. However, cost parameters are often unknown and are addressed in the optimization process either by expert knowledge or by robust or stochastic optimization techniques that handle the uncertainty <cit.>. In practice, the generation schedule is made public every day, allowing the inference of the unknown parameters of the UC model through a probabilistic inverse problem. By estimating a distribution over these parameters, operators can make better-informed forecasts of the costs in the short term, leading to more accurate forecasts of generation schedules.
In this paper, we present an illustrative UC problem and apply simulation-based inference (SBI) <cit.> to estimate unknown cost parameters. SBI, widely applied in fields such as particle physics <cit.>, climate science <cit.> and robotics <cit.>, estimates the posterior distribution of model parameters based on observations, capturing the inherent uncertainty in these parameters. The simulator, i.e. the UC model, is complex and computationally expensive, making traditional likelihood-based methods like Markov Chain Monte Carlo (MCMC) impractical for inference. We propose to use Neural Posterior Estimation (NPE) <cit.>, which is an amortized method, meaning that the inference model is trained on a dataset of simulations and can then be used to make inferences on new observations without having to retrain the model. In contrast, MCMC would require running the full simulation iteratively for each new observation, making NPE a significantly faster and more scalable solution.
§ PROBLEM FORMULATION
The UC problem determines the generation schedules of power units over a finite time period while meeting demand scenarios and adhering to various technical constraints. These constraints include generation limits, ramping rates, and start-up/shutdown durations, all dictated by the physical characteristics of each unit. Although some of these parameters are well known to all market participants, key parameters, such as the cost of producing one unit of energy, which is driven by trading strategies for purchasing the fuel, still remain unknown. This work focuses on estimating these unknown cost parameters from recent historical data, which are critical for improving the accuracy of UC models and enhancing generation scheduling predictions.
Formally, the solution of the UC problem can be written as G_t = f(ψ_t, θ_t, δ_t), where f defines the UC optimization problem solved over T time steps that will be used to construct the inverse probabilistic model. The vector ψ includes known physical characteristics of generation units, such as generation limits, start-up costs, and ramping rates, G represents the generation schedule for each unit at each time step, and δ denotes the demand. Finally, θ represents unknown parameters, like fuel costs, that we want to estimate in order to better forecast them in the short-term future and improve generation scheduling. All these parameters are defined for each time step t of the horizon of T time steps.
In electricity markets, market operators publicly release historical data shortly after operations, including estimated demands δ, which are the forecasted demand values used during the scheduling process. These estimates inform real-time operational decisions and are influenced by various predictive models. Realized generation schedules G_i, which are the actual generation outputs corresponding to the estimated demand, reflect the solutions derived from the UC problem at the time. These publicly available historical data {(δ_i,G_i)}_i=1^N are used to construct an empirical prior distribution for the demand, p(δ), capturing typical demand profiles and their variability over time. The unknown parameters θ are never observed but are known to lie in a certain range, so a prior p(θ) can be constructed as a uniform distribution over this range.
The primary objective of this work is to estimate the posterior distribution p(θ| G, δ) of the unknown cost parameters θ given the available historical data, as stated in Section 1, in order to better forecast cost parameters and thereby allow better-informed generation scheduling that accounts for the uncertainty in the parameters.
§ SBI FOR UC PARAMETERS ESTIMATION
Bayes' rule can be used to write down the posterior distribution p(θ | G, δ). This requires knowing the likelihood p(G|θ, δ), the prior distributions p(θ) and p(δ), assuming θ and δ are marginally independent, and the evidence p(G). SBI is used to estimate a neural surrogate of the posterior distribution using simulated observations. Specifically, we use NPE for that purpose, which maximizes the expected log-posterior density 𝔼_p(G,θ, δ)[log q_ϕ(θ | G, δ)]
where q_ϕ(θ|G, δ) is a neural density estimator, such as a normalizing flow, with parameters ϕ. The expectation over the joint distribution p(G,θ,δ) can be estimated by first sampling from the cost prior p(θ) and the demand prior p(δ) and then feeding these samples into the forward model. In practice, this model corresponds to the UC problem defined in Section 2 for the fixed parameter ψ.
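For illustration, this training loop can be sketched as follows, assuming a conditional density estimator q_phi exposing log_prob and sample methods (as in common normalizing-flow libraries) and priors exposing sample; all names are illustrative rather than the exact implementation used in this work:

import torch

def train_npe(q_phi, prior_theta, prior_delta, simulator,
              n_sims=2**16, batch_size=256, lr=1e-3, epochs=100):
    # Simulate the training set: theta ~ p(theta), delta ~ p(delta), G = f(psi, theta, delta).
    theta = prior_theta.sample((n_sims,))
    delta = prior_delta.sample((n_sims,))
    G = torch.stack([simulator(t, d) for t, d in zip(theta, delta)])
    context = torch.cat([G.flatten(1), delta], dim=-1)
    opt = torch.optim.Adam(q_phi.parameters(), lr=lr)
    for _ in range(epochs):
        perm = torch.randperm(n_sims)
        for i in range(0, n_sims, batch_size):
            idx = perm[i:i + batch_size]
            # Monte-Carlo estimate of -E_{p(G, theta, delta)}[log q_phi(theta | G, delta)].
            loss = -q_phi.log_prob(theta[idx], context=context[idx]).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return q_phi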
§ EXPERIMENTS
The practical implementation focuses on an illustrative UC problem, with J = 9 generation units, where all generation costs θ∈ℝ^J are assumed unknown and must be estimated. This problem spans T=24 hourly timesteps, representing a single day (see Appendix C for its mathematical formulation). Demand-side management (DSM) is integrated as the 10^th unit, designed to adapt electricity demand by encouraging consumers to shift usage during periods of excess or insufficient supply. This DSM unit offers high flexibility, allowing it to start or stop instantly, with ramping rates that can reach its maximum capacity at any given time. However, this flexibility incurs higher generation and start-up costs.
The parameters ψ include start-up costs, maximum rates for increasing and decreasing production, minimum and maximum power at which a unit can be started and stopped, the minimum time a unit must remain active or inactive after being turned on or off, and upper/lower generation limits. As said in Section 2, these parameters are known and static over the time horizon considered. The parameters θ are the generating costs of the 9 units. Given the day-long horizon, these costs are assumed static but unknown, with a prior distribution p(θ) modeled as uniform.
In this scenario, the prior distribution of the demand parameter δ is synthetically constructed to mimic realistic fluctuations in electricity consumption, following a sinusoidal pattern. The base demand is modulated to create peaks and troughs in the demand profile, reflecting the typical diurnal variations observed in real-world electricity consumption patterns. To introduce variability and simulate real-world uncertainties, zero-mean Gaussian noise with a standard deviation of 10% of the peak demand is added to the demand signal.
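A minimal sketch of such a synthetic demand prior could read as follows; the base level, amplitude, and phase are placeholders of ours, as only the sinusoidal shape and the 10% noise level are specified above:

import numpy as np

def sample_demand(T=24, base=200.0, amplitude=80.0, noise_frac=0.10, rng=None):
    # Sinusoidal diurnal profile modulating the base demand over T hourly steps.
    rng = rng or np.random.default_rng()
    t = np.arange(T)
    demand = base + amplitude * np.sin(2 * np.pi * (t - 6) / T)
    # Zero-mean Gaussian noise with standard deviation equal to 10% of the peak demand.
    demand += rng.normal(0.0, noise_frac * demand.max(), size=T)
    return np.clip(demand, 0.0, None)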
Training and validation sets are generated with 2^16 simulations each, using the joint distribution p(G, θ, δ) = p(θ) p(δ) p(G| θ, δ) to produce parameter-observation pairs (θ, (δ,G)). With these pairs, we apply NPE, as described in Section 3, and compare the performance of two types of flows, namely Masked Autoregressive Flow (MAF) <cit.> and Neural Spline Flow (NSF) <cit.>, for posterior estimation q_ϕ(θ|G, δ). Both models are composed of 3 transformations, each parametrized by a masked Multi-Layer Perceptron (MLP) with 3 hidden layers of size 256 and ReLU activation functions. The NPE method is trained using the Adam optimizer <cit.> with a batch size of 256 and a learning rate of 0.001 over 100 epochs. The best model is selected on the validation loss, with the learning curve shown in Appendix in Figure <ref>.
To assess the resulting posterior distribution, we first sample parameters θ^* and demand δ^* from their priors p(θ) and p(δ), generating a corresponding generation schedule G^*. Using Monte-Carlo sampling, we estimate the density of q_ϕ(θ| G^*, δ^*) and visualize the results through corner plots (Figure <ref>). These plots show the marginal and joint distributions of the sampled parameters, with the true parameter values θ^* overlaid on top.
Next, we assess the consistency of the NPE posterior by calculating the expected coverage probability across various credible levels. In essence, we determine the probability that the true parameters sampled from p(θ, G, δ) lie within the smallest region of probability 1-α of the learned posterior q_ϕ(θ|G_i, δ_i).
Formally, we compute the coverage probability 𝔼_p(θ, G, δ)[ 1{θ_i ∈Θ_q_ϕ(θ|G_i, δ_i)(1-α) }], where Θ_q_ϕ(θ|G_i, δ_i) is the highest posterior density region of the posterior distribution <cit.>, defined as:
Θ_q_ϕ(θ|G_i, δ_i)(1-α) = argmin_Ω{|Ω| | 𝔼_q_ϕ(θ|G_i, δ_i)[ 1{θ∈Ω}] = 1-α}
To compute this expected coverage probability, we repeatedly sample from the joint distribution p(θ, δ, G) to obtain pairs (θ_i, (G_i, δ_i)). We then sample θ from the learned posterior distribution q_ϕ(θ| G, δ) for each simulated observation (G_i, δ_i) and determine the 1-α highest posterior density region. Well-calibrated posteriors should have an expected coverage probability close to the credibility level 1-α. If the expected coverage probability falls below 1-α, it suggests overconfident posteriors. If above, it indicates conservative posteriors. This comprehensive evaluation aids in assessing the reliability of the approximate posterior distributions. In our case, the coverage curve (Figure <ref>) is computed using the two trained flows and a test set of 2^12 pairs, and shows slight overconfidence. While it is important for the posterior distribution to accurately center around the true parameter values, a slight overconfidence in the distribution's width might not pose a significant issue in our context.
Additional assessment of the results is provided in Appendix B, which further confirms the accuracy of the learned flow in capturing the true posterior distribution. Specifically, the assessment reveals that the posterior distribution is well centered, as evidenced by the fact that only a few parameters sampled from this learned posterior distribution q_ϕ(θ| G, δ) generate a generation schedule that deviates significantly from the true one G^*.
§ CONCLUSION
In this work, we tackled the UC subproblem of estimating unknown parameters using SBI, which provides an approximation of the posterior probability distribution p(θ| G, δ). This approach allows for quick inference while capturing parameter uncertainty. This posterior distribution provides a range of possible values for the unknown parameters rather than just a single estimate, enabling operators to account for uncertainties in their decision-making process.
Future research should address the overconfidence in posterior estimation, as discussed in Section <ref>. Possible solutions include ensemble methods, which average predictions from multiple models to improve reliability <cit.>, and introducing regularization terms either to the loss function to encourage a more balanced and conservative model <cit.> or to directly penalize overconfident coverage <cit.>.
To increase the granularity of UC problems, resulting in hundreds of parameters, future work should focus on enhancing the function approximator, as current NPE methods are limited to handling tens of parameters. For instance, <cit.> introduces flow matching techniques to improve the scalability and computational efficiency of SBI. Additionally, active learning strategies are useful for dealing with high-dimensional parameter spaces and costly sampling processes. A method for selecting the most informative data points to optimize the calibration process has also been proposed <cit.>, focusing computational resources on areas with the greatest uncertainty to enhance the efficiency and accuracy of simulation-based inference in large-scale systems.
Finally, the next step is to apply these approaches over longer time horizons, such as two years, and incorporate renewable energy sources, which introduce additional uncertainty into the UC problem. This broader application will test the robustness and scalability of the methodology in reality.
§ ACKNOWLEDGMENTS
The authors would like to thank Engie, especially Alexandre Huynen for sharing expert knowledge about the problem. They also express gratitude to Arnaud Delaunoy for his valuable comments on this manuscript and to François Rozet for his help during the experimental setup. Adrien Bolland acknowledges the financial support from a research fellowship by the F.R.S.-FNRS.
plainnat
§ APPENDIX / SUPPLEMENTAL MATERIAL
§ ADDITIONAL ASSESSMENTS
To further assess the results from Section 4, we conduct a quantitative analysis of the posterior predictive distribution p(G|G^*), which is the distribution of the generation schedules produced by sampling parameters from q_ϕ(θ| G^*, δ). The procedure is to sample 2^12 parameters from the posterior q_ϕ(θ| G^*, δ) given the observed generation schedule G^*. This observed generation schedule is produced by sampling a parameter θ^* and a demand δ^* from their respective prior distributions p(θ) and p(δ) and generating, using the model, a generation schedule G^*. Then, the θ's sampled from the posterior are passed one by one through the UC problem f(ψ, θ, δ), where f is defined in Section 2. Figure 4 shows the posterior predictive distributions p(G|G^*) at various quantile levels against the true generation output G^*. On the one hand, we see that for the power plants that need to be active, the 68.7% quantile band is well constrained around the true generation schedule. On the other hand, when the plant should not start, the spread is wider for the 95.5% and 99.7% quantile bands, resulting in a slight displacement of the median compared to the true observed generation schedule. In conclusion, the parameter distribution learned by NPE produces results that are consistent with the observed generation schedule.
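For illustration, this recipe can be sketched as follows (interface names are ours):

import numpy as np

def posterior_predictive_bands(q_phi, simulator, G_star, delta_star, n=2**12):
    # Sample parameters from the learned posterior given the observed schedule G*,
    # push each through the UC model f(psi, theta, delta*), and summarize the
    # resulting schedules by central quantile bands per unit and time step.
    ctx = np.concatenate([np.ravel(G_star), delta_star])
    thetas = q_phi.sample(n, context=ctx)
    schedules = np.stack([simulator(theta, delta_star) for theta in thetas])
    levels = [0.5,
              (1 - 0.687) / 2, 1 - (1 - 0.687) / 2,   # 68.7% band
              (1 - 0.955) / 2, 1 - (1 - 0.955) / 2]   # 95.5% band
    return np.quantile(schedules, levels, axis=0)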
§ MATHEMATICAL FORMULATION
min_Ξ ∑_t ∈ T∑_j ∈ J( c_j(g_j(t)) + c_j^U y_j(t) )
s.t. ∑_j ∈ J g_j(t) = D(t), ∀ t ∈ T
∑_j ∈ J g̅_j(t) ≥ D(t) + R(t), ∀ t ∈ T
c_j(g_j(t)) ≥α_js g_j(t) + β_js, s=1,...C_j, ∀ j ∈ J
v_j(t-1) - v_j(t) + y_j(t) - z_j(t) = 0, ∀ j ∈ J, ∀ t ∈ T
g_j(t) - g_j(t-1) ≤ R_j^U v_j(t-1) + S_j^U y_j(t) ∀ j ∈ J, ∀ t ∈ T
g_j(t-1) - g_j(t) ≤ R_j^D v_j(t) + S_j^D z_j(t) ∀ j ∈ J, ∀ t ∈ T
∑_k=t-T_j^U+1, k ≥ 1^t y_j(k) ≤ v_j(t), ∀ t ∈ [ L_j +1 , ..., |T|], ∀ j ∈ J
v_j(t) + ∑_k=t-T_j^D+1, k ≥ 1^t z_j(k) ≤ 1, ∀ t ∈ [ F_j +1 , ..., |T|], ∀ j ∈ J
G̲_j v_j(t) ≤ g_j(t) ≤ g̅_j(t) ≤ G̅_j v_j(t), ∀ j ∈ J, ∀ t ∈ T
g̅_j(t) ≤ g_j(t-1) + R_j^U v_j(t-1) + S_j^U y_j(t), ∀ j ∈ J, ∀ t ∈ T
g̅_j(t) ≤ G̅_j[v_j(t) - z_j(t+1)] + z_j(t+1) S_j^D, ∀ j ∈ J, ∀ t ∈ T
where the optimization variables in the set Ξ are g_j(t), g̅_j(t), v_j(t), y_j(t),
and z_j(t), ∀ j ∈ J, ∀ t ∈ T.
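For illustration, a deliberately reduced version of this MILP (linear generation costs, no ramping, reserve met by committed capacity, and no minimum up/down times) can be sketched with the PuLP modeling library; the data keys and all simplifications are ours:

import pulp

def solve_uc(demand, reserve, units):
    # units: list of dicts with keys "c" (linear generation cost), "cU" (start-up
    # cost), "Gmin" and "Gmax" (generation limits); all keys are placeholders.
    T = len(demand)
    J, Ts = range(len(units)), range(T)
    prob = pulp.LpProblem("UC", pulp.LpMinimize)
    g = pulp.LpVariable.dicts("g", (J, Ts), lowBound=0)
    v = pulp.LpVariable.dicts("v", (J, Ts), cat="Binary")
    y = pulp.LpVariable.dicts("y", (J, Ts), cat="Binary")
    # Objective: generation costs plus start-up costs.
    prob += pulp.lpSum(units[j]["c"] * g[j][t] + units[j]["cU"] * y[j][t]
                       for j in J for t in Ts)
    for t in Ts:
        prob += pulp.lpSum(g[j][t] for j in J) == demand[t]          # demand balance
        prob += pulp.lpSum(units[j]["Gmax"] * v[j][t] for j in J) >= demand[t] + reserve[t]
    for j in J:
        for t in Ts:
            prob += g[j][t] >= units[j]["Gmin"] * v[j][t]            # generation limits
            prob += g[j][t] <= units[j]["Gmax"] * v[j][t]
            prev = v[j][t - 1] if t > 0 else 0
            prob += y[j][t] >= v[j][t] - prev                        # start-up logic
    prob.solve()
    return {(j, t): g[j][t].value() for j in J for t in Ts}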
|
http://arxiv.org/abs/2409.03457v1 | 20240905121429 | FLAF: Focal Line and Feature-constrained Active View Planning for Visual Teach and Repeat | [
"Changfei Fu",
"Weinan Chen",
"Wenjun Xu",
"Hong Zhang"
] | cs.RO | [
"cs.RO"
] |
FLAF: Focal Line and Feature-constrained Active View Planning for Visual Teach and Repeat
*Corresponding author ([email protected])
Changfei Fu and Hong Zhang are with the Shenzhen Key
Laboratory of Robotics and Computer Vision, Southern University of
Science and Technology (SUSTech), and the Department of Electrical and
Electronic Engineering, SUSTech, Shenzhen, China. Changfei Fu and Wenjun Xu are also with the Peng Cheng National Laboratory, Shenzhen, China. Weinan Chen is with the Biomimetic and Intelligent Robotics Lab, Guangdong University of Technology, Guangzhou, China. This work was supported by the Shenzhen Key Laboratory of Robotics and Computer Vision (ZDSYS20220330160557001).
Changfei Fu, Weinan Chen, Wenjun Xu, and Hong Zhang*
Received Month dd, yyyy; accepted Month dd, yyyy
================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
This paper presents FLAF, a focal line and feature-constrained active view planning method for tracking failure avoidance in feature-based visual navigation of mobile robots. Our FLAF-based visual navigation is built upon a feature-based visual teach and repeat (VT&R) framework, which supports many robotic applications by teaching a robot to navigate on various paths that cover a significant portion of daily autonomous navigation requirements. However, tracking failure in feature-based visual simultaneous localization and mapping (VSLAM) caused by textureless regions in human-made environments is still limiting VT&R to be adopted in the real world. To address this problem, the proposed view planner is integrated into a feature-based visual SLAM system to build up an active VT&R system that avoids tracking failure. In our system, a pan-tilt unit (PTU)-based active camera is mounted on the mobile robot. Using FLAF, the active camera-based VSLAM operates during the teaching phase to construct a complete path map and in the repeat phase to maintain stable localization. FLAF orients the robot toward more map points to avoid mapping failures during path learning and toward more feature-identifiable map points beneficial for localization while following the learned trajectory. Experiments in real scenarios demonstrate that FLAF outperforms the methods that do not consider feature-identifiability, and our active VT&R system performs well in complex environments by effectively dealing with low-texture regions.
VT&R, Active View Planning, Low Texture
§ INTRODUCTION
Learning to cruise a path while traversing it is a fundamental capability for mobile robots<cit.>. Considering that humans and vehicles mainly rely on various flexibly fixed paths to repeatedly shuttle between multiple locations, Teach and Repeat (T&R)<cit.> is an essential technique for robots to learn to navigate the paths that cover a major part of autonomous navigation requirements. This technique can support many robotic applications, such as household robots traveling between different rooms<cit.>, delivery robots taking goods from the logistics center to the target building<cit.>, and autonomous buses following a mostly fixed trajectory.
As a type of natural visual sensor, monocular cameras are cost-effective, energy-efficient, and versatile, making them suitable for a wide range of environments and applications. The T&R approaches that predominantly utilize visual sensors are referred to as Visual Teach and Repeat (VT&R)<cit.>, which is a significant motivation of the research in visual simultaneous localization and mapping (VSLAM)<cit.>. Our active VT&R system integrates a pan-tilt unit (PTU)-based active camera with feature-based VSLAM (see Fig. <ref>). During the teaching phase, the mobile robot reconstructs the surrounding landmarks while traversing a path under guidance<cit.>. In the repeat phase, the previously saved path map is reloaded to localize the robot for navigating the taught trajectory.
Although feature-based VSLAM systems<cit.> achieve impressive robustness and stability in large indoor and outdoor environments<cit.>, they all suffer from the view angle-dependent affine change<cit.> of features, and the maps built for real-time localization are usually too sparse for other complex applications<cit.>. Fortunately, features with limited view-angle invariance and a sparse map for reliable localization are enough for path following in VT&R. In this work, we achieved a high success rate and reliability in passive VT&R (using a fixed camera) with feature-based monocular VSLAM. However, textureless regions in human-made environments still prevent VSLAM and VT&R from being used in the real world. To solve this problem, existing active SLAM works mostly choose to change the robot trajectory, which is not suitable for VT&R<cit.>. In order to use an active camera to actively select informative views without interfering with the T&R trajectory, our work focuses on designing an active view planning method with active camera-based VSLAM suitable for VT&R.
Our motivation is to design a feature-based VT&R system coupled with active view planning for tracking failure avoidance. Previous active camera-based VSLAM systems<cit.> have primarily focused on maintaining stable localization during the mapping process but have not demonstrated how to effectively reuse the map with active view planning for navigation tasks. The key difference between mapping and reusing the map is that there are more choices of map points when the map is complete during navigation, compared to the mapping process when the active camera has no choice but to orient to the newly built map points. As shown in Fig. <ref>, existing methods<cit.> that rotate the active camera to fixate on more map points that are beneficial for tracking may fail in some VT&R cases because they do not consider the feature identifiability<cit.>, which refers to the ability of the feature detector to identify a 3D map point. While the robot is moving forward in the direction indicated by the arrows (see Fig. 2), existing view planning methods mostly focus the active camera on the regions dense with map points. In many situations, these map points targeted by the active camera are triangulated by earlier keyframes taken from viewpoints significantly different from the current one, making them unidentifiable by the feature algorithms due to their substantially different visual appearances. To address this, we have designed an active camera-based VT&R system featuring an innovative active view planning method that accounts for the view angles of map points. Our FLAF-based active view planning is focal line-centric, as its direction dictates the angles of the active camera.
In this paper we present a visual navigation system with the PTU and a novel active view planning method for feature-based VT&R. The contributions of our work are summarized as follows: (1) We combine VSLAM, motion control in repeat, and feature-based active view planning to overcome the low-texture problem. To the best of our knowledge, this is the first demonstration of VT&R system coupled with active view planning. (2) In real-scene experiments of VT&R, we find the active view planner should be redesigned in the repeat phase because of the unidentifiable points in the already existing complete map. We identify that the focal line vector is key to evaluating the angles of a rotatable camera. (3) We propose an online focal line and feature (FLAF)-constrained active view planning method that rewards the camera orientations that observe more feature identifiable map points at their mean-view directions and is suitable for both phases of VT&R.
§ RELATED WORK
Our active VT&R is based on VSLAM system tightly coupled with active view planning for tracking failure avoidance.
§.§ Visual Teach and Repeat
The classical work of VT&R<cit.> using a feature-based stereo VSLAM was later developed into VT&R2<cit.>, which utilizes multiple taught experiences to address environmental appearance change. Despite the significant progress, these works all suffer from tracking failures caused by low-texture regions<cit.>, as they all rely on fixed cameras and feature-based VSLAM. Based on the well-established VT&R2<cit.>, Warren et al.<cit.> build a gimbal-stabilized VT&R system in which the gimbal is passively utilized to stabilize the camera or manually steered to avoid degeneracy in the teach phase. In the repeat phase, the camera actively rotates to the nearest keyframe in the taught graph. Although this work justified the necessity of an active camera in VT&R, the gimbal is manually steered in the teach phase instead of operating autonomously. Previous works designed various active view planning methods<cit.> for active camera-based VSLAM. However, the recently proposed uncertainty-driven view planning (UDVP)<cit.>, which rotates the camera toward more nearby map points, easily fails in the repeat phase because it ignores the affine change of features<cit.> (Fig. <ref>). To solve these problems, we implement the same autonomous view planning method (FLAF) in both phases of VT&R.
Faced with the aforementioned tracking failure and occlusion, Mattamala et al. <cit.> designed a VT&R system allowing a quadruped to switch between multiple cameras mounted at different positions on the robot. However, each of the cameras faces the same problem of tracking failure. Every camera builds several sub-maps whose completeness and consistency are difficult to guarantee during sub-map merging without properly handling the challenging views. Our proposed active VSLAM can help each of the multiple cameras in<cit.> to improve the performance of its system. Meanwhile, our VT&R system is more natural and compact and uses fewer computing resources.
§.§ Visual Simultaneous Localization and Mapping
Our VT&R system consists of a 3D reconstruction module for the mobile robot to remember a traversed path. In the teach phase, the robot learns the landmarks in feature-based VSLAM. With the input of an image set that shares observations of the environment, the task of estimating camera motions and a geometrical reconstruction is called Structure From Motion (SfM)<cit.>. For a mobile robot with a moving video camera, a system that performs SfM for every image as it is captured is called real-time SfM or VSLAM. Specifically, our proposed active view planning method is designed according to the local map built by VSLAM systems<cit.>.
The most popular implementations of VSLAM<cit.> follow the principle of aligning the current image to the already-built map for camera localization. The data association between current frame and keyframes and bundle adjustment (BA) are conducted locally and globally to obtain a consistent map<cit.>. To use time-consuming bundle adjustment in SLAM for optimizing a consistent map, a hierarchical optimization strategy is proposed with the concept of window and local bundle adjustment<cit.>.
In <cit.>, a sophisticated local map is designed to align the current frame and implement the local BA. Motion estimation by aligning the current image to the local map is called local map tracking<cit.>. Deng et al.<cit.> identify a particular tracking failure caused by the inability to associate enough features. In <cit.>, the authors also indicate that the probability of tracking failure approaches zero if the number of associated map points exceeds a certain level.
§.§ Active View Planning for VSLAM
The seminal work on active view planning for VSLAM is <cit.>, in which Davison et al. indicate that the solution for active vision in navigation is to use serial views of a succession of features for stable localization and to decide when to explore new features. Following this idea, <cit.> proposes to observe unexplored regions and reset the active camera if there are not enough features for localization. However, this strategy does not guarantee accurate localization because a view direction identical to the initial one no longer captures the same image once the robot has moved.
In active camera-based VSLAM, the primary problem before exploring new features is to maintain stable and accurate localization<cit.>. In <cit.>, the authors propose the UDVP method to measure the quality of the camera angles with respect to every map point. This UDVP model shows fine performance in capturing more existing map points in the Field of View (FoV) of the camera. However, as shown in Fig. <ref>, it does not consider feature identifiability in the repeat phase, which results in tracking failure from looking at unrecognizable map points. Our FLAF method uses the observation model shown in Fig. <ref>, which utilizes the angle between the camera's focal line and the line of sight to a map point to make the camera capture more built map points. Meanwhile, the angle between the imaging light path of the point and the normal line of the map point is measured to quantify the feature identifiability.
The idea to consider both the angles of α_1 and α_2 shown in Fig. 3 was first described in the VSLAM system of<cit.> for feature selection, which serves as the foundation for our system. This strategy is also utilized in the active SLAM method presented in <cit.>, which employs a fixed camera for both navigation and exploration. The model in <cit.> defines specific ranges of distance and α_2 to screen map points within the camera's frustum. The "FLAF without scoring" method in our comparison experiments can be seen as an implementation of active VT&R using the model from <cit.>, despite the differences in the robot and task. It is noteworthy that Mostegel et al.<cit.> identified several metrics for feature recognition and validated the effectiveness of using a cosine function to measure the feature recognition probability, which indicates the likelihood of feature identification from different view angles.
§ APPROACH
We build our VT&R system on an active camera-based VSLAM framework shown in Fig. <ref>, which integrates ORB-SLAM2<cit.> and our FLAF-constrained active view planning. Our method for active camera-based path following is similar to that described in <cit.>. During the construction of this navigation system, we observed frequent failures in the repeat phase when feature identifiability was not considered. The core idea of our formulation is to rotate the camera toward optimal directions relative to the map points, thereby improving the performance of the feature detector by taking into consideration the feature identifiability of these map points.
To increase the number of map points in the central area of the field of view, we designed a scoring function based on cos(α_1) to rotate the camera toward regions dense with map points. Each map point within the FoV contributes to the total of this scoring function. Map points with smaller α_1 values receive higher scores, which encourages the clustering of map points in the central area of the field of view. Consequently, camera poses that capture more map points with smaller α_1 in their FoV achieve higher scores. This strategy is similar to the UDVP model<cit.>, which uses a sigmoid function with higher computational complexity.
Another goal of our design is to score the map points with good feature identifiability relative to the current camera pose. Feature identifiability refers to the ability of the feature detector to recognize the map point from a specific position and orientation. With the path map built, the VT&R system can determine the existence of a map point. But if the robot rotates the camera toward this map point from an arbitrary position, there is a probability<cit.> that this map point may not be detected in the current image due to affine changes<cit.>. In the teach phase, only those map points observed and associated with several different keyframes are retained in the map. We assume the keyframes that observe the same map point define a view distribution within which this map point can be identified. The mean viewing direction is considered the normal of the map point, around which the view distribution is defined. This definition of the map point normal was also used in <cit.>. Regarding the treatment of the normal of a feature, Mostegel et al.<cit.> justified using cosα_2 as the metric of the probability of feature identification to account for the observation of a feature from different viewpoints and view angles. Our scoring function, cos(α_1)·cos(α_2), multiplies these two cosine functions to prioritize map points that score highly on both metrics.
§.§ VSLAM with Active Camera
While the mobile robot is moving on a path, images with equally spaced timestamps are sequentially captured by the moving camera and sent to the VSLAM system. For every input image 𝐈_k:ℝ^2→ℝ, ORB features<cit.> are extracted and sent to the local-map tracking module, in which features are aligned with the local map to estimate the current camera pose 𝐗_k,w∈SE(3) in world coordinates. This process of computing a camera pose according to the local map is called local map tracking. If enough new features are found, this frame is selected as a keyframe and these features are triangulated into the map based on data association between local keyframes. This process of projecting new keyframes and features into the map space (world coordinate) is called local mapping.
A local map including a network of keyframes {F_i} connected according to feature matching and their associated map points {𝐏_j^i|𝐏_j^i∈ℝ^3,j=0,1,2,...,m_i} is denoted by:
M_l={𝐏_0^i,𝐏_1^i,...,𝐏_m_i^i,F_i|i=0,1,2,...,n_i}
where m_i is the number of map points associated with the i-th keyframe and n_i is the number of keyframes in the local map. A map point only refers to a 3D coordinate in map space, but its descriptor can be computed from the keyframe by which the point is triangulated. According to the descriptor matching, keyframes share observations with 𝐈_k to make up the local map with their associated map points.
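For concreteness, the bookkeeping behind this local map can be sketched with the following illustrative data structures (field names are ours, not those of the underlying VSLAM implementation):

from dataclasses import dataclass, field
import numpy as np

@dataclass
class MapPoint:
    position: np.ndarray    # P in world coordinates, shape (3,)
    normal: np.ndarray      # mean viewing direction n_p, unit vector
    descriptor: np.ndarray  # ORB descriptor from a triangulating keyframe
    observations: list = field(default_factory=list)  # ids of observing keyframes

@dataclass
class Keyframe:
    pose: np.ndarray        # X in SE(3) as a 4x4 homogeneous matrix
    timestamp: float
    pan_tilt: tuple         # PTU angles q = (pan, tilt) at capture time
    points: list = field(default_factory=list)        # associated map point ids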
For any map point 𝐏_j^i in M_l that lies within the visible frustum of the current camera pose and can be matched with an ORB feature at 𝐩_j^i∈ℝ^2 on 𝐈_k, we reproject it onto 𝐈_k by the pinhole camera model π:ℝ^3→ℝ^2 and get the reprojected coordinate π(𝐏_j^i)∈ℝ^2. Because of mapping and localization uncertainty, there exists an error 𝐞_i,j∈ℝ^2 for the j-th map point in the i-th keyframe:
𝐞_i,j=𝐩_j^i-π(𝐏_j^i)
However, not all points of M_l are within the visible frustum of the camera or can be matched with an ORB feature in 𝐈_k. We collect the map points 𝐏_j^i in M_l that can be identified by the current view as a set S={𝐏_j^i|𝐏_j^i can be identified by 𝐈_k}. Then we conduct local map tracking by minimizing the cost function<cit.>:
𝐗_k^* = argmin_𝐗_k∑_i,j: 𝐏_j^i∈ S𝐞_i,j^𝖳Ω_i,j^-1𝐞_i,j
where 𝐗_k^*∈SE(3) represents the pose estimation of 𝐈_k. The magnitude of map points within the visible frustum of the camera that can be matched with an ORB feature in 𝐈_k is represented as:
N_S=f(𝐗_k,M_l)
to indicate that N_S is a function of current camera pose and the local map.
According to the results in <cit.>, the pose estimation shown by Equation (<ref>) fails if the cardinality of S is below a threshold. Therefore, <cit.> proposes a view planner to increase N_S. To realize this idea in our sampling-based optimization, the pan-tilt angles of the PTU are sampled as 𝐪=(pan,tilt) and transformed into 𝐓_pt(𝐪)∈SE(3) to rotate the camera to 𝐗_k', which is directly evaluated:
𝐗_k' = 𝐓_pt(𝐪) ∗𝐗_k
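A minimal sketch of this transformation; the rotation axes depend on the PTU mounting, and the conventions below are assumptions of ours:

import numpy as np

def pan_tilt_to_se3(pan, tilt):
    # Pan as a rotation about the vertical (yaw) axis, tilt about the lateral
    # (pitch) axis, composed into a homogeneous transform with zero translation.
    cp, sp = np.cos(pan), np.sin(pan)
    ct, st = np.cos(tilt), np.sin(tilt)
    R_pan = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    R_tilt = np.array([[1.0, 0.0, 0.0], [0.0, ct, -st], [0.0, st, ct]])
    T = np.eye(4)
    T[:3, :3] = R_pan @ R_tilt
    return T

# Candidate camera pose for a PTU sample q, as in the equation above:
# X_candidate = pan_tilt_to_se3(*q) @ X_current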
§.§ Active View Planning for Feature-based VT&R
In the classical feature-based VSLAM<cit.>, on which we build our active camera-based SLAM system, the next frame is typically decided by the trajectory of the robot. To avoid low-texture views, we tightly couple passive VSLAM with a feature-based active view planning module. Specifically, we insert (Fig. <ref>) the active view planning module between the local map tracking and the local mapping because they are separated into different threads in implementation. After local map tracking, we have the local map and the current camera pose needed for active view planning. Based on the local map and the current camera pose, the next best angles of the PTU can be decided. Thus, the current partial perception results are used as feedback to control the sensor and the image input. This process reflects the concept called "active vision" in the robotics community.
Using the UDVP observation model<cit.>, we control the active camera to see more and closer map points. Given a map point in the local map, the UDVP<cit.> model rewards PTU angles with a smaller distance to it and a smaller view angle between the line of sight to it and the camera's focal line. In the teach phase, the active camera always looks at the just-built partial map. In this case, it is easy to extract ORB features and match them with those in past keyframes because the just-built map points conform to the above criteria of the UDVP model. Thus N_S = f(𝐗_k, M_l) is increased and the tracking failure probability is decreased compared to passive VSLAM. This effect can be represented by the inequality:
f(𝐓_pt𝐗_k, M_l) > f(𝐗_k, M_l)
However, in the repeat phase, a complete map of the taught path is built. Although there exist more local map points that are close to the optical center and have a small angle between the sight line OP and the focal line n_c (α_1 shown in Fig. <ref>), some of the points are not identified by the feature matcher because of a large angle between the line of sight and the mean view direction of the map point. This situation results in a higher failure rate in the repeat phase. This problem can be summarized as “looking at points visible but not identifiable makes the tracking fail”. We draw Fig. <ref> to explain this particular VT&R failure.
To solve this problem, we use the observation model proposed in <cit.> shown in Fig. <ref>. One distance and two angles are measured for every pan-tilt sample and map point to support our design principles:
* Rewarding the pan-tilt samples that place the map point within the distance range (d_1, d_2), which is the scale-invariance range of the image pyramid.
* Rewarding a smaller α_1(𝐓_pt𝐗_k, 𝐏_j^i), shown in Fig. <ref>, which refers to the angle between the camera's focal line n_c and the ray line OP of the point.
* Rewarding a smaller α_2(𝐗_k, 𝐏_j^i), shown in Fig. <ref>, which refers to the angle between the mean view line n_p and the ray line OP of the point.
According to these three principles, we evaluate every pan-tilt sample with M_l. First, we eliminate the points whose distance lies outside the range (d_1, d_2) or whose angle α_2 > 60^∘. The rest of the points in the local map make up a collection denoted S_r. For every map point 𝐏_j^i∈S_r, we calculate the score of a pan-tilt sample 𝐪 and optimize it by:
𝐪^* = argmax_𝐪∑_i,j: 𝐏_j^i∈S_rcos(α_1)cos(α_2)
The maximum of the cosine functions is achieved when both angles are 0, which indicates that the best view is obtained when the focal line of the camera, the line of sight, and the mean view direction of the map point coincide.
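A minimal sketch of this sampling-based optimization, assuming camera-to-world poses as 4x4 matrices, a camera looking along its +z axis, and map point normals n_p pointing from the observing cameras toward the point; the thresholds follow the text:

import numpy as np

def flaf_score(T_pt, X_cam, points, normals, d1, d2, alpha2_max_deg=60.0):
    # Score one pan-tilt sample against the local map, following the equation above:
    # the sum of cos(alpha_1) * cos(alpha_2) over the identifiable points in S_r.
    X = T_pt @ X_cam                              # candidate camera pose
    O = X[:3, 3]                                  # optical center
    n_c = X[:3, :3] @ np.array([0.0, 0.0, 1.0])   # focal line direction
    cos_max = np.cos(np.deg2rad(alpha2_max_deg))
    score = 0.0
    for P, n_p in zip(points, normals):
        ray = P - O
        d = np.linalg.norm(ray)
        if not (d1 < d < d2):                     # outside the scale-invariance range
            continue
        ray = ray / d
        cos_a1 = float(ray @ n_c)                 # angle between focal line and sight line OP
        cos_a2 = float(ray @ n_p)                 # angle between mean view line n_p and OP
        if cos_a1 <= 0.0 or cos_a2 < cos_max:     # behind the camera or alpha_2 > 60 degrees
            continue
        score += cos_a1 * cos_a2
    return score

# The planner evaluates flaf_score(pan_tilt_to_se3(*q), X_k, ...) for every sampled
# pair q = (pan, tilt) and picks the maximizer q*.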
§.§ Path Learning and Tracking with Active Camera
When the human guide finishes teaching the path, a complete map consisting of plenty of 3D map points and a graph of keyframes is saved by the system. The keyframes with timestamps and poses contain the trajectory information of the active camera on this path. In the repeat phase, the map points are used to construct a local map according to the current pose of the camera. Every time a keyframe is stored in memory, the PTU sample is read from the angle encoder and saved with the same timestamp. In summary, the mobile robot is taught with a feature map and a camera trajectory consisting of keyframes with timestamps and corresponding PTU angles. Then the robot trajectory is computed from the PTU samples and the camera trajectory by the inverse operation of Equation (<ref>):
𝐗_k = 𝐓_pt^𝖳𝐗_k'
When the mobile robot needs to navigate the learned path, the feature map built in the teach phase is loaded first. Then the PnP algorithm<cit.> is used to find the initial pose of the robot. Once the robot has found its position in the map, the active camera-based SLAM shown in Fig. <ref> is booted up to run local map tracking for the current pose 𝐗_k,w. Then we search the keyframe poses for the closest reference pose 𝐗_r,w∈SE(3) in front of the robot. Finally, the pose error between 𝐗_k,w and 𝐗_r,w is fed into a PD controller<cit.> C_pd to compute the current velocity ϕ_k:
ϕ_k = C_pd(𝐗_r,w-𝐗_k,w)
To speed up the search for a reference keyframe, we define a window of width 10 centered at the last reference keyframe. In the repeat phase, the computer keeps outputting the motion calculated by Equation (<ref>) to the wheels.
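One repeat-phase control step can then be sketched as follows; the PD gains and the planar (x, y, yaw) error parameterization are illustrative choices of ours:

import numpy as np

def repeat_step(X_cur, keyframes, last_ref, prev_err=None, window=10, kp=0.8, kd=0.1):
    # Search a window of width 10 centered at the last reference keyframe for the
    # taught pose closest to the current one.
    lo = max(0, last_ref - window // 2)
    hi = min(len(keyframes), last_ref + window // 2 + 1)
    ref = min(range(lo, hi),
              key=lambda i: np.linalg.norm(keyframes[i][:3, 3] - X_cur[:3, 3]))
    # Planar pose error of the reference expressed in the robot frame.
    rel = np.linalg.inv(X_cur) @ keyframes[ref]
    err = np.array([rel[0, 3], rel[1, 3], np.arctan2(rel[1, 0], rel[0, 0])])
    derr = err - prev_err if prev_err is not None else np.zeros(3)
    cmd = kp * err + kd * derr  # velocity command phi_k from the PD law
    return cmd, ref, err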
§ EXPERIMENTS AND DISCUSSION
In both phases of VT&R, the active camera runs automatically according to the current perception. In the teach phase, the robot traverses a path following a human guide and automatically learns the path using the active camera-based VSLAM. In the repeat phase, the robot is placed anywhere near the road in an orientation similar to the taught one. Once started, the robot processes the images input by the active camera to localize and control itself. It needs to be emphasized that the main purpose of our experiments is to demonstrate repeatable and successful VT&R on difficult paths with our system. As the trajectory errors in the repeat phase are all acceptable, we emphasize the completion-rate results, which indicate that only our FLAF-based active VT&R can finish all three paths. It is worth noting that the UDVP-based active VT&R always fails in the same position and for the same reason in our repeated experiments.
Experiments are performed on three paths to evaluate our active view planning method and VT&R system. The first two paths are in the effective range of the motion capture device in the Shenzhen Key Laboratory of Robotics and Computer Vision. Path three leads the robot from an indoor location to a reading space out of the laboratory and finally back to the start. Experiments on each of the three paths are performed 10 times for consistent results. All the data in Table I are the average results of 10 repeated experiments. The trajectories shown in Fig. 5 and the point graph shown in Fig. 7 are a representative selection, considering that the traditional methods always fail in the same situation. In Fig. 6, we demonstrate the process of experiments on the most difficult path connecting different rooms.
We compare our FLAF-based active view planning with three other methods on the task of VT&R. "Passive VT&R" is achieved with our VT&R system without the PTU and active view planning. The "UDVP-based active VT&R" is a reproduction of the observation model proposed in <cit.> within our VT&R system. The only difference between FLAF and UDVP is the observation model. Equation (<ref>) is supported by the comparison of "FLAF-based active" with "FLAF by counting", an earlier version of FLAF that we discarded.
§.§ Implementation and Experimental Setup
As shown in Figure <ref>(d), we fix an Intel Realsense D435 camera on an I-Quotient-Robotics PTU to make up our active camera. This active camera is mounted on a Clearpath-Jackal robot. Our active VT&R system runs in real-time on a notebook with an Intel i7 (2.3GHz) CPU and responds to the images exactly at the frame rate of 20Hz.
Ground truths of paths one and two are obtained from motion capture in the teach phase. The ground truth of Path 3 is built by SfM, as there is no motion capture outside the laboratory. The repeat trajectories of all methods are compared with the ground truths and plotted in Fig. 5. In Table I, the CR data indicate the VT&R completion rate of each view planning method, and the time data indicate the mean time used by the view planning methods implemented by sampling-based optimization. The efficacy of VSLAM is decreased by the relatively low speed of view planning compared to the SLAM speed of 20-30Hz. The translational RMSE of trajectories is computed using evo<cit.> by comparing the repeat trajectory with the taught one based on timestamps, resulting in a greater error than the physical truth. The trajectory consistencies between repeat and teach shown in Fig. <ref> are similar and all acceptable.
On paths one and two, we demonstrate the efficacy of our VT&R system in both an active and a passive way. On path three (Fig. <ref>), we show a challenging case with low-texture regions where passive VT&R fails and active VT&R succeeds. Additionally, our FLAF model is verified on all three paths to outperform the existing UDVP in repeating a complete path. FLAF by counting refers to counting the map points in a range defined by FLAF instead of grading the points by the product of the cosine functions shown in Equation (<ref>).
§.§ Tracking Failure Avoidance Validation
As shown in Table <ref> and Fig. <ref>, the passive VT&R achieves stable and accurate performance on the first two paths but fails in the teach phase on path three. The few low-texture regions on paths one and two are avoided by a careful human guide. Our active VT&R system with active view planning succeeds in the teach phase on all three paths.
Fig. <ref> demonstrates the situations where low-texture regions are overcome by our active VT&R with an active camera that automatically captures informative regions to maintain a stable localization. At position one, the active camera automatically looks at the poster in the upper left to avoid the white wall. At position two, the active camera looks up at the ceiling to maintain the localization relying on the square lamps. The robot looks toward the upper right at position three to focus on the logo while passing through a low-texture corner. Finally, at position four, the robot looks toward the upper left at the scroll on the door for abundant features.
§.§ Active View Planning Method Evaluation
Active SLAM with different observation models typically succeeds in the teach phase on all paths because it mostly controls the robot to see the partial map just built. This is achieved by looking at the directions that have more mapped features. However, existing methods such as UDVP<cit.> ignore the identifiability of the points by the feature extractor and matcher. In the mapping phase, active SLAM has no choice but to see the just-mapped points because no other feature points have been triangulated yet. However, in the repeat phase, some of the map points (3D coordinates) cannot be identified by VSLAM from a view angle that differs greatly from the view direction in which they were triangulated.
Our FLAF-constrained active view planning outperforms the UDVP model in repeating a complete path because of our consideration of the affine change of the feature points. The map point's normal line (𝐧_p) shown in Fig. <ref> limits the orientation of the active camera to the view angle-invariance range of the feature. Table I shows that our method repeats a more complete trajectory on all paths, especially in some challenging cases. Although on path three UDVP achieves more accurate path tracking, the trajectory errors are all negligible for VT&R. The trajectories shown in Fig. <ref> also verify that the repeat consistency of the different methods is good, which means the repeat trajectories always fit the taught one. Part of the errors is caused by the time-wise point-to-point comparison: the timestamps are difficult to align, so we simply rescaled them to be uniformly spaced over the same time duration. Initially, we implemented our FLAF model by counting the map points in the FoV of a PTU sample that accord with FLAF, which runs faster. Only the final version of our FLAF-constrained active VT&R successfully finishes all three paths.
§.§ Map Points Association Validation
In <cit.>, the authors present a point graph showing the relationship between the probability of tracking failure and the number of observed map points. That work <cit.> indicates that the probability of tracking failure approaches zero if more map points than a certain threshold are observed. Our work further indicates that tracking failure is caused by failed local map tracking. We use the analysis method proposed in <cit.> to study the state of map point association in the repeat phase. As shown in Fig. <ref>, we recorded the number of matched points in local map tracking and drew line graphs for the two methods on all three paths.
On path one, the active camera-based VSLAM with FLAF initially matches fewer points than with UDVP. However, the VSLAM with UDVP fails to track the path after a reduction in matched points. On the contrary, the VSLAM with our proposed FLAF-based method initially tracks fewer points of the local map but maintains stable localization and an increasing number of matches. On path two, the active visual repeat with these two view planning methods tracks a similar number of map points, but our method recovers the localization after a challenging decline where the UDVP-based method fails.
On path three, a similar case occurs: the UDVP method loses localization after a reduction in matched map points. In the same challenging location, our FLAF-based method finishes repeating the path by adjusting the active camera toward the direction with more identifiable map points. Fig. <ref> also shows that the UDVP method steers the active camera toward the direction with more local map points without considering their feature identifiability. In other words, although these points are in the FoV of the camera, they cannot be identified and matched by the feature extractor and matcher because the affine change of the points is ignored.
§ CONCLUSION
§.§ Conclusions
In this research, we present a novel active view planning method for VT&R to solve the tracking failure caused by low-texture regions. Our passive VT&R system is built on feature-based VSLAM, and its main problem is tracking failure caused by low-texture regions. Existing feature-based active VT&R systems use a gimbal-stabilized camera to improve stability, but the gimbal is typically steered manually in the teach phase instead of using active view planning. Previous works propose active view planning methods for VSLAM; however, they have not demonstrated how to use these methods to help VT&R with an active camera.
We aim to use an active camera with active view planning to build an active VSLAM system that can support our active VT&R system. In our tests, a recent active view planning method for VSLAM succeeds in the teach phase but easily fails in the repeat phase. This method steers the camera toward directions with more points close to the camera; however, it fails to track the local map when the closer points are not identifiable by the ORB feature extractor and matcher. Therefore, we design a focal line and feature (FLAF)-constrained active view planning method that takes into account the affine change of feature points.
According to our experimental results, passive VT&R fails in low-texture regions. Active camera-based VSLAM with an existing view planning method easily fails in the repeat phase because it does not account for the view angle-dependent affine change of the features. With our proposed FLAF model, the novel active view planning method successfully helps our VT&R system finish all three paths. Our experimental results also demonstrate the process of overcoming low-texture regions with an active camera. Finally, we analyze the cause of tracking failure through three graphs that show the relationship between the number of matched points and time.
Our most important contribution is the demonstration of active camera-based VT&R, focused on solving the tracking failure in VT&R caused by low-texture regions. In the future, we will address the dilemma between exploitation of the existing map and exploration of the unknown environment.
b1 Paul Furgale, and Timothy D Barfoot, “Visual teach and repeat for long-range rover autonomy,” Journal of Field Robotics, 2010.
b2 Michael Paton, Kirk MacTavish, Michael Warren, Timothy D Barfoot, “Bridging the appearance gap: Multi-experience localization for long-term visual teach and repeat.” In IEEE/RSJ Int. Conf. on Intell. Robots and Systems, 2016.
b3 L. Peterson, D. Austin, and D. Kragic, "High-level control of a mobile manipulator for door opening," In IEEE/RSJ Int. Conf. on Intell. Robots and Systems, 2000.
b4 Mohit Mehndiratta and Erdal Kayacan, "A constrained instantaneous learning approach for aerial package delivery robots: onboard implementation and experimental results," Autonomous Robots, 2019.
b5 Guillaume Bresson, Zayed Alsayed, Li Yu, and Sébastien Glaser, “Simultaneous Localization and Mapping: A Survey of Current Trends in Autonomous Driving,” IEEE Trans. Intell. Vehicles, 2017.
b6 H Ye, G Chen, W Chen, L He, Y Guan, and H Zhang, "Mapping While Following: 2D LiDAR SLAM in Indoor Dynamic Environments with a Person Tracker," In IEEE Int. Conf. on Robot. and Biomimetics, 2021.
b7 Jakob Engel, Vladlen Koltun, and Daniel Cremers. "Direct sparse odometry." IEEE Trans. Pattern Anal. Mach. Intell, 2017.
b8 Christian Forster, Matia Pizzoli, and Davide Scaramuzza. "SVO: Fast semi-direct monocular visual odometry." In IEEE Int. Conf. on Robotics and Automation, 2014.
b9 R. Mur-Artal, J. M. M. Montiel, and J. D. Tardos, “Orb-slam: a versatile and accurate monocular slam system,” IEEE Trans. Robotics, 2015.
b10 David G Lowe. "Distinctive image features from scale-invariant keypoints." International Journal of Computer Vision, 2004.
b11 Seyed Abbas Sadat, Kyle Chutskoff, Damir Jungic, Jens Wawerla and Richard Vaughan, "Feature-Rich Path Planning for Robust Navigation of MAVs with Mono-SLAM," In IEEE Int. Conf. on Robotics and Automation, 2014.
b12 Xinke Deng, Zixu Zhang, Avishai Sintov, Jing Huang, and Timothy Bretl, "Feature-constrained Active VSLAM for Mobile Robot Navigation," In IEEE Int. Conf. on Robotics and Automation, 2018.
b14 Matias Mattamala, Milad Ramezani, Marco Camurri, and Maurice Fallon. "Learning camera performance models for active multi-camera visual teach and repeat." In IEEE Int. Conf. on Robotics and Automation, 2021.
b15 Simone Frintrop, and Patric Jensfelt. "Attentional landmarks and active gaze control for VSLAM." IEEE Trans. Robotics, 2008.
b16 Andrew J. Davison and David W. Murray. "Mobile robot localisation using active vision." In European Conference on Computer Vision, 1998.
b17 Xu-Yang Dai, Qing-Hao Meng, and Sheng Jin. "Uncertainty-driven active view planning in feature-based monocular vSLAM." Applied Soft Computing, 2021.
b18 Michael Warren, Angela P. Schoellig, and Timothy D. Barfoot. "Level-headed: Evaluating gimbal-stabilised visual teach and repeat for improved localisation performance." In IEEE Int. Conf. on Robotics and Automation, 2018.
b19 Hauke Strasdat, José MM Montiel, and Andrew J. Davison. "VSLAM: why filter?" Image and Vision Computing, 2012.
b20 Weinan Chen, Changfei Fu, Lei Zhu, Shing-Yan Loo, and Hong Zhang. “Rumination Meets VSLAM: You Do Not Need to Build All the Submaps in Realtime.” IEEE Trans. Industrial Electronics, 2023.
b21 Etienne Mouragnon, Maxime Lhuillier, Michel Dhome, Fabien Dekeyser, and Patrick Sayd." Real time localization and 3d reconstruction." In IEEE Int. Conf. on Computer Vision and Pattern Recognition, 2006.
b22 Ethan Rublee, Vincent Rabaud, Kurt Konolige, and Gary Bradski. "ORB: An efficient alternative to SIFT or SURF." In IEEE Int. Conf. on Computer Vision, 2011.
b23 Vincent Lepetit, Francesc Moreno-Noguer, and Pascal Fua. "EPnP: An accurate O(n) solution to the PnP problem." International Journal of Computer Vision, 2009.
b24 Weinan Chen, Lei Zhu, Xubin Lin, Li He, Yisheng Guan, and Hong Zhang. "Dynamic strategy of keyframe selection with pd controller for vslam systems." IEEE/ASME Trans. Mechatronics, 2021.
b25 Michael Grupp. "evo: Python package for the evaluation of odometry and slam." 2017, Available: https://github.com/MichaelGrupp/evo.
b26 Christian Mostegel, Andreas Wendel, and Horst Bischof. "Active monocular localization: Towards autonomous monocular exploration for multirotor mavs." In IEEE Int. Conf on Robotics and Automation, 2014.
|
http://arxiv.org/abs/2409.02680v1 | 20240904130803 | A Low-Cost Real-Time Spiking System for Obstacle Detection based on Ultrasonic Sensors and Rate Coding | [
"Alvaro Ayuso-Martinez",
"Daniel Casanueva-Morato",
"Juan Pedro Dominguez-Morales",
"Angel Jimenez-Fernandez",
"Gabriel Jimenez-Moreno"
] | cs.RO | [
"cs.RO",
"cs.NE"
] |
Alvaro Ayuso-Martinez^1 ([email protected])
Daniel Casanueva-Morato^1 ([email protected])
Juan Pedro Dominguez-Morales^1 ([email protected])
Angel Jimenez-Fernandez^1 ([email protected])
Gabriel Jimenez-Moreno^1 ([email protected])
Fernando Perez-Peña^2 ([email protected])
^1 Computer Architecture and Technology, Universidad de Sevilla, Av. de la Reina Mercedes, s/n, Sevilla, 41012, Andalucia, Spain
^2 Automation, Electronics, Architecture and Computer Networks, Universidad de Cádiz, Av. Universidad de Cádiz, 10, Puerto Real, 11519, Andalucia, Spain
Since the advent of mobile robots, obstacle detection has been a topic of great interest. It has also been a subject of study in neuroscience, where flying insects and bats could be considered two of the most interesting cases in terms of vision-based and sound-based mechanisms for obstacle detection, respectively. Currently, many studies focus on vision-based obstacle detection, but not many can be found regarding sound-based obstacle detection. This work focuses on the latter approach, which also makes use of a Spiking Neural Network to exploit the advantages of these architectures and achieve an approach closer to biology. The complete system was tested through a series of experiments that confirm the validity of the spiking architecture for obstacle detection. It is empirically demonstrated that, when the distance between the robot and the obstacle decreases, the output firing rate of the system increases in response as expected, and vice versa; hence, proximity and firing rate are directly coupled. Furthermore, there is a distance threshold between detectable and undetectable objects, which is also empirically measured in this work. An in-depth study of how this system works at a low level, based on the Inter-Spike Interval concept, was performed, which may be useful in the future development of applications based on spiking filters.
A Low-Cost Real-Time Spiking System for Obstacle Detection based on Ultrasonic Sensors and Rate Coding
September 9, 2024
======================================================================================================
§ INTRODUCTION
Obstacle detection and avoidance have been topics of interest in the field of robotics since the advent of mobile robots more than fifty years ago <cit.>. In autonomous navigation, two common problems arise: first, when the aim of the robot is to reach an end point starting from an initial point, it has to find an optimal way to avoid the obstacles that may exist on its path, a task known as path planning. Second, mobile robots always have to deal with the appearance of unexpected obstacles that may cross their path, something that in real applications is essential to handle in order to guarantee the safety of the robot and, in the case of that robot being a vehicle, also of its passengers.
The task of detecting obstacles is not an easy one. Its accuracy usually depends on the shape of the obstacle to be detected and, as mentioned in <cit.>, it also involves the sensor characteristics and their known problems, as well as environmental conditions. In <cit.>, the methods most commonly used for obstacle detection in intelligent ground vehicles are also collected and compared according to relevant characteristics such as detection range, robustness and cost, mentioning the main problems encountered with each of them.
The sensors most commonly used for this task can be grouped into four types: SONAR, LIDAR, RADAR and cameras. Most vehicles do not use a single type of sensor, but several of them. This sensory fusion makes it possible to solve or mitigate the known problems of each of the sensors used and to exploit new advantages, in exchange for a certain added computational cost. Although there are many examples of this sensory fusion, one of the most interesting is the following: one of the major problems of LIDAR arises when the object to be detected is translucent, since the emitted light is not reflected as it is by opaque objects. This problem does not exist in SONAR, because it uses sound waves for sensing. Thus, it could be said that the two complement each other to form a fairly robust obstacle detection method. Accordingly, there are many works in which multiple sensors are used for obstacle detection <cit.>.
From a biological point of view, it is obstacle detection and collision avoidance that allow animals to navigate complex environments, which is necessary to perform other vital tasks such as foraging for food or escaping from a predator. Flying insects are the most studied animals for understanding how these functions are performed in biology, due to their high precision and speed in avoiding obstacles, even at night or in poorly lit environments. Although active sensors are generally used in robotics to carry out these tasks, these flying insects actually perform them using mainly vision <cit.>, something that can be extended to mammals and other animals. In <cit.>, it is discussed, without going into much biological detail, how the human visual system is able to focus attention on regions of interest in a visual scene as a function of the objects recognized in that region, which is directly related to the task of obstacle detection, and how mammals are able to navigate complex environments. Other works provide deeper neuroscientific knowledge about how navigation occurs in mammals <cit.>.
However, although it might seem that the problem of navigating and avoiding objects in the environment is already solved thanks to vision, this is not always the case. When light conditions are particularly poor, vision is no longer an option and animals have to resort to other mechanisms to perform this task. The best known of these consists of emitting high-frequency sounds, also known as ultrasounds, and measuring the time it takes for the echo to return: an object is closer to the emitter the shorter this time is. This mechanism, commonly referred to as echolocation, is used by many animal species, of which bats, dolphins and toothed whales are just some of the most interesting examples. These ultrasounds can also be used to extract extra information; for example, it is known that some types of bats use them to classify insects based on frequency patterns in the echoes <cit.>. Some other works study how bats are able to produce ultrasounds and process their echoes <cit.>.
In recent years, many works have focused on vision to design neuromorphic applications, i.e., bio-inspired applications with the aim of mimicking the biological behavior of animals <cit.>, in which the obstacle detection task is performed using bio-inspired vision sensors <cit.>. Thus, vision seems to be the preferred sense for the design of bio-inspired systems and algorithms for obstacle detection, while sound-based systems usually focus on other tasks such as sound classification and localization or speech recognition <cit.>.
However, it is also possible to find some works in the literature that focus on sound to perform this task. The most interesting is <cit.>, which presents an SNN capable of performing the obstacle detection task on a mobile robot using two range sensors, which could be ultrasonic sensors. That paper highlights the implementation of a spiking application that allows the mobile robot to navigate autonomously; however, the obstacle detection task is not performed in a purely spiking manner, since it is based on the digital comparison of two values (the current distance and the threshold distance), which triggers the activation of a specific sensory neuron in the network.
On the other hand, the implementation of an application purely based on SNN for obstacle detection is quite interesting in order to exploit the advantages of this bio-inspired paradigm, mainly low power consumption and high real-time capability. Real-time capability is a critical point in the development of robotic applications, since it determines how fast a robot is able to interact with its environment in a deterministic way. Thus, greater real-time capability translates into a greater ability to react to unexpected obstacles appearing in a robot's path.
In this work, a purely spike-based obstacle detection system is proposed, studied and implemented. This system focuses on encoding digital information into spiking information and processing it for the development of an obstacle detection application in the field of robotics.
Ultrasonic sensors are used in this case mainly for two reasons: firstly, ultrasonic sensors provide a very cheap and simple alternative for the development of robotic applications; secondly, this type of sensor allows experiments to be carried out with a certain similarity to how echolocation occurs in animals, as explained above, which may be particularly useful for future work to achieve a more bio-inspired approach.
The main contributions of this work include the following:
* Development of a purely SNN-based obstacle detection system
* Low-level analysis of the implemented SNN and the encoded information
* The code used in this work is publicly-available in a GitHub repository and has been released under a GPL license[<https://github.com/alvayus/spiking_rtod>]
The rest of the paper is structured as follows: in Section <ref> information regarding software and hardware materials used in this work is given; Section <ref> explains how these materials are interconnected in the global system architecture to perform obstacle detection; in Section <ref> the design of the implemented SNN is shown and explained; Section <ref> details different experiments carried out to test the performance of the implemented system; in Section <ref> the results obtained from experimentation and high-level details are discussed in depth; finally, the conclusions of the work are presented in Section <ref>.
§ MATERIALS AND METHODS
This section presents the hardware and software components used for the development of the complete system. The most relevant details of each of them are shown in the following subsections.
§.§ Robotic platform
In this work, a robotic platform consisting of different elements was used, including a Romeo BLE control board, an Adafruit HUZZAH32 board and an HC-SR04 ultrasonic sensor.
The Romeo BLE board from DFRobot is defined as an Arduino-based all-in-one control board specially designed for robotics, which stands out for the possibility of being programmed as if it were an Arduino Uno board and for the integration of Bluetooth 4.0. However, in order to use the latter feature, a special USB adapter, called USB Bluno Link, is required.
Since this adapter was not available, this advantage could not be exploited in this work, and a board for wireless data communication was added to the system. This board is the HUZZAH32 from Adafruit, an ESP32-based board that supports both Bluetooth technology (both classic and BLE) and WiFi and is also programmable via the Arduino IDE, although thanks to external libraries.
Finally, the HC-SR04 ultrasonic sensor is used to measure the distance to the nearest object within the measurement range, which is used to determine whether it is close enough to be considered an obstacle. This sensor is a cheap alternative for ultrasonic distance measurement, theoretically measuring distances between 2 cm and 450 cm with an accuracy in the order of millimeters (±3 mm). It uses two transducers: one emits 8 pulses at a frequency of 40 kHz, and the other receives the echoes produced by these pulses. The sensor measures the time elapsed between the emission of these pulses and the reception of their echoes, which is subsequently converted to distance using the speed of sound propagation.
The robot is equipped with two different power supplies: one powers the motors that drive the wheels and the Romeo BLE board, and the other powers the Adafruit HUZZAH32 board.
§.§ SpiNNaker
SpiNNaker is a massively-parallel multi-core computing system that was designed to allow modeling very large SNN in real time and whose interconnected architecture is inspired by the connectivity characteristics of the mammalian brain <cit.>.
In this work, a SpiNN-3 machine has been used for the simulation of the SNN designed for this work, which can be found in Section <ref>. It has 4 chips, each of them having eighteen ARM968E-S cores operating at 200 MHz. The number of neurons that can be computed at the same time during a simulation is somewhat limited in this version, but it is more than enough for this work thanks to the low resource usage of the developed SNN. More details about this platform can be found in <cit.>.
§.§.§ Spiking Neural Networks
There are currently considered to be three different generations of ANN: classical ANN, DNN and Spiking Neural Networks (SNNs). Thus, SNN are considered the third generation of ANN. All ANN can be viewed as graphs in which the nodes and edges represent neurons and synapses, respectively. This structure based on artificial models of neurons and synapses, with a high level of abstraction in the case of classical ANN and DNN <cit.>, is inspired by the biological nervous system. Neuron models used in SNN are intended to be as close as possible to the functioning of the biological neurons that can be found in that system, and therefore these SNN are considered to be the closest type of ANN to their biological counterpart <cit.>.
One of the most important aspects of SNN is how information is transmitted through the neural network. In the biological nervous system, information is transmitted in the form of asynchronous electrical impulses called spikes, which are large peaks in the membrane potential of biological neurons that occur when the membrane potential exceeds a threshold potential. When these spikes occur in a neuron, they are propagated to all neurons connected to it through synapses, causing or not the generation of new spikes in the target neurons, and so on.
This behavior makes SNN more complex than the rest of ANN. However, encoding information in sparse, precisely timed spikes makes them more energy-efficient, which translates into a low computational cost and, therefore, a low power consumption. Some improvements in the hardware implementation of SNN, such as avoiding multiplications, processing spikes using shifts and sums, and transmitting only single bits of information instead of real numbers, allow real-time execution to be achieved <cit.>.
§.§.§ Live injector
SpiNNaker supports real-time spike injection, i.e., data injection during simulation using a special type of neuron called spike injector[<https://spinnakermanchester.github.io/2015.004.LittleRascal/InjectingDataRealTime.html>]. It is possible to tell this neuron when to emit a spike in real time. This mechanism is particularly interesting for the development of bio-inspired robotic applications in which it is necessary to convert the information obtained by using digital sensors to spikes so that the neural network can process it.
§.§ Software
The code of this work has been developed using Python for the computer and SpiNNaker, while Arduino has been used for the robotic platform. PyNN <cit.>, a Python package for the simulator-independent specification of neuronal network models, and sPyNNaker <cit.>, an additional Python package which is required to work with PyNN and the SpiNNaker hardware platform, are also used in the computer. Currently, PyNN supports NEURON <cit.>, NEST <cit.> and Brian <cit.> as neural network software simulators, as well as SpiNNaker <cit.> and BrainScaleS neuromorphic hardware systems. PyNN 0.9.6 and sPyNNaker 6.0.0 are used in this work.
Regarding the experiments carried out to check the correct functioning of the implemented system, Matplotlib 3.6.0 has been used to plot the figures.
§ SYSTEM DESCRIPTION
This section describes in depth how the materials presented in Section <ref> are interconnected to form the complete system. Thus, it is composed of three essential blocks, which are as follows: a robotic platform on which the different tests of the implemented obstacle detection system have been carried out, the SpiNNaker neuromorphic platform, which simulated the SNN used and whose design is detailed in Section <ref>, and a computer that handled the data exchange between both external platforms and which performs the rest of data computation. These three blocks are shown in the general diagram of the system, presented in Figure <ref>, which also shows how information is transmitted through it.
Figure <ref> presents the robotic platform used. The most important hardware elements have been highlighted with the letters A, B and C for easier identification in the image. A and B are a Romeo BLE control board and an Adafruit HUZZAH32, respectively. While the Romeo BLE board takes care of all the computing related to the robotic platform, the Adafruit HUZZAH32 is just a bridge for communication with the computer over UDP. Both have been programmed using Arduino. Finally, C corresponds to an ultrasonic sensor mounted on a servomotor that allows rotary movements. However, the servomotor was not used in this work, and the ultrasonic sensor always pointed in a fixed direction (forward).
Obstacle detection starts at this point. Initially, the robot is continuously reading the distance values thanks to the ultrasonic sensor. These values are transmitted from the Romeo BLE board to the Adafruit HUZZAH32 via the serial port, and then to the computer via UDP.
At this point, the computer is responsible for generating a spike train that changes according to the received values, which are the measurements obtained from the ultrasonic sensor. This spike train is sent to and processed by SpiNNaker, and then the result is returned via the computer to the Adafruit HUZZAH32 board in the form of UDP packets representing the spikes of the output neuron of the implemented SNN, whose design is also shown in Section <ref>. The spikes are then transmitted to the Romeo BLE via the Adafruit HUZZAH32 thanks to a simple wired protocol.
When these spikes arrive back to the robotic platform, the Romeo BLE is expected to execute the commands of a collision avoidance algorithm. Since collision avoidance is outside the scope of this work, it has been implemented by simply turning to the right until no obstacles are detected, which implies no spike reception for a period of 500 ms. This value could be reduced, although it is convenient to use relatively high times to allow the transmission of information, especially because of the delays produced by the use of WiFi.
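For clarity, the decision logic described above can be sketched as follows (a minimal Python sketch of the behavior; the actual implementation runs on the Romeo BLE as Arduino code, and spike_received(), move_forward() and turn_right() are placeholder names introduced here for illustration):

import time

NO_SPIKE_TIMEOUT = 0.5  # seconds without spikes before the obstacle is considered cleared

def control_loop(spike_received, move_forward, turn_right):
    # spike_received() polls for an output spike relayed from SpiNNaker
    last_spike = -float("inf")
    while True:
        if spike_received():
            last_spike = time.time()
        if time.time() - last_spike < NO_SPIKE_TIMEOUT:
            turn_right()      # obstacle detected: rotate until clear
        else:
            move_forward()    # path clear: keep moving forward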
Another important aspect to detail in this section is how the measurements are obtained with the ultrasonic sensor. Since the system is intended to be reliable in order to avoid false positives when detecting obstacles, and knowing that there are many cases in which erroneous measurements can be obtained due to the way ultrasonic sensing works, an algorithm has been implemented to increase reliability by means of measurement redundancy. This increases the computational and temporal cost of the system, but not enough to seriously affect its real-time capability. This algorithm, whose pseudocode is shown in Algorithm <ref>, is based on setting a number of measurements, maxHits, that must be produced redundantly, meaning that they lie within a margin of error around the first measurement, which is taken as the reference. If all the measurements are within this margin, the reference measurement is taken as correct and transmitted to the computer. Otherwise, i.e., when a measurement is found that is not within this margin, it is taken as the new reference measurement and the process is repeated for the next maxHits measurements.
In this work, we considered maxHits to be equal to 4 and a maximum error of 120 μs (approximately 2 cm) in the calculation of ultrasonic measurements. If maxHits were lower than 4, the speed at which effective measurements are obtained would increase, resulting in a small improvement of the real-time capability of the system, but the reliability of the resulting measurement would decrease. Increasing maxHits would have the opposite effect. On the other hand, the value set as the maximum error should be more than enough to obtain correct measurements, since the HC-SR04 ultrasonic sensor has a theoretical accuracy in the order of millimeters (±3 mm).
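Since Algorithm <ref> is central to the reliability of the measurements, a Python rendition of its logic is sketched below (read_tof() is a placeholder for the raw sensor reading; the values of maxHits and the error margin follow the ones given above):

MAX_HITS = 4        # redundant measurements required
MAX_ERROR = 120.0   # accepted deviation from the reference ToF (microseconds)

def reliable_tof(read_tof):
    # Returns a ToF value only once MAX_HITS consecutive readings fall
    # within MAX_ERROR of the reference (first) reading
    reference = read_tof()
    hits = 0
    while hits < MAX_HITS:
        sample = read_tof()
        if abs(sample - reference) <= MAX_ERROR:
            hits += 1                    # consistent with the reference
        else:
            reference, hits = sample, 0  # outlier: restart with a new reference
    return reference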
§ SPIKING NEURAL NETWORK
This section provides information on the structure of the SNN used, whose design is shown in Figure <ref>. This neural network is implemented by using two neurons: the live injector mentioned in Section <ref> and an output neuron, resulting in a low-latency model that requires the use of very few resources. The behavior of the output neuron is defined by the LIF neuron model. An excitatory synapse with a delay of 1 time step (1 ms in this case, since it is the default time step in sPyNNaker) is used to connect the live injector to the output neuron.
Both neurons in the SNN have very specific functions which must be studied in depth to understand how obstacle detection is performed in the system. Thus, a different subsection is dedicated to each of the neurons. While the live injector is in charge of generating a frequency-variable spike train from the digital information obtained from the ultrasonic sensor, the output neuron has the function of processing this spike train to be able to decide whether an object is too close (it is considered an obstacle) or not.
§.§ On how to encode digital information into spike trains
In a neuron, there are two basic states: one in which the neuron does not fire, i.e., the membrane potential is below the threshold potential, and one in which it does. Both states could be associated to Boolean values by the absence (0 or false) or existence (1 or true) of an output spike fired at a given time instant. Note that neurons should be in the first state most of the time, following the principle of low power consumption of SNN.
Both states are directly related to spikes, and then, they are important to understand how information is encoded when using SNN.
In <cit.>, several existing encoding methods in SNN are deeply detailed from a biological point of view. These methods are useful for converting digital information into spiking information, each of them having a series of advantages and disadvantages that make it more suitable for specific cases. For example, an array of neurons could be used to fire according to the binary encoding of a digital number, which would imply using one neuron for each bit in the binary encoding. In addition, it would be necessary to synchronize their output spikes to ensure that they are part of the same representation of the input number, and not that of an earlier or later encoded number. Thus, this method would be related to temporal coding. However, the number of neurons used could be reduced by using a single neuron whose firing rate encodes that number, which would be related to rate coding.
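As a toy illustration of this trade-off, the two encodings can be sketched as follows (illustrative Python only, unrelated to any specific library; the linear scaling in rate_coding_isi is an arbitrary choice for the example):

def binary_coding(value, n_bits):
    # One neuron per bit: neuron i fires iff bit i of value is set,
    # and all n_bits outputs must be synchronized
    return [bool(value >> i & 1) for i in range(n_bits)]

def rate_coding_isi(value, max_value, min_isi=0.001, max_isi=1.0):
    # A single neuron: the value is mapped onto an inter-spike interval
    # (here, larger values give longer ISIs, as for the ToF used below)
    frac = min(value, max_value) / max_value
    return min_isi + (max_isi - min_isi) * frac

print(binary_coding(5, 4))     # [True, False, True, False]
print(rate_coding_isi(5, 15))  # ISI encoding value 5 on a 0-15 scale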
Rate coding is used in this work to reduce the amount of resources (neurons and synapses) required, providing a low-cost solution for obstacle detection task using SNN. The live injector shown in Figure <ref> is responsible for encoding the information obtained by the ultrasonic sensor into spikes. In this way, every time a new data (the ToF, i.e., the time elapsed since an ultrasonic wave is sent from the ultrasonic sensor until it is received after bouncing off the obstacle) arrives to the computer, it is used to calculate the new ISI of the live injector, that refers to the time that must pass between two generated spikes. Simultaneously, the computer also calculates the actual time difference between when the last spike was generated (or fired) and the current time. When this value is greater than or equal to the calculated ISI, the live injector must fire again. The firing rate is the inverse of this actual time difference, and it should be approximately equal to the inverse of the calculated ISI. The ISI, in seconds, is calculated using the formula presented in Equation <ref>.
ISI = (x/5883)^2 + 0.001
In this formula, x refers to the ToF, in microseconds, provided by the ultrasonic sensor, and is manually capped at a maximum of 5883 μs, which is approximately the ToF of an ultrasonic wave that bounces at a distance of 100 cm from the ultrasonic sensor. Distances greater than 100 cm are limited to this value to reduce the range of distances to be considered in the formula. Note that a distance of 100 cm would not make the system fire. In this way, the first term of the equation delimits the calculated ISI to the range [0, 1] seconds. Having a delimited range of values is critical in the development of real-time robotic systems, since they must be inherently deterministic. This term is squared to provide greater differentiation of the calculated ISI, especially for high and intermediate values of x. In order to avoid unexpected behaviors of the system, a second term has been added to ensure that the minimum ISI is 1 ms, which is the simulation time step. Therefore, the calculations are bounded between 0.001 seconds and 1.001 seconds, corresponding to firing frequencies between 1000 Hz and approximately 1 Hz, respectively.
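On the computer side, the encoding described above reduces to the following loop (a Python sketch; the actual spike emission goes through sPyNNaker's live-injection interface, abstracted here as the placeholder emit_spike(), while get_tof() stands for the latest measurement received from the robot):

import time

def isi_from_tof(x_us):
    # Equation (<ref>): ISI in seconds from a ToF in microseconds,
    # with the ToF capped at 5883 us (about 100 cm)
    x_us = min(x_us, 5883.0)
    return (x_us / 5883.0) ** 2 + 0.001

def injection_loop(get_tof, emit_spike):
    # Fires the live injector whenever the time elapsed since the last
    # spike reaches the ISI computed from the latest ToF
    last_spike = time.time()
    while True:
        isi = isi_from_tof(get_tof())
        if time.time() - last_spike >= isi:
            emit_spike()          # rate coding: closer object -> faster firing
            last_spike = time.time()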
§.§ On how spike trains are processed by the system
Each of the spikes generated by the live injector, following the process explained in the previous subsection, is part of a frequency-variable spike train that propagates to the output neuron through an excitatory synapse, as shown in Figure <ref>. As already discussed, this output neuron has the function of detecting the presence of an obstacle based on that spike train. The complexity of this task is related to the adjustment of the distance from which an object can be considered as an obstacle, which will be called threshold distance from now on. To adjust this threshold distance and, in general, to process the input spike train, it is necessary to modify the parameters used for the output neuron. The idea was to make the output neuron more or less sensitive to input stimuli, so that all distances above that threshold distance have an associated firing rate in the spike train that is not sufficient to cause the output neuron to fire. Due to the nature of the application developed, in which a mobile robot must perform obstacle detection to avoid collisions, the initial goal in adjusting the neuron parameters was to achieve a threshold distance between 30 cm and 50 cm.
Table <ref> contains the parameters used for the output neuron. These parameters are almost the same as those of a default LIF neuron with fixed threshold and decaying-exponential post-synaptic current in PyNN[<https://neuralensemble.org/docs/PyNN/reference/neuronmodels.html#pyNN.standardmodels.cells.IF_curr_exp>], with three differences. These differences are relevant for several reasons, which can be explained through the equations of the LIF neuron model presented in sPyNNaker <cit.> and are as follows:
* tau_m has been increased from 20 ms to 100 ms. This decreases the absolute value of dV/dt, increasing the time it takes for the membrane potential to reach its resting potential again from an excited state, i.e., the duration of the repolarization and hyperpolarization phases. Given that each input stimulus produces an increase in membrane potential (depolarization), increasing this duration implies that a lower input firing rate is required to cause an overlap between the repolarization phase of one input stimulus and the depolarization phase produced by the next. Such an overlap would cause the depolarization phase to begin at a point where the membrane potential is above the resting potential. This overlap is key to understanding the behavior of the neural network and to understanding how obstacle detection is performed.
* tau_refrac has been set to 0 ms to allow the output neuron to fire each time step of the simulation. In this way, the neuron is able to react instantly to the input stimuli.
* v_thresh has been decreased from -50.0 mV to -59.5 mV. In line with what was explained above for tau_m, the decrease of the threshold potential could be considered as an adjustment to make it easier to cause the neuron to fire after the overlap of the repolarization and depolarization phases. While this overlap depends on tau_m and the frequency of the input stimuli, causing the neuron to fire by means of decreasing v_thresh also depends on the membrane potential increase produced by each of the input stimuli. This change is intended to increase the number of output spikes fired by the output neuron, which is of great interest to improve the real-time capability of the system.
To understand how this spike train processing works, an in-depth theoretical study of the relationship between the frequency of the input stimuli of a neuron and the output spikes fired by that neuron, as well as its membrane potential, has been carried out. This study focuses on the effect of the overlap between the repolarization and depolarization phases.
Figure <ref> shows an example in which the output neuron is provided with an input spike train containing three different firing rates. These are the following: 1) 1 Hz from 0 ms to 2000 ms. 2) 2 Hz from 2000 ms to 4000 ms. 3) 10 Hz from 4000 ms to 5000 ms.
The upper graph shows a comparison between the theoretical membrane potential of the output neuron using the equations presented in <cit.> and the empirical one obtained from the SpiNNaker hardware platforms after simulation. Note that both the theoretical and empirical membrane potentials are exactly the same since these equations are also used by SpiNNaker. The upper black line delimits the threshold potential, while the lower black line delimits the minimum potential that the neuron must have so that, when receiving an input spike, the neuron fires. The middle graph shows the synaptic currents induced in the output neuron in response of the arrival of input spikes.
It should be noted that, with an excitatory synapse of weight equal to 1 nA connecting the live injector to the output neuron, the synaptic current should in theory be increased by 1 nA each time an input spike arrives. However, as explained in <cit.>, SpiNNaker uses Euler's method to solve the differential equations of the LIF model and, in order to correct the intrinsic cumulative error of this solution, synaptic currents are decayed. Thus, there is a small difference from the theoretical value of these currents, and they have to be measured after simulation. In this way, the synaptic current induced by each input spike is approximately 0.9063 nA. Using this current value, the parameters of the output neuron shown in Table <ref> and the theoretical equations, it is possible to calculate the value of the minimum firing potential, which is approximately equal to -63.569 mV.
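This value can be cross-checked with a continuous-time estimate (a sketch based on the standard closed-form response of a LIF membrane to one decaying-exponential current pulse, using the parameters of Table <ref> and default PyNN values elsewhere; SpiNNaker's discrete-time solver yields the slightly different value quoted above):

import numpy as np

TAU_M, TAU_SYN, CM = 100.0, 5.0, 1.0   # ms, ms, nF
V_THRESH, I0 = -59.5, 0.9063           # mV, nA per input spike

# U(t) = (I0/CM) * k * (exp(-t/TAU_M) - exp(-t/TAU_SYN)), with
# k = 1 / (1/TAU_SYN - 1/TAU_M), is the deflection above rest
k = 1.0 / (1.0 / TAU_SYN - 1.0 / TAU_M)
t_peak = k * np.log(TAU_M / TAU_SYN)   # time of the maximum potential
dv_peak = (I0 / CM) * k * (np.exp(-t_peak / TAU_M) - np.exp(-t_peak / TAU_SYN))
print(t_peak, dv_peak)                 # ~15.8 ms, ~3.9 mV
print(V_THRESH - dv_peak)              # ~-63.4 mV, close to -63.569 mV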
Receiving an input spike in a state in which the membrane potential of the neuron is above this minimum firing potential causes the neuron to fire, producing an output spike. This is why this minimum firing potential is key to understanding how the input spike train is processed.
In order to explain how the output neuron behaves, it is of great interest to calculate the time during which the neuron is able to fire. From now on, this time frame will be called firing window. There are two simple cases for which the calculation of this firing window is done in different ways:
* When an input spike arrives, the induced synaptic current ensures that, at some point, a maximum membrane potential (dV/dt = 0, V ≠ V_rest) is reached that is above the minimum firing potential. Although the membrane potential may be below the minimum firing potential at the time the input spike arrives, the induced synaptic current is sufficient to cause the neuron to fire upon the arrival of another input spike even before the minimum firing potential is reached. Therefore, in this case, the firing window is calculated as the time difference between the arrival of the input spike and the reaching of the associated maximum potential.
* Another firing window is calculated from the time difference between the reaching of the maximum potential and, after that, the reaching of the minimum firing potential.
The spike train used in Figure <ref> does not consider complex cases in order to facilitate the reader's understanding, since these are not within the scope of this work. In the lower graph, firing windows are shown in orange. The ISI of the input spikes is also shown in magenta. While red crosses indicate that an output spike was fired after receiving an input spike, magenta crosses indicate that no output spike was fired. In this way, it should be noted that output spikes were fired only when the ISI of the input spikes was lower than the firing window. When output spikes were fired, the firing window was modified.
Thus, this example shows how the output neuron is not able to fire when an input spike train is provided with a firing rate of 1 Hz or 2 Hz, but it does when the firing rate is 10 Hz, since the value of the ISI becomes lower than the current firing window. As explained in Section <ref>, higher firing rates are associated with objects more closely located to the ultrasonic sensor, which are more likely to be considered obstacles.
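The behavior of this example can be reproduced with a few lines of code (an illustrative Euler integration of the LIF model with the parameters of Table <ref> and default PyNN values elsewhere; this is a sketch, not SpiNNaker's exact solver, so spike times may differ slightly):

import numpy as np

DT, TAU_M, TAU_SYN, CM = 1.0, 100.0, 5.0, 1.0   # ms, ms, ms, nF
V_REST = V_RESET = -65.0                        # mV (PyNN defaults)
V_THRESH, W_EFF = -59.5, 0.9063                 # mV, nA per input spike

def simulate(input_spikes_ms, t_max_ms):
    steps_in = {int(t / DT) for t in input_spikes_ms}
    v, i_syn, out = V_REST, 0.0, []
    for step in range(int(t_max_ms / DT)):
        if step in steps_in:
            i_syn += W_EFF                             # synaptic impulse
        v += DT * ((V_REST - v) / TAU_M + i_syn / CM)  # membrane update
        i_syn *= np.exp(-DT / TAU_SYN)                 # current decay
        if v >= V_THRESH:                              # fire (tau_refrac = 0)
            out.append(step * DT)
            v = V_RESET
    return out

# Input spike train of the example: 1 Hz (0-2 s), 2 Hz (2-4 s), 10 Hz (4-5 s)
spikes = [*range(0, 2000, 1000), *range(2000, 4000, 500),
          *range(4000, 5000, 100)]
print(simulate(spikes, 5000))  # output spikes appear only in the 10 Hz segment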
§ EXPERIMENTATION AND RESULTS
Different experiments were carried out to test the performance of the system. These experiments can be classified into different types, depending on what aspect of the system was intended to be tested. In this section, some of the most relevant are explained in depth and their results are presented to verify and highlight the validity of the system for real-time obstacle detection. The list of experiments presented in this section is as follows:
* Two tests to confirm the value of the threshold distance.
* One test to verify the response of the output neuron to increasing and decreasing distances.
* Two tests to prove the real-time capability of the system.
* One test to prove the correct functioning of the complete system in real environments.
All these experiments start with a default ToF of 5883 μs, which corresponds to a distance of approximately 100 cm between the sensor and the object, meaning that when this value appears in the graphs the computer had not yet received any measurement.
Note that the results of these experiments show unexpected temporal gaps between the spikes of the output neuron. The current state of the membrane potential plays a key role here, in line with what is explained in Section <ref>. Although there is a straightforward relationship between the firing rate of the input spike train and the firing rate of the output neuron, there are small differences between the rates (i.e., these gaps), which are produced because the membrane potential of the output neuron varies continuously in response to input spikes.
§.§ Threshold distance
As previously explained in this paper, the threshold distance depends on the parameters used for the output neuron. This experiment aimed to verify that the threshold distance is within the desired range (between 30 cm and 50 cm, in this case) by using artificial measurements generated by the Romeo BLE board that simulate the ToF values obtained thanks to the ultrasonic sensor, and which are related to the distance between the sensor and the object. Multiple tests were performed for multiple distances between 10 cm and 50 cm.
In particular, the threshold distance could be approximated by finding that, for a certain distance, output spikes were still fired, but for the next further distance this was no longer the case. Specifically, these distances were 39 cm and 39.5 cm, respectively. This guarantees that the threshold distance is between these two values, so it could be approximated to 39 cm.
Figure <ref> shows the output neuron response for both cases. The graph at the top shows that, for a distance of 39 cm, the neuron was still firing spikes. However, the ISI of these output spikes was generally high because it took longer to reach sufficient excitation to cause each spike. Increasing the distance to the object decreases the frequency of the input spike train, making the overlap between the repolarization and depolarization phases slightly smaller, so there is a point where it is no longer enough to cause the output neuron to fire. This is precisely what happens between the distances of 39 cm and 39.5 cm, as can be seen for the latter case in the graph at the bottom of Fig. <ref>.
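This empirical threshold can be cross-checked against Equation (<ref>) with a short computation (assuming a speed of sound of 340 m/s, which is consistent with the 5883 μs ↔ 100 cm correspondence used above; with that value, the ratio x/5883 simply equals the distance expressed as a fraction of 100 cm):

def input_rate_hz(distance_cm, speed_m_s=340.0):
    # Input firing rate implied by Equation (<ref>) for a given distance
    tof_us = 2 * (distance_cm / 100.0) / speed_m_s * 1e6  # round-trip ToF
    isi_s = (min(tof_us, 5883.0) / 5883.0) ** 2 + 0.001
    return 1.0 / isi_s

print(input_rate_hz(39.0))   # ~6.5 Hz: still enough to make the neuron fire
print(input_rate_hz(39.5))   # ~6.4 Hz: no longer enough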
§.§ Increasing and decreasing distances
In this experiment, different measurements were also artificially sent from the Romeo BLE board. However, these ToF values increased and decreased over time, which was directly related to increasing and decreasing the distance between the sensor and the obstacle. Initially, there is a part where the values increased and decreased linearly. Then the measurements changed abruptly, with larger or smaller jumps in the values. This last part will be the focus of the next experiment.
Figure <ref> shows the results of this experiment. As can be observed, for low ToF values (from 60 μs) the firing rate of the output neuron was high. As this distance increased (up to 3000 μs), the firing rate decreased. Another of the most interesting aspects of this experiment is that it can be clearly observed how, upon reaching a minimum firing rate associated with the threshold distance, this firing rate became insufficient to make the neuron fire, as explained in the first experiment. This is why a region appears in which no spikes were fired and the membrane potential of the neuron started to decrease, as the distance was increased above the threshold distance. After that, the reverse process occurs, with the distance decreasing and the firing rate increasing sufficiently for the repolarization/depolarization overlap to make the neuron start firing again.
§.§ Real-time capability
The purpose of this experiment is to ensure that the system is able to react quickly enough to objects that appear spontaneously in front of the robot. A study of the real-time capability of the system involves not only checking that the reaction time is low enough not to affect the behavior of the system, but also checking that the reaction time is deterministic, i.e., bounded within a range.
Figure <ref> shows the results for two tests of this experiment. The two graphs at the top show the results for a test in which the object appeared at a fixed distance from the ultrasonic sensor, while the two graphs at the bottom show the results for a test in which different distances were tested.
In the first case, the distance at which the object appeared was below the threshold distance, so whenever the object appeared the system fired output spikes, meaning that the object was considered an obstacle. Because the appearing distance (about 25 cm) does not correspond to the closest distance at which the object could be found, the ISIs calculated for the generation of the input spike train, which were around 60 ms, were not the lowest, and thus the firing rate was not particularly high. In this way, the ISIs of the output spikes range from approximately 60 ms (the minimum, since it is the ISI of the input spike train) to approximately 130 ms.
In the graphs at the bottom, it can be seen in more detail how lower firing rates were obtained for longer distances. This is directly related to a smaller repolarization/depolarization overlap in the membrane potential produced in response to input spikes. In addition, there were several distance levels at which no output spikes were fired, meaning that those distances were above the threshold distance.
§.§ Real environment
This experiment was intended to verify that the complete system worked as expected in a real environment. The robotic platform was positioned in the center of an area surrounded by rectangular cardboard boxes.
The complete system is defined so that the robotic platform moves forward as long as no obstacles are detected and, if obstacles are detected, it turns to the right until obstacles are no longer detected.
The results obtained during the execution of one of the tests of this experiment are shown in Figure <ref>. In this test, it can be seen how the system started by measuring an obstacle at a distance of 100 cm or more. Over time, as the robotic platform moved forward, this distance was reduced, which can be observed as a linear decrease in distance. When this distance dropped below the threshold distance, the system detected the obstacle (output spikes appeared) and sent the command to the robotic platform to start turning to the right until the obstacle was no longer detected. At that point, the output neuron stopped firing spikes and the distance measured by the ultrasonic sensor increased, with this new measurement corresponding to the distance between the ultrasonic sensor and the closest object it could find in the new direction it was pointing.
§ DISCUSSION AND FUTURE WORK
The results obtained and shown in Section <ref> seem to be good enough to validate the implemented system. The study of the firing windows seems to be useful to understand to some extent how and why the output neuron responds to certain input stimuli and how it relates to the ISI. In this way, the implemented SNN, whose essence lies in the functioning of the LIF neuron, works as a high-pass spiking filter, where the SNN does not respond to low frequencies of the input spike train. This is really interesting since it could be the basis for the study of new applications based on spiking filters using this approach.
To improve the results obtained in this work, it could also be interesting to study the implementation of the proposed SNN in other neuromorphic platforms, with the purpose of minimizing the ISI of the input spike train (adjusting at the same time the presented formula in Section <ref> to these values), which would allow to obtain a better performance of the system for the obstacle detection task.
Since the robotic platform is an independent block within the complete system, any modification can be made to it. This means that it would be possible to increase the number of range sensors of the same type used for the obstacle detection task, as long as an algorithm is implemented so that the measurement sent to the central computer is unique, or the network architecture is replicated for each of the sensors. On the other hand, it is also possible to use different types of range sensors. This is interesting because one of the possible modifications to the system could be to use infrared sensors to support the ultrasonic sensors, whereby it would be possible to try to smooth out or eliminate the problems inherent in ultrasonic sensors, especially with regard to the sound reflection angles that prevent correct measurements in certain cases. Moreover, this second sensor should help to increase the reliability of the obtained measurements.
Section <ref> discusses some fundamental details needed to understand why the parameters used for the output neuron were chosen. There, it is explained that decreasing the threshold potential of the neuron is important to increase the number of spikes produced in the output response of the system, which is directly related to its real-time performance. This implies an increase in the power consumption of the network, even though one generally tries to exploit the low power consumption of SNN as much as possible.
However, sometimes the priority is to increase the performance of the entire system, which requires an increase in power consumption, so a decision would have to be made in the future (when refining such an application) on how much to increase performance without increasing power consumption too much, finding a balance. This debate has been present throughout the history of computers.
§ CONCLUSIONS
This paper discusses the interest of obstacle detection from the perspectives of robotics and neuroscience, and proposes the implementation of an SNN-based system to perform obstacle detection, exploiting the advantages of these bio-inspired architectures. An in-depth explanation of the functioning of the implemented SNN is given, detailing how the input spike train is generated and how it is processed to perform obstacle detection in the output neuron. It also contains an in-depth analysis of how the ISI of this input spike train is related to the output firing rate of the system.
A series of experiments were carried out with different aims: bounding the threshold distance, characterizing the response to increasing and decreasing distances, characterizing the response to sudden distance jumps and, finally, verifying the functioning of the system in a real environment. The results obtained for each of the experiments validate the implemented system. In this way, it is shown how the output firing rate is related to the distance measured by the ultrasonic sensor, increasing when the distance decreases and vice versa.
In order to understand the functioning of the implemented system in detail, an in-depth study of how information is encoded and processed was carried out. We have discussed how this analysis could be very useful for the development of applications based on spiking filters.
Acknowledgements This research was partially supported by the Spanish project MINDROB (PID2019-105556GB-C33) funded by MCIN/AEI/10.13039/501100011033. D. C.-M. was supported by a “Formación de Profesor Universitario” Scholarship from the Spanish Ministry of Education, Culture and Sport.
Conflict of interest
The authors have no conflicts of interest to declare that are relevant to the content of this article.
Data availability
Not applicable.
Ethics approval
Not applicable.
Consent to participate
Not applicable.
Consent for publication
Not applicable.
|
http://arxiv.org/abs/2409.02804v1 | 20240904152448 | Primordial regular black holes as all the dark matter (I): tr-symmetric metrics | [
"Marco Calzà",
"Davide Pedrotti",
"Sunny Vagnozzi"
] | gr-qc | [
"gr-qc",
"astro-ph.CO",
"hep-ph",
"hep-th"
] |
[email protected]
Department of Physics, University of Trento, Via Sommarive 14, 38123 Povo (TN), Italy
Trento Institute for Fundamental Physics and Applications (TIFPA)-INFN, Via Sommarive 14, 38123 Povo (TN), Italy
M.C. and D.P. contributed equally to this work
[email protected]
Department of Physics, University of Trento, Via Sommarive 14, 38123 Povo (TN), Italy
Trento Institute for Fundamental Physics and Applications (TIFPA)-INFN, Via Sommarive 14, 38123 Povo (TN), Italy
[email protected]
Department of Physics, University of Trento, Via Sommarive 14, 38123 Povo (TN), Italy
Trento Institute for Fundamental Physics and Applications (TIFPA)-INFN, Via Sommarive 14, 38123 Povo (TN), Italy
§ ABSTRACT
Primordial black holes (PBHs) are usually assumed to be described by the Schwarzschild or Kerr metrics, which however feature unwelcome singularities. We study the possibility that PBHs are non-singular objects, considering four phenomenological, regular tr-symmetric space-times, featuring either de Sitter or Minkowski cores. We characterize the evaporation of these PBHs and constrain their abundance from γ-ray observations. For three of the metrics, including the well-known Bardeen and Hayward ones, we show that constraints on f_pbh, the fraction of dark matter (DM) in the form of PBHs, weaken with respect to the Schwarzschild limits, because of modifications to the PBH temperature and greybody factors. This moves the lower edge of the asteroid mass window down by up to an order of magnitude, leading to a much larger region of parameter space where PBHs can make up all the DM. A companion paper is instead devoted to non-tr-symmetric metrics, including loop quantum gravity-inspired ones. Our work provides a proof-of-principle for the interface between the DM and singularity problems being a promising arena with a rich phenomenology.
Primordial regular black holes as all the dark matter (I): tr-symmetric metrics
Marco Calzà, Davide Pedrotti, Sunny Vagnozzi
September 9, 2024
================================================================================
§ INTRODUCTION
The Standard Model of particle physics (SM) and General Relativity (GR) have proven to be extremely successful at describing a huge range of terrestrial, astrophysical, and cosmological observations <cit.>. However, their successes are limited by a number of shortcomings, potentially (especially in the case of SM) pointing towards the need for new physics which may better describe the matter and gravity sectors. On the more observational/phenomenological side, the SM lacks a candidate for the dark matter (DM) which accounts for ≃ 25% of the energy budget of the Universe <cit.>. On the more theoretical side, continuous gravitational collapse in GR leads to the pathological appearance of curvature singularities <cit.>. The nature of DM and the singularity problem are arguably two among the most important open questions in theoretical physics.
The solution to the former problem could reside in the physics of some of the most peculiar objects in the Universe: black holes (BHs). It has long been realized that primordial BHs (PBHs), hypothetical relics from the primordial Universe formed from the collapse of large density perturbations upon horizon re-entry, are indeed excellent DM candidates <cit.> (see e.g. Refs. <cit.> for reviews): in fact, PBHs are the only viable DM candidate which does not invoke new particles surviving to the present day. Once believed to merely be objects of mathematical speculation, observational effects associated to BHs are now routinely detected <cit.>, turning these objects into extraordinary probes of fundamental physics <cit.>. As a result, the possibility of PBHs accounting for the entire DM budget is severely constrained by a wide range of considerations and constraints. The only (not entirely undebated) remaining open window of parameter space where PBHs could make up all the DM is the so-called “asteroid mass window”, roughly for PBH masses 10^14 kg≲ M_pbh≲ 10^20 kg <cit.>: lighter PBHs would have evaporated fast enough to either have disappeared by now or overproduced γ-rays in the MeV range, whereas heavier PBHs would have been detected through the microlensing of background stars.
Almost all works on PBHs assume that these are Schwarzschild or Kerr BHs <cit.>. All constraints and considerations on DM potentially being in the form of PBHs are therefore subject to this underlying assumption. The existence of the asteroid mass window, and the extension thereof, is of course no exception. The assumption in question is not at all unreasonable at least from the phenomenological point of view, given that there are at present no signs of tension between astrophysical observations and the Kerr-Newman family of metrics, and more generally the no-hair theorem. Nevertheless, from the theoretical point of view such an assumption might stir some unease, given the appearance of singularities in the Schwarzschild and Kerr metrics. The above considerations naturally lead to the following question: “what if PBHs are non-singular”? It is our goal in the present work to systematically address this question, which naturally merges the DM and singularity problems.
We entertain the possibility that PBHs are “regular”, i.e. free of curvature singularities <cit.>, and therefore that DM may be in the form of primordial regular BHs (PRBHs). For concreteness, we consider so-called tr (time-radius)-symmetric metrics, for which the product of the coefficients for the dt^2 and dr^2 components of the line element in four-dimensional Boyer–Lindquist coordinates is equal to -1, and the function which multiplies the angular part of the line element is r^2, i.e. r is the areal radius. More specifically, we focus on the following four regular, static spherically symmetric space-times, all of which are characterized by an additional regularizing parameter ℓ and recover the Schwarzschild space-time in the ℓ→ 0 limit: Bardeen BHs <cit.>, Hayward BHs <cit.>, Ghosh-Culetu-Simpson-Visser BHs <cit.>, and a Hayward-like BH which, to the best of our knowledge, is introduced for the first time in this work. These four space-times present a rich phenomenology, including both de Sitter and Minkowski cores, and temperatures which can both decrease and increase with increasing regularizing parameter ℓ. We focus our attention on observational constraints from PRBH evaporation, which set the lower limit of the asteroid mass window, discussing in detail how the evaporation process is modified with respect to that for Schwarzschild PBHs. We show that, as a result, the phenomenology of PRBHs can be quite different from that of Schwarzschild PBHs, with a larger range of masses where PRBHs could make up the entire DM component, opening up the asteroid mass window by up to an extra decade in mass. Keeping in mind that the metrics in question are phenomenological in nature, our results demonstrate that a common solution to the DM and singularity problems in the form of primordial regular BHs is one which is worth taking seriously, and warrants further investigation, and more generally the interface of these two problems provides a promising arena. We stress that our work should not be intended as a comprehensive analysis of primordial regular BHs, but rather as a pilot study, pointing towards a direction which has thus far received very little attention and indicating promising directions for further work.
The rest of this paper is then organized as follows. In Sec. <ref> we briefly introduce the regular space-times studied in the rest of the work. Various aspects of our methodology are discussed in Sec. <ref>, with Sec. <ref> devoted to the calculation of the so-called greybody factors, Sec. <ref> to the computation of photon spectra resulting from Hawking evaporation, and Sec. <ref> to the comparison against observations. The resulting limits on the fraction of DM which may be in the form of primordial regular BHs are then critically discussed in Sec. <ref>. Finally, in Sec. <ref> we draw concluding remarks. A number of more technical aspects concerning the greybody factors computation are discussed in Appendix <ref>. Unless otherwise specified, we adopt units where G=c=1. In closing, we note that a related study is being presented in a companion paper <cit.>: this focuses on non-tr-symmetric metrics, which also include loop quantum gravity-inspired metrics, but at the same time complicate the study of the evaporation process. We recommend that the interested reader go through the present work prior to consulting our companion paper <cit.>.
§ REGULAR BLACK HOLES
It is well known, thanks to the Penrose-Hawking singularity theorems, that continuous gravitational collapse in GR sourced by matter contents satisfying reasonable energy conditions leads to the appearance of singularities <cit.>. These are regions of space-time where curvature invariants, i.e. sets of independent scalars constructed from the Riemann tensor and the metric, diverge (with the archetypal example being the central singularity in the Kerr-Newman family of metrics). These singularities are arguably unsatisfactory as they lead to a potential breakdown in predictivity. For this reason, they are oftentimes regarded as a manifestation of our lack of knowledge of (new) physics in the high-energy/high-curvature regime. A widespread belief is that quantum gravity effects on these scales would ultimately cure the singularity problem (and potentially lead to observable effects), although this is more of a hope supported only by a few first-principles studies <cit.>.
Even in the absence of a widely agreed upon theory of quantum gravity, one can still hope to make progress in understanding and taming singularities, while also potentially gaining intuition about the possible features of such a theory, through a more phenomenological approach. Under the assumption that a metric description holds valid, one can introduce metrics which are free of singularities in the entire space-time, and describe so-called regular BHs (RBHs) <cit.>. It is often (albeit not necessarily always) the case that RBH metrics are controlled by an extra parameter, which in what follows we shall refer to as regularizing parameter (and denote by ℓ), typically recovering the Schwarzschild metric (for non-rotating RBHs) in the limit ℓ→ 0. Several RBH metrics have been studied over the past decades, see e.g. Refs. <cit.> for an inevitably incomplete selection of examples, as well as Refs. <cit.> for recent reviews on the subject. [Another interesting possibility is that of gravastars, which are not RBHs in a strict sense <cit.>.] While most of these metrics have been introduced on purely phenomenological grounds, it is known that possible sources for several RBH metrics lie in theories of non-linear electrodynamics <cit.>.
As alluded to earlier, our interest in this work is to explore the possibility that primordial RBHs may play the role of DM. As a proof-of-principle in this sense we will establish constraints on f_pbh, the fraction of DM in the form of PRBHs, focusing on the asteroid mass window, which we will show can either widen or shrink depending on the features of the RBH metric. We are aware of only four works in this direction <cit.>. Ref. <cit.> studied the thermodynamics of primordial regular BHs, focusing however on the case where they do not evaporate, and therefore did not study constraints on f_pbh. Ref. <cit.> studied the evaporation of a loop quantum gravity-inspired BH, and weakened constraints on f_pbh were reported in a later proceeding (which however does not appear to be widely known). Finally, Ref. <cit.> studied signatures of primordial BHs with magnetic charge, which could be (as is often, but not necessarily, the case) regular. The aim of this pilot study and our companion paper <cit.> is instead to provide a more comprehensive investigation of primordial regular BHs, considering a more diverse set of metrics and investigating constraints on f_pbh in detail.
In our work, we shall consider four different non-rotating RBH metrics, as discussed in more detail in the following subsections. The static, spherically symmetric space-times we investigate are a subset of the Petrov type-D class of metrics. In four-dimensional Boyer-Lindquist coordinates, their line elements take the following general form:
ds^2 = -f(r)dt^2 + g(r)^-1dr^2 + h(r)dΩ^2 ,
where dΩ^2=dθ^2 +sin^2(θ) dϕ^2 is the metric on the 2-sphere. We also require our space-times to be asymptotically flat, which amounts to the following requirements:
f(r) → 1 , g(r) → 1 , h(r) → r^2 for r→∞ .
In addition, as stated earlier, we require our space-times to be tr-symmetric (the non-tr-symmetric case is covered in a companion paper <cit.>), which imposes the following additional conditions:
f(r) = g(r) , h(r) = r^2 ,
implying that the coordinate r is effectively the areal radius. With the conditions given by Eqs. (<ref>,<ref>) imposed upon Eq. (<ref>), our most general line element therefore takes the following form:
ds^2 = -f(r) dt^2 + f(r)^-1dr^2 + r^2 ( dθ^2 +sin^2(θ) dϕ^2 ) .
In what follows, we refer to the function f(r) as being the “metric function”. The four different RBH solutions we consider, which we will discuss very shortly in Sections <ref>– <ref>, are characterized by different functional forms of f(r).
An important parameter characterizing the behaviour of BHs is their temperature T. This is particularly crucial when evaluating evaporation constraints on PBHs, given that the temperature controls the strength of the emitted radiation, which in turn can be directly constrained by various observations. We treat the temperature of the RBHs as being the usual Gibbons-Hawking one, i.e. the one evaluated by Wick rotating the metric in the standard way and imposing regularity in the Euclidean period <cit.>. The cyclic imaginary time → temperature identification is legitimate if one can formally identify the Euclidean action e^-S with the Boltzmann factor e^-β H in the partition function, as usually done in finite temperature quantum field theory: in turn, this can be done if one is assuming the standard Boltzmann-Gibbs distribution, but may not be consistent if other entropies are assumed (see e.g. the recent discussion in Ref. <cit.>). Since, as we will reiterate later, the RBHs we study are introduced on phenomenological grounds and we remain agnostic as to their theoretical origin (which may in principle be rooted within alternative entropic frameworks), in what follows we assume the Boltzmann-Gibbs distribution, so that the temperature of the RBHs in question is the standard Gibbons-Hawking one, and is given by the following:
T=κ/2π=f'(r)/4π|_r_H ,
where the prime indicates a derivative with respect to r, and κ is the BH surface gravity, given by the following:
κ=f'(r)/2|_r_H .
In Eqs. (<ref>,<ref>), r_H is the horizon radius, which is the solution to the following equation:
g(r_H)=f(r_H)=0 ,
with the first equality following from the choice of focusing on tr-symmetric space-times. In the case of Schwarzschild BHs, where the metric function is f(r)=1-2M/r, one recovers the well-known expressions r_H=2M and T_Sch=1/8π M. However, in more general space-times the horizon radius in Eq. (<ref>) is not guaranteed to have a closed form expression, and the same therefore holds for the temperature in Eq. (<ref>). In Fig. <ref> we show the evolution of the temperatures (normalized to the temperature of Schwarzschild BHs, T_Sch=1/8π M) of the four regular space-times we will introduce shortly, as a function of the regularizing parameter ℓ (normalized by the event horizon radius r_H). As the Figure clearly shows, for three of these space-times (Bardeen, Hayward, and Ghosh-Culetu-Simpson-Visser) the temperature is a monotonically decreasing function of ℓ, whereas for the fourth (Hayward-like), after a small initial dip, the temperature grows monotonically with growing ℓ. As we will see, these different behaviours turn out to have important phenomenological consequences.
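Since closed-form expressions for r_H, and hence T, are not available for all the metrics considered below, the curves of Fig. <ref> are obtained numerically. As an illustration of the procedure, the following minimal Python sketch (our own, with function names of our choosing) computes T/T_Sch for the Bardeen metric function introduced in the next subsection; the root-finding bracket exploits the fact that, for this metric, f attains its minimum at r=√(2)ℓ, so that the largest root of f(r)=0 is bracketed between this point and large r:

import numpy as np
from scipy.optimize import brentq

M = 1.0  # BH mass in geometric units (G = c = 1)

def f_bardeen(r, ell):
    # Bardeen metric function f_B(r) = 1 - 2 M r^2 / (r^2 + ell^2)^(3/2)
    return 1.0 - 2.0 * M * r**2 / (r**2 + ell**2)**1.5

def horizon_radius(f, ell):
    # Largest root of f(r) = 0; for subextremal ell, f < 0 at r = sqrt(2) ell
    lo = max(1e-6, np.sqrt(2.0) * ell)
    return brentq(lambda r: f(r, ell), lo, 10.0 * M)

def temperature(f, ell, dr=1e-7):
    # Gibbons-Hawking temperature T = f'(r_H) / (4 pi), central differences
    rH = horizon_radius(f, ell)
    return (f(rH + dr, ell) - f(rH - dr, ell)) / (2.0 * dr) / (4.0 * np.pi)

T_sch = 1.0 / (8.0 * np.pi * M)  # Schwarzschild reference temperature
for ell in [0.0, 0.2, 0.4, 0.6]:
    print(f"ell = {ell:.1f}:  T/T_Sch = {temperature(f_bardeen, ell) / T_sch:.4f}")

The same routine carries over to the other three metric functions, upon replacing f_bardeen and choosing a suitable bracketing point.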
A final caveat is in order before discussing the RBH metrics we consider. The latter are all regular in the sense of having finite curvature invariants R ≡ g^μνR_μν, R_μνR^μν, and K≡ R_μνρσR^μνρσ. However, a more stringent criterion for regularity is that of geodesic completeness, which does not necessarily imply finiteness of curvature invariants and vice versa. A number of “popular” RBHs have indeed been shown to have finite curvature invariants but to be geodesically incomplete <cit.>. This includes the well-known Bardeen and Hayward RBHs, which are among the ones we shall consider here. However, given the significant interest in these metrics, the fact that they are widely taken as prototypes for RBHs, and our phenomenological goal of going beyond Schwarzschild PBHs, we will take these space-times into consideration, while cautioning the reader about the above issues, and therefore that these metrics should be considered nothing more than phenomenological toy models at this stage. Note, in addition, that the stability of RBHs featuring inner horizons is currently a matter of debate in the literature <cit.>.
§.§ Bardeen black hole
The Bardeen BH is easily one of the best known RBHs, and one of the first ones to ever have been proposed <cit.>. It is characterized by the following metric function: [We note that in all the metrics considered, the parameter M appearing in the metric function can always be unambiguously identified with the BH mass (either the Komar, ADM, Misner-Sharp-Hernandez, or Brown-York mass). This is important as the later constraints on f_pbh as a function of M_pbh will identify the latter with M.]
f_B(r)=1-2Mr^2/(r^2+ℓ^2)^3/2 ,
where, in terms of the BH mass M, the regularizing parameter satisfies ℓ≤√(16/27) M ∼ 0.77 M in order for the metric to describe a BH and not a horizonless object. Note that the Schwarzschild metric function is recovered in the ℓ→ 0 limit. A perhaps physically more motivated choice is to express quantities in units of the horizon radius r_H, defined as the largest root of the equation f(r_H)=0, in which case the regularizing parameter is subject to the constraint ℓ≲ 0.70 r_H. To obtain this limit we have computed the solution to f(r_H)=0 at fixed M=1, extracting r_H(ℓ), and then analyzed for which real values of the parameter n the equation ℓ=nr_H(ℓ) admits solutions.
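The quoted bound ℓ≲ 0.70 r_H can be recovered numerically by continuing the sketch introduced above, scanning the regularizing parameter up to just below its extremal value and monitoring the ratio ℓ/r_H(ℓ):

# Continuing the sketch above: scan ell up to just below sqrt(16/27) M
ells = np.linspace(0.0, np.sqrt(16.0 / 27.0) * M - 1e-4, 200)
ratios = [ell / horizon_radius(f_bardeen, ell) for ell in ells]
print(f"max ell/r_H = {max(ratios):.3f}")  # approaches 1/sqrt(2) near extremality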
It is worth noting that the Bardeen BH possesses a de Sitter (dS) core which replaces the central singularity of the Schwarzschild BH. This is evident by noting that, in the limit r → 0, the metric function behaves as f_B(r) ≈ 1-(2M/ℓ^3) r^2, exactly as expected for an asymptotically dS space-time. Although originally introduced on phenomenological grounds, it is now known that the Bardeen RBH can emerge from a magnetic monopole source <cit.>, potentially within the context of a specific non-linear electrodynamics theory <cit.>. Another possible origin for the Bardeen RBH is provided by quantum corrections to the uncertainty principle <cit.>. Irrespective of its origin, and consistently with the approach pursued for the other space-times, we consider this solution as a model-agnostic phenomenological toy model.
§.§ Hayward black hole
Another widely known RBH space-time is the Hayward RBH <cit.>, characterized by the following metric function:
f_H(r)=1-2Mr^2/r^3+2Mℓ^2 .
If expressed in terms of the BH mass M, the regularizing parameter for the Hayward BH is subject to the same limit as that of the Bardeen BH, i.e. ℓ≤√(16/27) M. On the other hand, if expressed in terms of the more physically motivated horizon radius, the limit is instead ℓ≲ 0.57 r_H. We note that the Schwarzschild metric function is recovered in the ℓ→ 0 limit.
Just as the Bardeen RBH possesses a dS core, so does the Hayward RBH. Indeed, introducing a dS core characterized by a (positive) cosmological constant Λ= 3/ℓ^2 in order to prevent the central singularity was precisely the original justification for the Hayward BH which, just like its Bardeen counterpart, was introduced on purely phenomenological grounds. Nevertheless, potential theoretical origins for the Hayward BH have been investigated, and range from corrections to the equation of state of matter at high density <cit.>, finite density and finite curvature proposals <cit.>, theories of non-linear electrodynamics <cit.>, and more generally as the result of corrections due to quantum gravity <cit.>. Just as with the Bardeen RBH, we shall here treat the Hayward RBH as a model-agnostic phenomenological toy model for a singularity-free space-time.
§.§ Ghosh-Culetu-Simpson-Visser black hole
The regular space-times considered so far feature dS cores, which in itself is a very common feature of several RBH metrics. Nevertheless, another interesting phenomenological possibility consists in considering “hollow” RBHs in which the central singularity is replaced by an asymptotically Minkowski core, where the associated energy density and pressure asymptote to zero. This is quite unlike the case of the dS core where the energy density asymptotes to a finite value associated to a positive cosmological constant, and the pressure asymptotes to an equal but opposite value. Possible theoretical/mathematical motivations for considering RBHs with Minkowski cores include the fact that the vanishing energy density can significantly simplify the physics in the deep core, whereas the otherwise messy solutions to polynomial equations (which often cannot be written down in closed form) can be traded for arguably more elegant special functions, resulting in the space-time being more tractable. Our physical motivation in considering this class of BHs is instead to broaden the range of physical properties and phenomenological implications of PRBHs, going beyond the dS core RBHs studied thus far.
With this in mind, we consider a RBH featuring a Minkowski core, independently studied by Ghosh <cit.>, Culetu <cit.>, as well as Simpson and Visser <cit.>. Although such a RBH does not have any particular name associated to it in the literature, here we shall conform to the name introduced in Ref. <cit.>, referring to it as GCSV BH (from the initials of the four authors above). This space-time is characterized by the following metric function:
f_GCSV(r)= 1-2M/rexp ( -ℓ/r ) .
The horizon radius r_H, for which a closed form expression is not available in the Bardeen and Hayward cases, is given by:
r_H=-ℓ/W ( -ℓ/2 M ) ,
where W denotes the Lambert function. Considering the principal branch W_0, a real and positive horizon radius is present for:
W_0 ( -ℓ/2 M ) ≤ 0 ⟺ 0 ≤ℓ < 2M/e ,
or, alternatively, 0 ≤ℓ < r_H. While the GCSV BH was originally introduced purely on phenomenological/mathematical grounds, it was shown in Refs. <cit.> that such a space-time can emerge within the context of GR coupled to a specific non-linear electrodynamics source. In this case, denoting by g the non-linear electrodynamics coupling constant/charge, the regularizing parameter ℓ is given by ℓ=g^2/2M, with M the BH mass. Nevertheless, as with all the other RBHs considered, here we shall treat the GCSV RBH as a toy model for a regular space-time possessing a Minkowski core.
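Since the Lambert function is readily available in standard scientific libraries, Eq. (<ref>) can be evaluated directly. The short Python sketch below (our own illustration) computes r_H for a few values of ℓ with M=1, and verifies a posteriori that the metric function indeed vanishes there:

import numpy as np
from scipy.special import lambertw

M = 1.0

def r_H_gcsv(ell):
    # Closed-form horizon radius r_H = -ell / W_0(-ell/2M),
    # real and positive for 0 < ell < 2M/e
    assert 0.0 < ell < 2.0 * M / np.e
    return float(np.real(-ell / lambertw(-ell / (2.0 * M), k=0)))

for ell in [0.1, 0.3, 0.5, 0.7]:
    rH = r_H_gcsv(ell)
    residual = 1.0 - 2.0 * M / rH * np.exp(-ell / rH)  # f_GCSV(r_H), should vanish
    print(f"ell = {ell:.1f}:  r_H = {rH:.4f},  f(r_H) = {residual:+.1e}")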
§.§ Hayward-like black hole
For all three of the RBHs discussed earlier, the temperature decreases with increasing regularizing parameter, while converging to the Schwarzschild value T_Sch=1/8π M as ℓ→ 0, see Fig. <ref>. It is therefore not unreasonable to expect that evaporation constraints on f_pbh may be weakened if the DM consists of these PRBHs (this will of course be checked explicitly later). We have verified that such a behaviour of temperature versus regularizing parameter holds for most PRBHs of phenomenological interest considered in the literature. [This is actually not unrelated to the fact that most RBHs, and more generally hairy BH solutions, feature shadows whose size decreases relative to the size of the Schwarzschild BH shadow, i.e. 3√(3)M, see Ref. <cit.> for further more detailed discussions (see also Refs. <cit.>). In fact, the equations governing BH temperature and shadow size bear some resemblance, so an increase/decrease in one is expected to lead to an increase/decrease in the other. Ref. <cit.> also presents a few space-times whose shadow radii increase with increasing hair parameter, and for these we would expect the temperatures to increase as well. However, for a few of these space-times we explicitly tested (which, we stress, are phenomenological in nature), the Teukolsky equation used to calculate the greybody factors turned out to be numerically very complicated, which is why we opted for proposing the equally phenomenological Hayward-like BH discussed here.] For phenomenological reasons, especially in light of our expectation (verified a posteriori) that such a behaviour would lead to weaker bounds on f_pbh and thereby the asteroid window further opening, it could therefore be desirable to also contemplate a case where the BH temperature actually increases with increasing regularizing parameter.
With the previous considerations in mind, here we introduce a new RBH metric which somewhat resembles the Hayward BH. This Hayward-like RBH is characterized by the following metric function:
f_H-l(r)=1 - 2 M r^2/r^3+ℓ ( 1-ℓ r ) .
To the best of our knowledge, this Hayward-like RBH has never been proposed before. Just as the Hayward BH, it is evident that this space-time possesses a dS core, with an effective cosmological constant given by Λ=6M/ℓ. We explicitly verify the finiteness of the curvature invariants, which in the r → 0 limit take the following very simple forms:
R = g^μνR_μν→ 24 M/ℓ , R_μνR^μν→ 144 M^2/ℓ^2 ,
K = R_μνσρR^μνσρ→ 96 M^2/ℓ^2 .
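These limiting values can be cross-checked symbolically. The sketch below (our own; it relies on the standard expression R = -f'' - 4f'/r + 2(1-f)/r^2 for the Ricci scalar of the ansatz of Eq. (<ref>)) confirms the r → 0 value of R quoted above, and the remaining invariants can be checked analogously:

import sympy as sp

r, M, ell = sp.symbols('r M ell', positive=True)

# Hayward-like metric function of Eq. above
f = 1 - 2*M*r**2 / (r**3 + ell*(1 - ell*r))

# Ricci scalar for ds^2 = -f dt^2 + dr^2/f + r^2 dOmega^2
R = -sp.diff(f, r, 2) - 4*sp.diff(f, r)/r + 2*(1 - f)/r**2

print(sp.limit(sp.simplify(R), r, 0))  # -> 24*M/ell, the finite dS-core value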
We later explicitly verify that the intensity of Hawking radiation of the Hayward-like RBH increases with increasing regularizing parameter, therefore tightening constraints on f_pbh, and further closing the asteroid mass window. This behaviour is similar to that of Kerr BHs, which display an enhancement of the primary photon emission spectrum with increasing spin parameter <cit.>. A similar behaviour has also been observed in the context of the Kazakov-Solodukhin BH <cit.> in Ref. <cit.>, consistently with the expectation laid out in footnote 3 (note, in fact, that the shadow of the Kazakov-Solodukhin BH increases with increasing regularizing parameter, see e.g. Fig. 1 of Ref. <cit.>, Fig. 18 of Ref. <cit.>, and Fig. 9 of Ref. <cit.>). Finally, we have checked that the allowed limit for the regularizing parameter in terms of the horizon radius is 0≤ℓ≤ r_H.
§ METHODOLOGY
§.§ Greybody factors
A set of parameters playing a key role in describing the Hawking radiation spectra emitted from evaporating BHs are the so-called greybody factors (GBFs). These are functions of energy/frequency and angular momentum which govern the deviation of the emitted spectrum from that of a blackbody <cit.>. Although the emitted Hawking radiation at the horizon takes the blackbody form, the potential barrier due to space-time geometry will attenuate the radiation, so that an observer at spatial infinity will measure a spectrum which differs from that of a blackbody by a frequency-dependent function Γ(ω). GBFs can be characterized by setting up a classical scattering problem around the BH potential barrier, with boundary conditions allowing for incoming wave packets from infinity or equivalently, due to the symmetries of the scattering problem, originating from the horizon. The scattering problem is governed by the so-called Teukolsky equation, which is a partial differential equation describing the propagation of perturbations of given spin in the BH background <cit.>.
For the static, spherically and tr-symmetric metrics given by Eq. (<ref>) which we consider, the Teukolsky equation in spherical coordinates is separable. A key role in computing the GBFs is played by the radial Teukolsky equation, which we now report in full generality for the class of metrics in question. Using the Newman-Penrose (NP) formalism <cit.>, the Teukolsky equation governing the evolution of (massless) perturbations of different spin can be condensed into a single master equation <cit.>:
[ - r^2/f∂_t^2 + s ( r^2 f'/f -2 r ) ∂_t ] Υ_s
+ [ (s+1) (r^2 f' + 2 r f) ∂_r ] Υ_s
+ [ 1/sinθ∂_θ (sinθ∂_θ) + 2 i s cotθ/sinθ∂_ϕ + 1/sin^2θ∂^2_ϕ - s - s^2 cot^2θ ] Υ_s
+ [ s r^2 f'' + 4 s r f' + 2 s f ] Υ_s=0 .
Here, Υ_s represents a general perturbation of spin s, defined by the NP scalars relative to the respective perturbation. To not make the notation too heavy, we drop the l and m indices labelling the field mode, so Υ_s is understood to really mean Υ^lm_s. We note that Eq. (<ref>) is separable if one makes the following wave ansatz:
Υ_s= ∑_l,m e^-i ω t e^i m ϕ S^l_s(θ) R_s(r) ,
where ω is the perturbation frequency, l is the angular node number, and m is the azimuthal node number.
The functions S^l_s(θ) contribute to defining the so-called spin-weighted spherical harmonics S^s_l,m(θ, ϕ)=∑ S^l_s(θ) e^imϕ, satisfying the following equation <cit.>:
( 1/sinθ∂_θ(sinθ ∂_θ) + csc^2θ ∂_ϕ^2 + 2 i s cotθ/sinθ∂_ϕ + s - s^2 cot^2θ + λ_l^s ) S_l,m^s=0 ,
where λ_l^s≡ l(l+1)-s(s+1) is the separation constant. For the spin 0 case, these functions reduces to the usual spherical harmonics S_l,m^0=Y_l,m.
Analogously to the Schwarzschild and Kerr BH cases <cit.>, the decoupled radial Teukolsky equation takes the following form <cit.>:
1/Δ^s(Δ^s+1R'_s)'
+(ω^2r^2/f+2iω sr-isω r^2f'/f+s(Δ''-2)-λ_l^s)R_s=0 ,
where Δ(r)≡ r^2f(r) and ' ≡∂_r. We impose purely ingoing boundary conditions, so that the asymptotic solutions of Eq. (<ref>) are given by:
R_s ∼ R^in_s e^-iω r^⋆/r+ R^out_s e^iω r^⋆/r^2s+1 (r→∞)
R_s ∼ R^hor_s Δ^-s e^-i ω r^⋆ (r → r_H) ,
where r^⋆ is the tortoise coordinate defined by:
dr^⋆/dr=1/f(r) .
We note that r^⋆→ r for large values of r, given that the metrics we consider are asymptotically flat.
In general, numerical integration methods are required to compute GBFs, and this holds for our tr-symmetric RBHs as well. In our work, we make use of the so-called shooting method (see Appendix <ref> for further details), which has already been successfully applied to these types of calculations in several earlier works <cit.>.
To begin with, we rewrite Eq. (<ref>) in terms of the rescaled coordinate x, given by the following:
x ≡r-r_H/r_H ,
where r_H is the largest real root of f(r)=0. With this substitution Eq. (<ref>) is rewritten as follows:
x^2(x+1)^3 f R̈_s
+ (s+1) x(x+1) ( 2(x+1)f+(x+1)^2 ḟ ) Ṙ_s
+ V(ω,x)R_s=0 ,
where an overdot denotes ∂_x, and V(ω,x) is given by:
V(ω,x) = ( ω^2 r_H^2 (x+1)^2/f + 2 i s (x+1) ω - i s r_H (x+1)^2 ḟ/fω + s ( 2 f + 4 (x+1) ḟ + (x+1)^2 f̈ -2 ) - l(l+1) + s(s+1) ) x (x+1) .
In order to further simplify the problem, we work in units of horizon radius and therefore set r_H=1, so that r=x+1. In these units, the metric functions of the four RBHs under consideration are given by the following:
f_B(x)=1-(1+ℓ^2)^3/2(x+1)^2/ ( ℓ^2+(x+1)^2 ) ^3/2 ,
f_H(x)=1-(x+1)^2/(1-ℓ^2) ( (x+1)^3 - ℓ^2/ℓ^2-1 ) ,
f_GCSV(x)=1- e^ℓ - ℓ/x+1/x+1 ,
f_H-l(x)=1-(1+ℓ-ℓ^2)(x+1)^2/ ( (x+1)^3 + ℓ - ℓ^2 (x+1) ) ,
for the Bardeen, Hayward, GCSV, and Hayward-like space-times respectively.
Setting purely ingoing boundary conditions in proximity of the horizon, the solutions to Eq. (<ref>) can be expressed in the form of a Taylor expansion as follows <cit.>:
R_s(x)= x^-s- i ω/τ∑_n=0^∞ a_n x^n ,
where i ω / τ is a function of the field spin and the regularizing parameter, and also depends on the space-time in question. We refer the reader to Appendix <ref> for further details.
The a_n coefficients can be determined by substituting Eq. (<ref>) in Eq. (<ref>) and iteratively solving the resulting algebraic equations. The near-horizon solution is then used to set the boundary conditions and numerically integrate the radial equation up to large distances, where the general form of the solution is the following:
R(x) ∼ R^in_s e^-i ω x/x+R^out_s e^i ω x/x^2s+1 (x→∞) .
The GBFs can then be computed from the _s R^l m_in(ω) coefficient. More specifically, the normalization of the scattering problem is set by requiring a_0=1. With this normalization, the GBFs then read:
Γ^s_l m(ω)=δ_s | _s R^l m_in(ω)|^-2 ,
where the coefficient δ_s is given by:
δ_s = τ i e^i π s (2 ω)^2s-1Γ ( 1-s- 2 i ω/τ ) /Γ ( s-2 i ω/τ ) .
Using the method discussed above, we compute the GBFs for perturbations of different spin on the backgrounds of the four RBH space-times discussed earlier, for different values of the regularizing parameter ℓ. In the specific case s=1, we have checked that calculating the GBFs up to l=4 is sufficient for our purposes. The GBFs we calculate are then used to characterize the Hawking evaporation spectra, as we will discuss shortly.
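As a concrete illustration of the procedure, the following Python sketch implements the shooting method for s=1 perturbations on the Bardeen background. It is a simplified stand-in for our production code: only the leading a_0 term of Eq. (<ref>) is kept when setting the boundary condition, derivatives of f are taken by finite differences, τ is identified with ḟ(0) (i.e. with f'(r_H) in horizon-radius units, as appropriate for the near-horizon ingoing behaviour), and the endpoints and tolerances are purely indicative and would require tuning in practice:

import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import gamma as cgamma

s, ell = 1, 0.3  # photon perturbations on a Bardeen background, r_H = 1

def f(x):
    # Bardeen metric function in horizon-radius units, r = x + 1
    return 1.0 - (1.0 + ell**2)**1.5 * (x + 1.0)**2 / (ell**2 + (x + 1.0)**2)**1.5

def df(x, h=1e-6):  return (f(x + h) - f(x - h)) / (2.0 * h)
def d2f(x, h=1e-5): return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

def V(w, x, l):
    # Potential term of the rescaled radial Teukolsky equation (r_H = 1)
    return (w**2 * (x + 1)**2 / f(x) + 2j * s * (x + 1) * w
            - 1j * s * (x + 1)**2 * df(x) / f(x) * w
            + s * (2 * f(x) + 4 * (x + 1) * df(x) + (x + 1)**2 * d2f(x) - 2)
            - l * (l + 1) + s * (s + 1)) * x * (x + 1)

def rhs(x, y, w, l):
    # Real 4-vector packing of the complex second-order ODE
    R, Rp = y[0] + 1j * y[1], y[2] + 1j * y[3]
    A = x**2 * (x + 1)**3 * f(x)
    B = (s + 1) * x * (x + 1) * (2 * (x + 1) * f(x) + (x + 1)**2 * df(x))
    Rpp = -(B * Rp + V(w, x, l) * R) / A
    return [y[2], y[3], Rpp.real, Rpp.imag]

def greybody(w, l, x0=1e-4, xmax=300.0):
    tau = df(0.0)                          # tau = f'(r_H) in these units
    p = -s - 1j * w / tau                  # near-horizon exponent
    R0, Rp0 = x0**p, p * x0**(p - 1.0)     # leading-order ingoing behaviour
    y0 = [R0.real, R0.imag, Rp0.real, Rp0.imag]
    sol = solve_ivp(rhs, (x0, xmax), y0, args=(w, l), rtol=1e-10, atol=1e-12)
    R = sol.y[0, -1] + 1j * sol.y[1, -1]
    Rp = sol.y[2, -1] + 1j * sol.y[3, -1]
    # Match onto R ~ R_in e^{-iwx}/x + R_out e^{iwx}/x^{2s+1} at x = xmax
    uin = np.exp(-1j * w * xmax) / xmax
    uout = np.exp(1j * w * xmax) / xmax**(2 * s + 1)
    mat = np.array([[uin, uout],
                    [uin * (-1j * w - 1.0 / xmax), uout * (1j * w - (2 * s + 1) / xmax)]])
    Rin, Rout = np.linalg.solve(mat, np.array([R, Rp]))
    delta = tau * 1j * np.exp(1j * np.pi * s) * (2 * w)**(2 * s - 1) \
            * cgamma(1 - s - 2j * w / tau) / cgamma(s - 2j * w / tau)
    return abs(delta) / abs(Rin)**2        # GBF, per the expression above

print(greybody(w=0.5, l=1))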
§.§ Evaporation spectra
We now discuss our computation of the photon spectra resulting from Hawking evaporation of the regular BHs discussed previously. In what follows, we only account for the primary photon spectrum. Nevertheless, we have checked that in the mass region of interest the impact of the secondary component of the spectrum, i.e. that resulting from the decay into photons of other unstable particles which are also produced during the evaporation process, is negligible.
The Hawking radiation rate (number of particles emitted per unit time per unit energy) of a given particle species i with spin s, as a result of Hawking evaporation, is given by the following <cit.>: [This expression implicitly assumes that the particles emitted by the BH are not coupled to the regularizing parameter ℓ, an assumption which is reasonable.]
d^2N_i/dtdE_i=1/2π∑_l,mn_iΓ^s_l,m(ω)/ e^ω/T± 1 ,
where n_i is the number of degrees of freedom of the particle in question, ω=E_i is the mode frequency (in natural units), Γ^s_l,m are the GBFs discussed previously, and we have implicitly set k_B=1. Note that the plus (minus) sign in the denominator is associated to fermions (bosons). Following the methodology discussed in Sec. <ref>, we calculate the GBFs within all the BH space-times in question for photons (s=1), up to l=4 (note that the angular node number l should not be confused with the regularizing parameter ℓ). We have checked that adding higher l modes does not appreciably improve the resulting spectra.
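Once the GBFs have been tabulated, the rate above reduces to a few lines of code. In the schematic sketch below (our own, with a hypothetical interface), gbf(E, l) is assumed to return the photon GBF Γ^1_l(E), the factor 2l+1 accounts for the m-degeneracy on a spherically symmetric background, and n_dof=2 counts the two photon polarizations:

import numpy as np

def photon_rate(E, T, gbf, lmax=4, n_dof=2):
    # d^2N/(dt dE) for photons: Bose-Einstein statistics, modes with l >= s = 1
    rate = sum((2 * l + 1) * gbf(E, l) for l in range(1, lmax + 1))
    return n_dof / (2.0 * np.pi) * rate / np.expm1(E / T)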
We show examples of the resulting evaporation spectra in Figs. <ref>, <ref>, <ref>, and <ref>. The spectra obviously depend on the mass of the evaporating PBH, which we have set to M_pbh=10^13 kg, as it sits roughly in the middle of the mass range of interest. Nevertheless, we stress that the features we discuss below do not depend on the chosen mass. The resulting spectra all peak approximately between 5 MeV and 10 MeV. For the Bardeen, Hayward, and GCSV RBHs we observe that an increase in the regularizing parameter ℓ leads to a decrease in the intensity of the spectra at all energies. These behaviours are consistent with the temperature evolution shown in Fig. <ref>, although we stress that studying the temperature alone is not sufficient to draw these conclusions as the GBFs also play a key role in determining the shape and intensity of the resulting spectra, as Eq. (<ref>) makes very clear. For the Bardeen and Hayward RBHs the position of the peak in the spectrum is only mildly affected by the regularizing parameter, an increase in which pushes the peak towards slightly lower energies. On the other hand, for the GCSV BH an increase in the regularizing parameter pushes the peak towards higher energies. We do not exclude that this different behaviour may be related to the type of core being considered, and we defer a more detailed investigation of this point to future work. At any rate, we expect that the behaviour of the spectra discussed above should lead to constraints on f_pbh which are loosened for these classes of primordial RBHs.
Contrary to what we observe for these three RBHs, we see that the intensity of the spectra for Hayward-like RBHs substantially increases with increasing ℓ for sufficiently large energies E ≳ O(MeV), whereas for lower energies it very slightly decreases. This behaviour agrees qualitatively with the expectations which motivated us to construct the Hayward-like RBH in the first place, and with the temperature evolution shown in Fig. <ref>. In addition, we observe that an increase in the regularizing parameter leads to a much richer structure in the spectra: the observed multi-peak structure is the result of the contribution of different l modes in the emission being disentangled more clearly. A similar behaviour is actually observed in the evaporation spectra of Kerr BHs which, by virtue of their rotation, lead to a more complex spectrum where the contribution of high-l modes, while still subdominant with respect to the l=1 one, emerges more clearly (although we have explicitly checked that adding l>4 modes has virtually no effect on our results). In the case of Hayward-like RBHs, we remain agnostic as to the physical reason why such a structure is observed, and leave a more detailed study to follow-up work.
§.§ Evaporation constraints
The spectra calculated in Sec. <ref> are then used to set evaporation constraints on f_pbh(M) ≡Ω_pbh/Ω_dm, the fraction of DM in the form of PBHs, where Ω_pbh and Ω_dm are the PBH and DM density parameters respectively. Specifically, the computed spectra are used to obtain predictions for the flux of photons resulting from Hawking evaporation, which are then directly compared against measurements of the extragalactic photon background across a wide range of energies (see e.g. Ref. <cit.> for a recent review). Evaporation constraints are the dominant ones in the 10^10 kg≲ M_pbh≲ 10^15 kg mass range: the lower limit of the range is set by the requirement that PBHs have not yet evaporated at the time of recombination, whereas the upper limit is defined by measurements of the diffuse extragalactic γ-ray background (EGRB) in the energy range 100 keV≲ E_γ≲ 5 GeV, given that the intensity of the Hawking radiation flux is inversely proportional to the mass of the evaporating BH. In what follows, we will direct our attention exclusively to PBHs for which M_pbh≳ 10^12 kg: these have yet to fully evaporate today and, having formed deep during the radiation domination era, are therefore excellent non-baryonic DM candidates.
We work under the commonly adopted assumption that PBHs are isotropically distributed on sufficiently large scales. Therefore, the flux resulting from their evaporation and reaching us today is given by the redshifted sum of the contributions from all evaporating PBHs in our Universe, and can be used to constrain the average extragalactic distribution of DM in the form of PBHs. We also work within the (also commonly adopted) approximation of monochromatic mass distributions (which can be expected if the formation mechanism arises from an amplification of the power spectrum at a very specific scale), although the effect of extended mass distributions is the subject of active research <cit.>. Finally, as discussed earlier, we only consider the primary photon contribution, as the secondary component resulting from the decay into photons of other unstable particles is verified a posteriori to be negligible given the mass range of interest. While all these are clearly approximations, albeit widely adopted ones, we are confident that they are appropriate given the aim of our work. Our main goal is to examine how the limits on f_pbh change when moving from the Schwarzschild PBH framework to that of the regular metrics presented in Sec. <ref>, potentially opening or closing the asteroid mass window. It is more than reasonable to expect that the shift in constraints relative to the Schwarzschild case δ f_pbh is only weakly affected by the above approximations. In other words, we expect these approximations to have similar impacts on the constraints on f_pbh relative to the Schwarzschild BHs and the RBHs discussed in Sec. <ref>, therefore leading to negligible effects on the shift δ f_pbh, in which we are ultimately interested. In any case, such approximations also allow for a more direct comparison to several previous works and therefore we consider them appropriate for our pilot study, but their impact should definitely be explored in follow-up works. Finally, note that we are tacitly assuming that PBHs cluster in the galactic halo in the same way as other forms of DM (unless they are extremely large, which is not the case for the mass range of interest).
In what follows, we therefore assume that PBHs all have the same initial mass M_pbh. Following Ref. <cit.> we approximate the number of emitted photons in the logarithmic energy bin Δ E_γ≃ E_γ as being given by Ṅ_γ(E_γ) ≃ E_γ(dṄ_γ/dE_γ). The emission rate of photons from Hawking evaporation per volume at a cosmological time t is then given by <cit.>:[In the original paper M_pbh is assumed to be a function of time, due to the evaporation process. However, for the PBH masses considered in the present work (M > 10^12kg) we can safely assume M to be roughly constant during the evaporation process <cit.>.]
dn_γ/dt(E_γ,t) ≃ n_pbh(t)E_γd^2N_γ/dt dE_γ(M_pbh,E_γ) ,
where n_pbh(t) is the PBH number density at time t. By integrating and taking into account the redshift scaling of the photon energy and density we end up with:
n_γ 0(E_γ 0)
= n_pbh(t_0) E_γ 0∫^t_0_t_⋆ dt(1 + z) d^2N_γ/dt dE_γ(M_pbh,(1+z)E_γ 0)
= n_pbh(t_0) E_γ 0∫^z_⋆_0dz/H(z)d^2N_γ/dt dE_γ(M_pbh,(1+z)E_γ 0)
where t_0 denotes the present time, t_⋆ and z_⋆ are respectively the cosmic time and redshift at recombination, and H(z) is the expansion rate. Finally, n_γ 0(E_γ 0) is the present number density of photons with energy E_γ 0. The resulting photon flux (more properly, the rate of photons per unit time per unit area per unit solid angle) is then given by:
I(E_γ 0) ≡c/4πn_γ 0(E_γ 0) .
It is this quantity which can then be directly compared against observations.
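In practice, Eqs. (<ref>,<ref>) amount to a single quadrature. The sketch below (our own illustration, with placeholder cosmological parameters standing in for those adopted in the main text, and glossing over careful unit bookkeeping) evaluates the flux per unit present-day PBH number density:

import numpy as np
from scipy.integrate import quad

H0, Om, OL = 2.2e-18, 0.3, 0.7  # H0 in 1/s; placeholder flat LambdaCDM values
H = lambda z: H0 * np.sqrt(Om * (1.0 + z)**3 + OL)
C_LIGHT = 2.998e8  # m/s

def flux_per_pbh(E0, M, d2N_dtdE, zstar=1100.0):
    # I(E0)/n_pbh: redshifted integral of the Hawking rate d2N_dtdE(M, E)
    integrand = lambda z: d2N_dtdE(M, (1.0 + z) * E0) / H(z)
    return C_LIGHT / (4.0 * np.pi) * E0 * quad(integrand, 0.0, zstar)[0]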
We assume a spatially flat ΛCDM cosmological model in specifying the expansion rate entering into Eq. (<ref>), with the same cosmological parameters as in Ref. <cit.>. This allows us to robustly cross-check our Schwarzschild constraints on f_pbh against those reported in the seminal Ref. <cit.>, although we stress that our constraints are very stable against reasonable changes in the values of the cosmological parameters. Once the cosmological model is fixed, all the relevant quantities in Eq. (<ref>) are known except for the present-day PBH number density, n_pbh(t_0), which can be constrained from EGRB observations and is ultimately related to f_pbh. More specifically, for any given value of the PBH mass M_pbh, through Eqs. (<ref>,<ref>,<ref>) we can compute the unnormalized photon flux I(E_γ 0)/n_pbh(t_0), and adjust the normalization n_pbh(t_0) by comparing against EGRB observations (as we will explain shortly). This procedure gives us an upper limit on n_pbh(t_0), which can be translated into an upper limit on f_pbh as follows:
f_pbh(M_pbh) ≡Ω_pbh/Ω_dm = n_pbh(t_0)M_pbh/ρ_crit,0Ω_dm ,
where ρ_crit,0=3H_0^2/8π G is the present-day critical density, with H_0 the Hubble constant, and we recall that this procedure is done for various values of M_pbh.
We compare our theoretical predictions against various measurements of the EGRB. Specifically, we use observations of the EGRB from the HEAO-1 X-ray telescope in the 3-500 keV range <cit.>, the COMPTEL imaging Compton γ-ray telescope in the 0.8-30 MeV range <cit.>, and the EGRET γ-ray telescope <cit.>. A few comments are in order concerning the adopted datasets. While these are by now a couple of decades old, they basically still represent the state-of-the-art in the energy range of interest. One could entertain other observations, including local galactic measurements of the galactic γ-ray background <cit.>, positron flux <cit.>, 0.511 MeV annihilation radiation <cit.>, and various other sources. While these galactic observations could give potentially stronger limits, they depend strongly on the form of the PBH mass function (assumed to be monochromatic in our study), as well as the clustering properties of these PBHs. On the other hand, our limits on f_pbh are effectively testing the average extragalactic distribution of DM. Finally, other measurements of the EGRB are available, e.g. from Fermi-LAT <cit.>, but these are mostly important for energy ranges larger than the ones of interest, and therefore for PBHs lighter than the ones we are considering. Therefore, we believe the choice of datasets (which is the one adopted in several works estimating evaporation limits on PBHs) is appropriate given the objective of our study.
To set upper limits on n_pbh(t_0) – and therefore f_pbh through Eq. (<ref>) – we adopt the simple method first explained in the seminal Ref. <cit.>, and later adopted in most of the works examining constraints on PBHs from the EGRB. Specifically, for each value of M_pbh, and for given values of the regularizing parameter ℓ, the maximum allowed value of f_pbh is determined by requiring that the predicted photon flux does not overshoot any of the ERGB datapoints by more than 1σ. An example is shown in Fig. <ref> for a Bardeen PRBH with regularizing parameter ℓ=0.3 r_H: for each of the mass values M_pbh represented, the upper limit on f_pbh is set as soon as the first datapoint is overshot. As is clear from the Figure, different PBH masses result in different datapoints being overshot. For each of the PRBHs considered, we use this procedure to determine upper limits on f_pbh for fixed, representative values of ℓ (ℓ/r_H=0.15, 0.3, and 0.45 for all BHs, as well as 0.6 and 0.75 only for the Hayward-like BH), comparing the results to the Schwarzschild case which is recovered when ℓ=0. [Clearly, from the statistical point of view, more precise analyses are possible. For instance, one could construct a metric to be minimized (χ^2 or similar), or adopt a fully-fledged Bayesian approach exploring the joint M-f_pbh-ℓ posterior. Nevertheless, we believe our approach is sufficient for the purposes of our work, for several reasons. First and foremost, as with the approximations discussed earlier, adopting this method allows for a more direct comparison to several previous works. Furthermore, for most of these older datasets, often only the datapoints shown in Fig. <ref> are available, with no further available details on aspects which would be required to properly build a χ^2 or likelihood (e.g. correlation between the datapoints, instrumental details, and so on). Finally, we expect that the relative shift in f_pbh limits with respect to their Schwarzschild counterparts will be largely unaffected by the adopted methodology. For all these reasons, and especially being ours a pilot study, we believe the adopted methodology is appropriate for the purposes of our work.] We note that the exact origin of the EGRB is currently a matter of debate <cit.>: although it is believed that distant astrophysical sources such as blazars give a major contribution to the EGRB, there is no complete consensus on the level of this contribution. In this light, our approach of simply requiring that the PRBH evaporation contribution to the EGRB does not exceed any observed datapoint is rather conservative (given that there could in principle be a PBH contribution to the EGRB, should it be conclusively determined that known astrophysical sources cannot fully account for the latter).
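The procedure just described can be condensed into a few lines. In the schematic sketch below (our own; ρ_crit,0 and Ω_dm are placeholder values, and flux_per_pbh(E, M) denotes the unnormalized flux of the previous subsection), the returned limit corresponds to the first datapoint being overshot by more than 1σ:

import numpy as np

RHO_CRIT, OMEGA_DM = 8.6e-27, 0.26  # kg/m^3 and DM density parameter (placeholders)

def f_pbh_limit(M, E_data, I_data, sigma_data, flux_per_pbh):
    # Largest n_pbh such that no EGRB datapoint is overshot by more than 1 sigma,
    # translated into f_pbh via the relation above (capped at f_pbh = 1)
    I_unit = np.array([flux_per_pbh(E, M) for E in E_data])  # flux for n_pbh = 1
    n_max = np.min((I_data + sigma_data) / I_unit)
    return min(1.0, n_max * M / (RHO_CRIT * OMEGA_DM))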
§ RESULTS
For each of the PRBHs discussed in Sec. <ref>, we now proceed to derive upper limits on f_pbh as a function of the PRBH mass M_pbh, for different values of ℓ, using the methodology presented in Sec. <ref>. The results are shown in Figs. <ref>, <ref>, <ref>, and <ref> for the Bardeen, Hayward, GCSV, and Hayward-like BHs respectively. For each case, we also plot the constraints on f_pbh for the ℓ=0 case (blue solid curve in all the Figures), which correspond to the standard Schwarzschild PBH scenario widely studied in the literature. As a sanity check, we have verified that our ℓ=0 constraints exactly recover those of the seminal Ref. <cit.>. It is worth noting that, for any given value of ℓ, the value of M_pbh corresponding to the upper right edge of the f_pbh constraints (i.e. the value of M_pbh for which the limit reads f_pbh<1) marks the lower edge of the (modified – either enlarged or contracted) asteroid mass window.
We begin by discussing the Bardeen, Hayward, and GCSV PRBHs, for which we saw earlier that the temperature and photon spectra decrease in intensity with increasing regularizing parameter ℓ (see the discussion in Sec. <ref>, and Figs. <ref>–<ref>). As we could have expected, this behaviour leads to overall looser constraints on f_pbh (for any given M_pbh) relative to the standard limits reported for Schwarzschild PBHs in the literature. In the case of near-extremal Hayward PRBHs (ℓ=0.45r_H) this behaviour is somewhat enhanced compared to the near-extremal Bardeen and GCSV PRBHs, with the upper limits on f_pbh approximately three orders of magnitude looser than the corresponding Schwarzschild ones: again, this is somewhat unsurprising when comparing Fig. <ref> to Figs. <ref> and <ref>. This could also have been expected from Fig. <ref>, noting that the temperature of Hayward BHs decreases more rapidly with increasing ℓ/r_H relative to the Bardeen and especially GCSV ones. Although the temperature is not the only factor at play in determining the resulting evaporation spectrum, given that the GBFs also play a key role as per Eq. (<ref>), it is reassuring that the temperature behaviour observed in Fig. <ref> is qualitatively reflected in the limits on f_pbh we derive.
For the Hayward-like BH, we observe exactly the opposite trend, with overall tighter constraints on f_pbh relative to the Schwarzschild ones. Again, such a behaviour is in line with expectations given the behaviour of the temperature observed in Fig. <ref>, and of the spectra as shown in Fig. <ref>. The relative shift in f_pbh constraints for a given value of M_pbh and ℓ/r_H is somewhat less dramatic than what we observed for the Bardeen, Hayward, and GCSV BHs, with shifts of less than two orders of magnitude.
As a result of the shifts discussed above, the lower edge of the asteroid mass window where PBHs could make up the entire DM component is modified for all four metrics considered. We recall that in the Schwarzschild case, the lower edge of the window lies at M_pbh≃ 10^14 kg. For the Bardeen, Hayward, and GCSV PRBHs, the looser constraints on f_pbh result in the asteroid mass window further opening up by approximately half a decade in mass or more. The maximum extension of the window is reached for the Hayward PRBH closest to extremality, in which case the lower edge decreases by about an order of magnitude to M_pbh≃ 10^13 kg. The opposite behaviour is of course observed for Hayward-like PRBHs, in which case the asteroid mass window further closes down, although to a lesser extent with respect to the previous cases. For instance, for ℓ/r_H=0.75, the lower edge increases to M_pbh≃ 2× 10^14 kg. Overall, we therefore observe that considering primordial regular BHs in place of the standard Schwarzschild ones can move the resulting constraints on f_pbh in either direction, further opening up or closing down the asteroid mass window, with the allowed region for the window lower edge spanning over a decade in mass, at least for the PRBHs considered.
Three comments are in order before concluding. Firstly we note that, for a given PRBH space-time, the curves describing the f_pbh(M) limits are approximately, but not exactly, parallel to the Schwarzschild ones (blue solid curves in Figs. <ref>–<ref>). The reason is simply that, as ℓ is increased, the datapoint shown in Fig. <ref> which is first being overshot and therefore responsible for determining the f_pbh limit can potentially change (in part due to the spectrum slightly changing shape, especially in the Hayward-like case where we saw that an increase in ℓ led to a richer peak structure due to the effect of different l modes).
Next, the constraints we have determined on f_pbh at a fixed value of ℓ/r_H implicitly assume that all PRBHs in the Universe carry the same value of “hair” parameter ℓ. However, particularly given our agnostic stance with regards to the origin of these space-times, in principle the value of ℓ/r_H can vary from PRBH to PRBH. To make an analogy, let us assume for a moment that Reissner-Nordström BHs are astrophysically relevant. Then, since the electric charge Q is not tied to a universal parameter of the underlying Einstein-Maxwell Lagrangian, there is no reason to expect it to carry the same value across all BHs. In the language of Ref. <cit.>, the regularizing parameter for all four RBHs considered is a “specific hair” rather than an “universal hair” (see Ref. <cit.> for various examples of BH solutions carrying universal hair), unless one were able to tie ℓ to some fundamental parameter of the underlying theory, which however is not the case in the phenomenological approach we are following. In principle one should therefore account for the (non-monochromatic) ℓ distribution for PRBHs across the Universe to determine constraints on f_pbh. We see no obvious way of doing this, while noting that such a procedure would most likely result in upper limits on f_pbh lying between the Schwarzschild and extremal cases: this observation suffices for our pilot study, and we defer a more complete investigation to future work.
Our final comment concerns the fact that evaporation limits on the PBH abundance are not the only ones at play. Indeed, as recently summarized in Ref. <cit.>, there are essentially four classes of limits, each of which is relevant in a different mass range: evaporation, lensing, dynamical, and accretion constraints. Constraints from the accretion of background gas at early times are relevant in a completely different mass range (10^30≲ M_pbh/kg≲ 10^37 – see Fig. 7 of Ref. <cit.> and Fig. 10 of Ref. <cit.>). Although these have been derived assuming Schwarzschild PBHs, moving to the PRBH picture we have considered will not shift the relevant mass range by the ≳ 18 orders of magnitude required for these constraints to compete with the evaporation ones, unless the physics of gas accreting around RBHs changes drastically with respect to the standard picture, which appears very unlikely. Dynamical constraints, most of which are associated to the destruction of different astronomical objects by the passage of nearby PBHs, are also relevant in a completely different mass range (10^31≲ M_pbh/kg≲ 10^52 – see Fig. 7 of Ref. <cit.> and Fig. 10 of Ref. <cit.>), and considerations completely analogous to those we made for accretion constraints hold. [Potential exceptions to this mass range for dynamical constraints are those from capture of PBHs by white dwarfs or neutron stars at the centres of globular clusters <cit.>, or from supernovae explosions resulting from transit of a PBH through a white dwarf <cit.>. However, these limits are highly disputed because of uncertainties in the dark matter density in globular clusters <cit.>, or based on the results of hydrodynamical simulations <cit.>. For these reasons, we will not consider the previously mentioned limits in our discussion.]
Of potentially more relevance to the present work are lensing constraints, which constrain the abundance of PBHs (and more generally massive compact halo objects) with masses M_pbh≳ 10^12 kg. Indeed, it is lensing constraints which locate the upper edge of the asteroid mass window where PBHs can make up all the DM. Nevertheless, we expect that these constraints should not change when moving from the Schwarzschild PBH framework to the PRBHs considered in this work. Indeed, with all other quantities being fixed (mass of source, relative distances, and so on), lensing constraints only depend on the lens mass M, and are unaffected by the metric structure of the lens. Therefore, at fixed mass M, we can assume that the lensing limits on Schwarzschild PBHs hold for our PRBHs as well. Note that, as already pointed out in footnote 2, the parameter M appearing in the RBH metrics can be unambiguously identified with the RBH mass, just as with the parameter M in the Schwarzschild metric. We can therefore conclude that for the PRBHs we are considering it is only the lower edge of the asteroid mass window which is altered with respect to the Schwarzschild case, but not the upper edge. In other words, space-times for which the lower edge moves towards lower masses (as in the Bardeen, Hayward, and GCSV PRBH cases) genuinely corresponds to an enlarged asteroid mass window, and conversely for the Hayward-like case where the lower edge moves towards higher masses. Therefore, the window where Bardeen, Hayward, and GCSV PRBHs could account for all the DM is larger compared to that of Schwarzschild PBHs.
Other potentially relevant constraints come from μ-distortions in the Cosmic Microwave Background, and gravitational waves (either a stochastic background due to a population of coalescing PBHs or produced via second-order tensor perturbations generated by the scalar perturbations producing the PBHs, or associated to resolved events). The latter are expected to be relevant in a much higher mass range (again, see Fig. 7 of Ref. <cit.> and Fig. 10 of Ref. <cit.>), whereas the former are somewhat dependent on the PBH formation scenario from high-σ tails of density fluctuations, and in particular on the shape of the tail. At any rate, while the focus in the present pilot study has been solely on evaporation constraints from the ERGB, revisiting all these other important sources of constraints (including the ones discussed earlier) is a worthwhile endeavour which we plan to explore in upcoming works.
§ CONCLUSIONS
Over the past decade, primordial black holes have regained tremendous interest as viable dark matter candidates, with the so-called “asteroid mass window” (10^14 kg≲ M_pbh≲ 10^20 kg) where PBHs could potentially account for the entire DM currently still open. Nearly all works on PBHs assume that these are Schwarzschild or Kerr BHs. However, while phenomenologically perfectly valid, such an assumption may stir some unease on the theoretical side, due to the appearance of singularities in these metrics. In our work, we have conducted a pilot study aimed at addressing a question which naturally merges the DM and singularity problems, arguably two among the most important open problems in theoretical physics: “What if PBHs are non-singular”? Our study of primordial regular BHs (PRBHs) has focused on four so-called tr-symmetric metrics (including the well-known Bardeen and Hayward space-times), whereas our companion paper <cit.> considers non-tr-symmetric metrics, including various metrics inspired from loop quantum gravity.
We show that evaporation constraints on f_pbh, the fraction of DM in the form of PRBHs, can be substantially altered in either direction when moving away from the Schwarzschild picture, leading to the asteroid mass window further opening up or closing down depending on the direction of the shifts in f_pbh limits. For three of the PRBHs we considered (the Bardeen, Hayward, and GCSV space-times) the lower edge of the asteroid mass window is shifted by nearly a decade in mass, leading to a larger region of parameter space where PRBHs could account for the entire DM component, which should be the target of the same probes proposed to test the standard window <cit.>. The opposite trend takes place with the Hayward-like BH we constructed, and we have argued that part of this different behaviour can be traced back to the evolution (increase or decrease) of the PRBH temperature as the regularizing parameter is increased. On the other hand, the nature of the regular BH core (de Sitter or Minkowski) does not appear to play a significant role in this sense. Overall, we have shown that the phenomenology of primordial regular BHs can be particularly rich, making the associated simultaneous solution to the DM and singularity problems one worthy of further study.
We remark that the present work (alongside our companion paper <cit.>) should be intended as a pilot study, and there are a huge number of interesting follow-up directions. One interesting avenue for further work involves systematically revisiting other sources of constraints which have been studied in the Schwarzschild PBH case, including but not limited to lensing, accretion, and dynamical constraints: while we have argued that these should not alter our considerations on the asteroid mass window, a detailed study which would allow us to extend our constraints over a much larger region of M_pbh-f_pbh plane is nevertheless in order. In addition, the metrics we have considered are inherently phenomenological in nature, and it would therefore be worth extending our study to non-singular metrics which enjoy a strong theoretical motivation (our companion paper <cit.> goes partially in this direction), including potentially metrics which are coupled to the cosmological expansion. Last but definitely not least, if PBHs are truly regular, one would hope to ascertain this via signatures complementary to those we have studied (for instance through gravitational wave signatures, VLBI imaging, particle motion, or energy extraction). We plan to address these and other related points in follow-up work.
We acknowledge support from the Istituto Nazionale di Fisica Nucleare (INFN) through the Commissione Scientifica Nazionale 4 (CSN4) Iniziativa Specifica “Quantum Fields in Gravity, Cosmology and Black Holes” (FLAG). M.C. and S.V. acknowledge support from the University of Trento and the Provincia Autonoma di Trento (PAT, Autonomous Province of Trento) through the UniTrento Internal Call for Research 2023 grant “Searching for Dark Energy off the beaten track” (DARKTRACK, grant agreement no. E63C22000500003). This publication is based upon work from the COST Action CA21136 “Addressing observational tensions in cosmology with systematics and fundamental physics” (CosmoVerse), supported by COST (European Cooperation in Science and Technology).
§ DETAILS ON THE COMPUTATION OF GBFS
Here we provide a few more details on the computation of GBFs. We recall that we expressed the solutions to the radial Teukolsky equation, Eq. (<ref>), in the form of a Taylor expansion as given by Eq. (<ref>). This is also known as a Frobenius series, being a by-product of a method for solving second-order differential equations named after Frobenius. The method applies to equations which take the following form
u'' + p(x) u' + q(x) u = 0 ,
in the proximity of its singular points, namely those where p(x) and q(x) diverge. One can notice that Eq. (<ref>) can be rewritten in the form of Eq. (<ref>), with one of its singular points being at x=0, i.e. at the event horizon.
To solve the radial Teukolsky equation we therefore proceed as follows:
* We work in units of the event horizon and rewrite Eq. (<ref>) in order to remove the denominators
A(x)R''_s + B(x)R'_s + C(x) R_s=0,
where the functions A(x), B(x), and C(x) are given by the following:
A(x) = f^2 (x+1)^2 ,
B(x) = (s+1) f^2(x) ( 2(x+2) + (x+1)^2 f'/f ) ,
C(x) = (x+1)^2 ω^2 + 2 i s (x+1) ω f - i s (x+1)^2 ω f' + s f ( (x+1)^2 f'' + 4 (x+1) f' + 2 f - 2 ) - l (l+1) f + s (s+1) f ,
* The lowest-power term around x=0 of each coefficient can be written in the following form:
A(x)∼ x^2 τ^2 ,
B(x)∼ x (s+1) τ^2 ,
C(x) ∼ω^2 - i ω s τ ,
where τ=τ(ℓ) depends on the choice of RBH.
* We then build the following characteristic equation:
m(m-1) τ^2 + m (s+1) τ^2 + ω (ω - i s τ)=0 ,
whose solutions are the following:
m_1 = -s - i ω/τ , m_2 = i ω/τ .
* It is then possible to conclude that Eq. (<ref>) admits solutions near the singular point x=0 of the form given by Eq. (<ref>).
Explicitly, for the four RBHs in question, τ is given by the following:
τ_B = 1 - 2ℓ^2/(ℓ^2+1) ,
τ_H = (1-3 ℓ^2) ,
τ_GCSV = 1-ℓ ,
τ_H-l = (ℓ-1)^2/( 1-ℓ(ℓ-1) ) .
We notice that in the Schwarzschild limit ℓ→ 0, all of the above reduce to τ=1 as one could expect.
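As a quick cross-check of the procedure above, the indicial exponents and the Schwarzschild limits of τ can be verified symbolically. The following minimal Python/SymPy sketch (our illustration only; the variable and dictionary names are ours, not part of the original computation) solves the characteristic equation and confirms that all four τ(ℓ) reduce to 1 as ℓ→0:

```python
import sympy as sp

# Indicial (characteristic) equation built from the lowest-power terms of A, B, C
m, s, omega, tau = sp.symbols('m s omega tau')
indicial = m*(m - 1)*tau**2 + m*(s + 1)*tau**2 + omega*(omega - sp.I*s*tau)
print(sp.solve(indicial, m))            # [-s - I*omega/tau, I*omega/tau]

# Schwarzschild limit: tau(ell) -> 1 as ell -> 0 for all four RBHs
ell = sp.symbols('ell', positive=True)
taus = {
    'Bardeen':      1 - 2*ell**2/(ell**2 + 1),
    'Hayward':      1 - 3*ell**2,
    'GCSV':         1 - ell,
    'Hayward-like': (ell - 1)**2/(1 - ell*(ell - 1)),
}
for name, t in taus.items():
    print(name, sp.limit(t, ell, 0))    # each limit evaluates to 1
```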
|
http://arxiv.org/abs/2409.03372v1 | 20240905092038 | Simple measures to capture the robustness and the plasticity of soil microbial communities | [
"Takashi Shimada",
"Kazumori Mise",
"Kai Morino",
"Shigeto Otsuka"
] | q-bio.PE | [
"q-bio.PE",
"physics.bio-ph",
"physics.data-an"
] |
Department of Systems Innovation, School of Engineering, The University of Tokyo
Mathematics and Informatics Center, The University of Tokyo
Graduate School of Agricultural and Life Sciences, The University of Tokyo
Graduate School of Science, The University of Tokyo
Bioproduction Research Institute, National Institute of Advanced Industrial Science and Technology
Interdisciplinary Graduate School of Engineering Sciences, Kyushu University
Graduate School of Agricultural and Life Sciences, The University of Tokyo
Collaborative Research Institute for Innovative Microbiology, The University of Tokyo
§ ABSTRACT
Soil microbial communities are known to be robust against perturbations such as nutrition inputs, which can be an obstacle to soil improvement.
On the other hand, their adaptable aspect has also been reported.
Here we propose simple measures for these seemingly contradictory features of soil microbial communities, robustness and plasticity, based on the distribution of the populations.
The first measure is the similarity in the population balance, i.e. the shape of the distribution function, which is found to show resilience against the nutrition inputs.
The other is the similarity in the composition of the species measured by the rank order of the population, which shows an adaptable response while the population balance is recovering.
These results clearly show that the soil microbial system is robust (or homeostatic) in its population balance, while the composition of the species is rather plastic and adaptable.
Simple measures to capture the robustness and the plasticity of soil microbial communities
Shigeto Otsuka
September 9, 2024
==========================================================================================
Understanding the stability of ecosystems has been a central question in ecology, often with emphasis on local stability, persistence, permanence, resilience, etc., and their relation to diversity <cit.>.
It has also inspired the broader study of the robustness of complex systems consisting of many interacting components <cit.>.
However, empirical tests of theoretical predictions on animal-plant ecosystems have been limited to natural experiments, i.e. comparing systems under different environments and with different histories, mainly because their large spatial and temporal scales preclude controlled experiments.
Microbial ecosystems are among the best systems to overcome this limitation, because they are easy to manipulate and their development can be observed on a short time scale.
Bacteria in soil are also known to be extremely abundant and diverse, with estimates generally ranging from 10^6 to 10^10 cells <cit.> and 6,400-38,000 taxa <cit.> per gram of soil. These bacteria play vital roles in biogeochemical cycling, plant growth, and the maintenance of terrestrial ecosystems. Reflecting the importance of soil functions for terrestrial ecosystems, there has been considerable effort invested in understanding the response of soil ecosystems to disturbance <cit.>.
The possibility of such an approach was brilliantly illustrated in a pioneering work on the microbial communities of natural soil and natural water <cit.>.
In that study, it was shown that macroscopic characteristics of the micro-ecosystems, such as their dry weight and pH, can be modulated by an artificial selection process.
The difference was inherited by the subsequent systems even after the artificial selection was stopped.
This shows that microbial ecosystems have both plasticity and robustness (or stability) in a macroscopic sense, while the underlying microscopic mechanism remained largely unknown.
Advance of DNA sequencing technology has enabled microbiologists to obtain detailed information on dozens of microbial communities at one time <cit.>.
This has paved a way to elaborate time-series investigation of microbial ecosystem successions <cit.>.
While many have elucidated the succession of microbial communities in a descriptive manner, for example by cataloguing a plethora of microbial clades (taxonomic groups) that increase or decrease during the ecosystem development <cit.>, others summarize the dynamics of the systems using statistical features such as
co-occurrence network indicators or ARIMA modeling <cit.>.
Other characteristics such as the shape of species abundance distribution <cit.> and the network structures of interactions <cit.> have been also the target of investigation.
However, the interpretation of these features is by no means straightforward.
In this study, we propose simple and comprehensible measures to characterize the state of microbial communities, based on the balance and the composition of populations of the operational taxonomic units (OTUs).
These two measures are independent of each other in the sense that there always exists a direction in which to modulate the community so that one measure changes while the other is kept constant.
It is shown that these measures well capture the conserved aspect (relating to the stability, robustness, etc.) and the adaptive aspect (plasticity) of the microbial community.
§ RESULTS
§.§ System and Preparation
As a model system of community dynamics in response to perturbation, we here use time-series data of soil microbial communities receiving nutrient input <cit.>. This dataset consists of soil microbial community structures (populations of OTUs) at five time points, i.e. 0, 3, 10, 17 and 24 days after the initial preparation. As shown in TABLE <ref>, each nutrient input condition is a combination of a nitrogen source, ammonium chloride (NH_4Cl, denoted by A in the condition labels) or urea (U), and a carbon source, glucose (G) or cellobiose (C). We also have two types of input schedule: either the full amount once at the beginning of the observation (day 0), or one quarter of the amount once a week, four times in total (days 0, 7, 14, and 24); these are denoted by 1 and 4 in the condition labels, respectively. Negative controls without nutrient input were also prepared (“Control”).
This nutrient amendment schemes were designed to cover various intensities and types of perturbations.
Urea and cellobiose represent more slowly acting nutrition sources than ammonium and glucose, since the former two require enzymatic degradation before being catabolized by microbes.
Regarding the input schedules, the one-time and the four-time schemes represent large abrupt perturbation and small but continuous perturbation, respectively.
Since we have three samples for each condition, a soil sample α is specified by the combination of all these parameters as
α = (t, d, s), where t ∈{ Control, AG1, AG4, AC1, AC4, UG1, UG4, UC1, UC4}, d ∈{ 0, 3, 10, 17, 24 }, and s ∈{ 1, 2, 3 } represent the treatment condition, the observation day, and the sample number for each condition, respectively. Because the data at d=0 exist only for Control, the total number of soil samples is (1 × 5 × 3) + (8 × 4 × 3) = 111.
§.§ Species Abundance Distributions
We first confirm the basic characteristics of the control communities by their relative species abundance (RSA) distribution of operational taxonomic units (OTUs), which we shall call the OTU-RSA.
For this purpose, we compare the fittings by classical theoretical distributions for species abundance of ecological systems, namely, log-normal distribution, power-law (Zipf/Pareto) distribution, and the negative binomial distribution.
An example of fitting is shown in the left panel of Fig. <ref>.
The OTU-RSAs of control soils are found to be better fitted by the log-normal and power-law distributions than by the negative binomial distribution, with a distinct difference in log-likelihood and, equivalently, in AIC, since all these distributions have two parameters.
The obtained fitting parameter values consistently indicate that the OTU-RSA of the soil microbe ecosystem before any disturbance is broad (see Materials and Methods and SI for detail).
This feature is consistent with systems in natural environments <cit.>, implying that the control soils maintain such an intact state.
The abundance distributions of the soils under or after nutrition input are also fat-tailed, with a typically narrower shape compared to that of the control soils (Fig. <ref>, right panel).
A typical response of the RSA to the nutrition input is characterized by an initial decrease of diversity,
measured by the number of detectable OTUs or by the Shannon entropy of OTU abundances
S = - ∑_r p(r) ln p(r),
and an increase of the dominance of the top tens of OTUs.
This decrease is followed by a recovery to a smoother distribution, nearer to that of the original control soils (TABLE <ref> and Supplemental Information).
Such changes in the RSA clearly illustrate a homeostatic aspect of the soil microbial ecosystem: fertilization typically first leads to a more oligopolistic abundance distribution, but the system later shows resilience toward the original population balance.
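For concreteness, the diversity S can be computed directly from the OTU counts of a sample; a minimal sketch (our own illustrative helper, not the authors' analysis code) is:

```python
import numpy as np

def shannon_entropy(counts):
    """Shannon diversity S = -sum_r p(r) ln p(r) over detected OTUs."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0]          # keep detected OTUs only
    p /= p.sum()          # relative abundances
    return -np.sum(p * np.log(p))
```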
§.§ Similarity in RSA shape
The above observation about the typical change in the shape of OTU-RSAs naturally motivates us to define a similarity measure solely based on that.
A simple and natural choice is to take the linear correlation coefficient between the relative OTU abundances of soil α and soil β at the same rank r,
B_αβ^R_B ≡ ∑_r=1^R_B ( p^α_r - p̄^α ) ( p^β_r - p̄^β ) / [ √(∑_r=1^R_B ( p^α_r - p̄^α )^2) √(∑_r=1^R_B ( p^β_r - p̄^β )^2 ) ],
where p^χ_r represents the relative abundance of the OTU whose abundance rank is r in soil χ, and p̄^χ = ∑_r=1^R_B p^χ_r / R_B denotes the average abundance up to the maximum rank R_B taken into account. Since the OTU-RSA has a broad shape, the maximum rank R_B in the following is set to 1,000 so as to include information from as diverse OTUs as possible.
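In practice, B_αβ^R_B is simply the Pearson correlation between the two rank-sorted relative-abundance vectors truncated at rank R_B. A minimal sketch of such a computation (an illustrative helper of our own, assuming each input is a vector of OTU relative abundances in arbitrary order) is:

```python
import numpy as np

def balance_similarity(p_alpha, p_beta, r_max=1000):
    """B similarity: Pearson correlation of rank-sorted relative abundances,
    truncated at rank r_max (the maximum rank R_B in the text)."""
    a = np.sort(np.asarray(p_alpha, dtype=float))[::-1][:r_max]
    b = np.sort(np.asarray(p_beta, dtype=float))[::-1][:r_max]
    n = min(len(a), len(b))
    return np.corrcoef(a[:n], b[:n])[0, 1]
```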
The average B similarities of the soil communities under each treatment condition t observed on day d to the control soil,
B̅(t, d) = ∑_t_α = t, d_α = d; t_β = ControlB_αβ/∑_t_α = t, d_α = d; t_β = Control 1,
are shown in Fig. <ref> (bars).
We can see that the temporal evolutions of the soils in this measure against various inputs (different combinations of nutrition and different schedules) universally show a homeostatic response, i.e. the similarity to the original control community first drops and then recovers.
The response of the soils to the continuous inputs is particularly interesting, because the recovery takes place during the successive nutrition input. This is why we regard it as homeostatic, rather than a mere relaxation back to the original state in the absence of the perturbation.
§.§ Similarity in OTU abundance rank orders
We next seek a measure to capture a totally different aspect of the soil microbial community.
Because the systems have been confirmed to show robustness in their population balance (the shape of the RSA), the remaining degree of freedom in the OTU abundance distribution is in its composition.
Suppose that two communities have the same OTU-RSA; the two communities can still be different if the OTUs occupying each rank position differ.
In other words, the order of OTUs in relative abundance (whether a particular OTU is more abundant than another particular OTU) is independent of the shape of the OTU-RSA distribution.
Our similarity measure for the composition of OTUs, with respect to the OTU-RSA information, is defined by Kendall's rank correlation between the abundance ranks of the top N_C major OTUs of the two communities in comparison, α and β:
𝒞_αβ^N_C ≡ ( #_c - #_d ) / [ N_C (N_C -1)/2 ],
where #_c and #_d represent the numbers of concordant OTU rank pairs, i.e.
( r^α_i - r^α_j ) ( r^β_i - r^β_j ) > 0,
and discordant OTU rank pairs, i.e.
( r^α_i - r^α_j ) ( r^β_i - r^β_j ) < 0,
respectively.
where r^χ_k represents the abundance rank of OTU k in soil χ. If a certain OTU is not detected in a sample, we replace its rank r^χ_k by a common number r_max which is larger than the total number of OTUs. Therefore, for example, if both OTUs i and j are missing in a soil α, any combination of OTU abundance ranks in the other soil β (r_i^β and r_j^β) will not be counted as concordant or discordant.
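The counting of concordant and discordant pairs can be sketched as follows (our own illustrative helper; inputs are the abundance ranks of the same top-N_C OTUs in the two soils, with undetected OTUs already assigned the common rank r_max):

```python
import numpy as np

def composition_similarity(rank_alpha, rank_beta):
    """C similarity: Kendall-type rank correlation between OTU abundance
    ranks of two soils.  Tied pairs (e.g. two OTUs both undetected in one
    soil) count as neither concordant nor discordant."""
    ra, rb = np.asarray(rank_alpha), np.asarray(rank_beta)
    n = len(ra)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            sign = (ra[i] - ra[j]) * (rb[i] - rb[j])
            if sign > 0:
                concordant += 1
            elif sign < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```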
The parameter N_C is chosen to be large enough to take in as much information as possible, but not so large that errors from the detection limit come into play.
To determine proper N_C, we first check how many OTUs are commonly observed in the soils under different conditions.
For this purpose, we evaluate the overall popularity (majority) of each OTU by aggregating all the OTU abundances across the available conditions.
We find that almost all of the top 300 major OTUs in the aggregated data are shared by all soils (shown in SI, Fig. <ref>).
Some OTUs in the next overall-majority rank range become absent (undetectable) in some soils.
Therefore, in the following we choose N_C = 100; OTUs up to this rank are expected to provide the majority-minority information well above the detection limit of OTU abundances in the current experiment.
One can confirm that the following results are not sensitive to the precise choice of N_C (for example, for N_C = 30 and N_C = 300).
The present measure 𝒞 takes the value 1 if the rank order between OTUs (the majority-minority relation) is kept for all pairs, and a value near 0 if there is no correlation.
It can take negative values if reversals of the majority order are dominant, ultimately reaching -1 when all majority-order pairs are flipped.
Because this measure is independent of the shape of the RSA distribution, and also because a swap in the ordering of OTU abundances does not change the RSA shape, the previously defined “population-balance measure” B and the presently introduced “composition measure” C can be said to be independent, or orthogonal, to each other.
The OTU composition similarities 𝒞 measured using the top N_C = 100 major OTUs are shown in Fig.<ref>, where bars again represent the average similarity of the samples with treatment condition t and the observation day d to the control soils:
C̅(t, d) = ∑_t_α = t, d_α = d; t_β = ControlC_αβ/∑_t_α = t, d_α = d; t_β = Control 1.
The similarity among the control soils is confirmed to be high in this metric. In contrast, the average similarity to the control soils of the soils 3 days after the nutrition inputs is low, meaning that the composition of OTUs has changed greatly from that of the control soils.
What is remarkable is that the similarity of the perturbed soils to the control soils decreases further at later times.
This tendency is universally observed for the different conditions and for different choices of N_C, as shown in Table I and the SI.
Recalling that the ℬ similarity increases over the same late period, the observed further change in OTU composition takes place during the recovery process of the population balance.
This implies that the response in the composition measure 𝒞 is not just a slower response, but captures the plasticity of the soil community to the nutrition input.
§ DISCUSSION
In this study, we have proposed simple similarity measures to characterize the response of the soil microbial communities to nutrition inputs.
The first one, ℬ, is based on the balance of the populations (shape of OTU-RSA), and the other one, 𝒞, is based on the composition of OTUs.
Both similarities among the control soil communities are maintained during our observation time of 24 days, meaning that the stationary character of the communities under no external inputs is well captured by these macroscopic measures.
When the nutrition inputs are applied, the immediate response of the communities appears as significant drops in both similarities.
The typical temporal evolutions of the two similarities at later times are, however, different from each other.
The ℬ similarities of the perturbed soils to the original communities universally show a recovery in time. The recovery is observed even under the continuous nutrition input.
Therefore this measure universally characterizes the robust (or homeostatic) nature of the soil communities.
In 𝒞, on the other hand, an opposite character of the soil's response is detected. The 𝒞 of the perturbed soils generally shows a further decrease in time, in the middle of the recovery period of the ℬ similarity (10, 17, and 24 days after the onset of the nutrition inputs).
This further decrease is universal among the different nutrients and input schedules, while the continuous input tends to result in a more monotonic decay.
Therefore the plastic (or adaptive) aspect of the soil communities is also well captured by the 𝒞 similarity.
Most of the attempts which have so far been made to disentangle the dynamics of complex microbial community structures rely on beta-diversity metrics between spatially/temporally distant communities. The concept of beta-diversity appears to be two-fold. While it was originally proposed as a between-community difference in the presence/absence of each species <cit.>, currently popular metrics such as Bray-Curtis <cit.> and weighted UniFrac distances <cit.> bear the information on the quantitative balances between community members.
The two indices we have proposed here are approximately congruent with these beta-diversity-based distances, in the sense that both ℬ and 𝒞 are reflected in them.
This can be seen in the present data: the drastic changes the soil bacterial communities underwent after the initial perturbations (i.e. between days 0 and 3) are visible in the beta-diversity measures (Fig. <ref> in SI). However, in these measures, the rapid recovery in species balance (i.e. the change in ℬ) has overwhelmed and masked the long-lasting effect on the change in microbial community members (i.e. the change in 𝒞). It is also notable that the contrast between the two schemes of nutritional input schedule, one-time and four-time, is obscure in ℬ (and in canonical beta-diversity) but clearly highlighted by 𝒞.
Therefore, by decomposing the canonical beta-diversities into two features by ℬ and 𝒞, otherwise mingled differences between incubation conditions are better illuminated.
For understanding the robustness and plasticity of interacting communities in a more general sense, the present finding can be regarded as a good guiding principle.
Let us consider constructing a dynamical community model under the constraint that it explains the coexistence of the observed robustness and plasticity. As is evident from the large change in the OTU-RSA shape, the homeostatic response to the nutrition input we have observed via ℬ is beyond a linear (local) stability argument. Therefore, the observed trajectory should be explained as a transition from one attractor (a stable state) to another, triggered by the change of the nutrition parameter or by a large disturbance in the population distribution. A challenge in such an approach is to reproduce the order of the response: the recovery in the OTU balance takes place first, while the dynamics of the OTU composition change is slower.
Another possibility to explain the robustness of the OTU-RSA, while keeping the degree of freedom of change in the OTU composition, would be a more statistical (entropic) mechanism under some hidden microbial constraints. The neutral theory is a good candidate for this, though it is not straightforward to test it against the observed response. This is because the environment in this study is dynamically changing, and hence the key parameters of the framework, such as the birth and death rates and the dispersal of each OTU, are not constant. Even if the effect of the nutrition input were nicely modeled by an abrupt change in such parameters, reproducing the responses at later times would remain a difficult task.
Finally, the present finding is also meaningful in the agricultural context. It is important in agriculture to put organic matter such as compost into the soil to maintain soil quality. Fertilizers as well are basically essential for producing crops on agricultural field soil. Giving a carbon and/or nitrogen source to soil in this way can be regarded as a disturbance of the soil ecosystem. It was reported that fertilization had a greater effect on soil bacterial community structure than crop rotation and growth stage (Guo et al. 2020). It has also been shown that soil organic carbon and total nitrogen are two of the major determinants of community composition (Li et al. 2017). The disturbance of the shape of the OTU-RSA curve for the soil amended with carbon and nitrogen sources (glucose and ammonium chloride, respectively) (Fig. 1) clearly indicates rapid changes in the bacterial community structure. In addition, when combined with traditional analyses such as non-metric multidimensional scaling (NMDS) (SI Fig. 4), the overall picture of the changes that occur in the community structure becomes easier to understand.
§ METHODS
§.§ Sample Preparation and Primary Sequence Analysis
Nucleotide sequence files registered in DDBJ/ENA/GenBank under the accession numbers DRR157393-157518 (DRA007564) and DRR169428-169475 (DRA008081) were retrieved. These sequences were generated by amplicon sequencing targeting 16S rRNA genes in soil microcosms amended with labile carbon and nitrogen sources (partly described in <cit.>). These genes are the commonly-used "marker" for the identification of prokaryotes. The soil microcosm incubation experiment was designed in a full-factorial manner: besides control samples without nutritional amendment, eight treatment groups, differing in carbon source (glucose or cellobiose), nitrogen source (ammonium chloride or urea), and timing(s) of nutritional amendment (weekly amendment over the experiment period or abrupt amendment of quadruple amount at the beginning of incubation), were prepared (Table 1). The total amount of carbon and nitrogen input was the same in all groups. Twelve microcosms were constructed for each group, three of which were destructively sampled after 3, 10, 17, and 24 days of incubation. Additionally, the pre-incubation (i.e. day 0) soil was examined in triplicate. Sampled soil was subjected to total DNA extraction, purification, amplification of partial 16S rRNA gene sequence, and Illumina high-throughput sequencing as described previously <cit.>.
Low-quality (i.e. expected errors of 0.5 bases or more) sequences were removed using USEARCH v9.2.64 <cit.>. Subsequently singleton sequences were removed, and the remaining sequences were clustered into operational taxonomic units (OTUs) at a similarity threshold of 97% or more using UPARSE <cit.>. The OTU representative sequences (i.e. most frequently-observed sequence within those clustered into the OTU) were subjected to UCHIME2 <cit.> to filter out chimera-like OTUs. Finally, all the quality-filtered sequences, including singletons, were mapped to OTU representative sequences using USEARCH v9.2.64. To detect organelle-like OTUs, the OTU representative sequences were taxonomically annotated by RDP classifier <cit.> trained with Greengenes 13_8 clustered at 97% <cit.>. OTUs annotated as "family mitochondria" or "class Chloroplast" were precluded from further analyses.
§.§ Species Abundance Distributions
To characterize the SAD, we compare the fittings by log-normal distribution
P^LN (μ, σ; x) = 1/( σ x √(2π) ) exp{ - (ln x - μ)^2/(2σ^2) },
power-law (Zipf/Pareto) distribution
P^PL (α, θ; x) = α θ^α/(x+θ)^(α+1),
and the negative binomial distribution
P^NB (s, p; x) = Γ(x+s)/( Γ(x+1) Γ(s) ) p^s (1-p)^x,
where x denotes the OTU abundance.
The obtained fitting parameter values,
lnμ ∼ lnσ ∼ 1.5 for the log-normal distribution,
shape parameter 1.0 < α < 1.5 for the Pareto distribution,
and size 0 < s < 0.75 and mean p ∼ 0.01 for the negative binomial distribution,
consistently indicate that the OTU-SAD of the soil microbe ecosystem before any artificial nutrition input is very broad, i.e. critical or fat-tailed (see SI for detail).
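The maximum-likelihood fits and their comparison by AIC can be sketched as follows (an illustrative approximation of our own using SciPy's continuous log-normal and Lomax (Pareto type II) distributions, which match P^LN and P^PL above; this is not the exact fitting pipeline of the study):

```python
import numpy as np
from scipy import stats

def compare_sad_fits(abundances):
    """Fit two candidate SADs by maximum likelihood and return their AICs,
    AIC = 2k - 2 ln L, with k = 2 parameters for both distributions."""
    x = np.asarray(abundances, dtype=float)
    aic = {}
    s, loc, scale = stats.lognorm.fit(x, floc=0)    # log-normal, loc fixed at 0
    aic['log-normal'] = 4 - 2 * stats.lognorm.logpdf(x, s, loc, scale).sum()
    c, loc, scale = stats.lomax.fit(x, floc=0)      # Lomax pdf = alpha*theta^alpha/(x+theta)^(alpha+1)
    aic['power law'] = 4 - 2 * stats.lomax.logpdf(x, c, loc, scale).sum()
    return aic                                      # smaller AIC = better fit
```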
§ ACKNOWLEDGEMENT
TS was partly supported by JSPS KAKENHI grant number JP18K03449 and JP23K03256.
TS and SO were partly supported by JSPS KAKENHI grant number JP17K19220.
KMo was partly supported by JSPS KAKENHI Grant Number JP20K19885.
§ SUPPLEMENTARY INFORMATION
§.§ NMDS
§.§ ℬ similarity to control soils
§.§ Similarity of the control soils in OTU composition
§.§ Characteristics of RSA distributions
§.§ Sample Accession ID
|
http://arxiv.org/abs/2409.02402v1 | 20240904030337 | Heisenberg-limit spin squeezing with spin Bogoliubov Hamiltonian | [
"Jun Zhang",
"Sheng Chang",
"Wenxian Zhang"
] | cond-mat.quant-gas | [
"cond-mat.quant-gas",
"physics.optics",
"quant-ph"
] |
Key Laboratory of Artificial Micro- and Nano-structures of Ministry of Education,
and School of Physics and Technology, Wuhan University, Wuhan, Hubei 430072, China
Key Laboratory of Artificial Micro- and Nano-structures of Ministry of Education,
and School of Physics and Technology, Wuhan University, Wuhan, Hubei 430072, China
[Corresponding email:][email protected]
Key Laboratory of Artificial Micro- and Nano-structures of Ministry of Education,
and School of Physics and Technology, Wuhan University, Wuhan, Hubei 430072, China
Wuhan Institute of Quantum Technology, Wuhan, Hubei 430206, China
§ ABSTRACT
It is well established that the optimal spin squeezing under a one-axis-twisting Hamiltonian follows a scaling law of J^-2/3 for J interacting atoms after a quench dynamics. Here we prove analytically and numerically that the spin squeezing of the ground state of the one-axis-twisting Hamiltonian actually reaches the Heisenberg limit J^-1. By constructing a bilinear Bogoliubov Hamiltonian with the raising and lowering spin operators, we exactly diagonalize the spin Bogoliubov Hamiltonian, which includes the one-axis-twisting Hamiltonian as a limiting case. The ground state of the spin Bogoliubov Hamiltonian exhibits wonderful spin squeezing, which approaches the Heisenberg limit in the case of the one-axis-twisting Hamiltonian. It is possible to realize experimentally the spin-squeezed ground state of the one-axis-twisting Hamiltonian in dipolar spinor condensates, ultracold atoms in optical lattices, spins in a cavity, or alkali atoms in a vapor cell.
Heisenberg-limit spin squeezing with spin Bogoliubov Hamiltonian
Wenxian Zhang
September 9, 2024
================================================================
§ INTRODUCTION
Development of methods for realizing squeezed spin states (SSSs) or entangled spin states has been a vigorous frontier in quantum-enhanced precision measurement for more than three decades <cit.>, because such strongly correlated quantum states may lead to significant improvement in atomic clocks <cit.>, optical or atomic interferometers <cit.>, frequency standards <cit.>, and gravitational wave detection <cit.>. The uncertainty principle predicts that the relative measurement precision reaches the standard quantum limit (SQL), J^-1/2, for J uncorrelated particles in a quantum interferometer, and the Heisenberg limit (HL), J^-1, for some special squeezed or entangled quantum states <cit.>. Generating atomic spin squeezing has been proposed and demonstrated with a variety of methods, including quantum nondemolition measurements <cit.>, squeezing transfer <cit.>, and dynamical evolution <cit.>, in systems such as atom-cavity interaction systems <cit.>, interacting trapped ions <cit.>, and Bose-Einstein condensates (BECs) <cit.>.
Nonlinear atomic interaction induces spin squeezing after a quench dynamics from a coherent spin state (CSS). It is generally believed that the spin squeezing parameter of J interacting atoms may achieve J^-2/3 under a one-axis-twisting (OAT) Hamiltonian and J^-1 under a two-axis-twisting (TAT) Hamiltonian <cit.>. However, the SSSs generated with the dynamical evolution slip away quickly after the optimal time. The long-lived ground SSSs are thus desired and have been explored recently in a spinor BEC by Chapman's group <cit.> and in a trapped-atom clock system by Reichel's group <cit.>.
We investigate in this Letter the ground SSS of a spin Bogoliubov Hamiltonian. By constructing a bilinear Bogoliubov Hamiltonian with spin operators, which includes the OAT Hamiltonian as a limiting case, we prove analytically and numerically that the spin squeezing of the ground SSS of the spin Bogoliubov Hamiltonian (as well as the OAT Hamiltonian) reaches the HL J^-1 with a weak external field. Our theoretical results may be realized potentially in many experiments, such as dipolar spinor BEC <cit.>, ultracold atoms in an optical lattice<cit.>, and spins in a cavity or in a vapor cell <cit.>.
§ LMG MODEL
The Lipkin-Meshkov-Glick (LMG) model, originally introduced in nuclear physics in 1965, has been widely studied in many quantum spin systems, such as spinor BECs, Rydberg-dressed atomic gases, and cold atoms in optical lattices <cit.>. The entanglement properties as well as the spin squeezing of this model in its ground state have been discussed in much of the literature with mean-field theory <cit.>, by treating the quantum effect as small fluctuations and using the Holstein-Primakoff transformation in the thermodynamic limit <cit.>. Different from previous methods, we explore an exact analytical approach to diagonalize the Hamiltonian for certain parameters and focus on the spin-squeezed ground state.
The Hamiltonian of the LMG model reads
H=η(J_x^2+γ J_y^2)+Ω J_z,
where J_x,y,z are collective spin operators, η the interaction strength, γ the anisotropic parameter, and Ω the effective external field. At zero external field Ω = 0, this Hamiltonian becomes an OAT Hamiltonian H_OAT∝ J_x,z^2 if γ=0, 1 and a TAT Hamiltonian H_TAT∝ (J_x^2-J_y,z^2) if γ = -1, 1/2, after dropping the constant term with 𝐉^2 = J_x^2 + J_y^2 + J_z^2.
§ CONSTRUCTION OF BOGOLIUBOV HAMILTONIAN
To diagonalize the Hamiltonian Eq. (<ref>), we construct a diagonal “Bogoliubov” Hamiltonian H_B through the Bogoliubov transformation of the spin lowering/raising operators J_±=J_x ± iJ_y, inspired by the concept of squeezed light <cit.>. We prove that the constructed Bogoliubov Hamiltonian H_B is a special case of the LMG model Eq. (<ref>).
Spin lowering and raising operators obey the canonical commutation relation,
[J_+, J_-]=2J_z and [J_±, J_z]= ∓ J_±. A Dicke state |J,M⟩ (M=-J,-J+1,⋯, J) is the eigenstate of the spin operator J_z, satisfying the relation J_z|J,M⟩=M|J,M⟩. Since J_z=1/2(J_+J_--J_-J_+), a Dicke state is
|J,M⟩ = 1/(J+M)! [ (2J)!/( (J+M)!(J-M)! ) ]^-1/2 J_+^J+M|J,-J⟩.
Unlike the annihilation operator â of a photon, the spin lowering operator J_- has only one eigenstate |J,-J⟩, which is the vacuum state and satisfies J_-|J,-J⟩=0.
Similar to generalized creation and annihilation operators, we define the following generalized spin lowering and raising operators,
A_+=μ^* J_+-ν J_-, A_-=A_+^†=μ J_--ν^∗J_+.
To maintain the commutation relation [J_+, J_-]=[A_+, A_-], we require
|μ|^2-|ν|^2=1. We construct the following diagonal Hamiltonian using A_±, the Bogoliubov Hamiltonian,
H_B = A_+A_-.
By assuming μ=coshθ and ν=e^iφ sinhθ, one finds
H_B = J_z + cosh2θ (J_x^2+J_y^2) - cosφ sinh2θ (J_x^2-J_y^2) + (i/2) sinφ sinh2θ (J^2_+ - J^2_-).
When φ=π, we obtain a specific spin Bogoliubov Hamiltonian as
H_B = J_z + cosh2θ(J_x^2+J_y^2) + sinh2θ(J_x^2-J_y^2).
It is obvious that H_B in Eq. (<ref>) is a special case of the LMG Hamiltonian Eq. (<ref>), where γ = exp(-4θ) and η/Ω = exp(2θ) must be satisfied. For a large enough θ, the Bogoliubov Hamiltonian becomes effectively the OAT Hamiltonian, H_OAT = exp(2θ) J_x^2.
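The operator identity behind this reduction is easy to check numerically in the Dicke basis. The following minimal sketch (our own illustration; a small J is used for speed) builds A_+ for φ=π and verifies that H_B = A_+A_- coincides with J_z + e^{2θ}J_x^2 + e^{-2θ}J_y^2:

```python
import numpy as np

def spin_ops(J):
    """Collective spin matrices in the Dicke basis |J,M>, M = J, ..., -J."""
    M = np.arange(J, -J - 1, -1)
    mp = M[1:]                                    # states raised by J_+
    Jp = np.diag(np.sqrt((J - mp) * (J + mp + 1)), 1)
    return Jp, Jp.T, np.diag(M.astype(float))

J, theta = 10, 0.7
Jp, Jm, Jz = spin_ops(J)
Jx, Jy = (Jp + Jm) / 2, (Jp - Jm) / 2j

Ap = np.cosh(theta) * Jp + np.sinh(theta) * Jm    # A_+ with phi = pi (nu = -sinh)
HB = Ap @ Ap.conj().T                             # H_B = A_+ A_-
H_lmg = Jz + np.exp(2*theta) * Jx @ Jx + np.exp(-2*theta) * Jy @ Jy
print(np.abs(HB - H_lmg).max())                   # ~1e-13: the two agree
```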
§.§ Spin vacuum state
The ground state of the Bogoliubov Hamiltonian H_B is the eigenstate of the operator A_- with zero eigenvalue, which is a spin vacuum state satisfying
A_-|χ⟩ = 0.
This state can be expanded in the Dicke basis as |χ⟩=∑_M=-J^JC_M |J,M⟩. By substituting the definition of A_-, we find the following recursion relation
C_M+2=ν^∗/μC_M√((J-M)(J+M+1)/(J-M-1)(J+M+2)).
It is straightforward to calculate the general formula in terms of the coefficient C_-J,
C_-J+2K = (ν^∗/μ)^KJ!/(J-K)!K!√((2J-2K)!(2K)!/(2J)!)C_-J,
K=0,1,⋯,J. Since ⟨J,J|A_-|χ⟩=0 implies C_J-1=0, it follows that C_-J+2K+1=0 for K=0,1,⋯,J-1. By further utilizing the normalization condition ∑_K=0^J|C_-J+2K|^2=1, we obtain
C_-J = 1/√(ℱ(
1/2,-J,1/2-J, |ν/μ|^2))
where ℱ[a,b,c,x] represents the hypergeometric function.
§.§ Spin squeezing of the spin vacuum state
Similar to a photon squeezed state, the spin vacuum state is also spin squeezed. To characterize the spin squeezing, two kinds of spin squeezing parameters were introduced by Kitagawa et al. and Wineland et al. respectively <cit.>,
ξ_S^2=(Δ J_n_⊥)^2/(J/2), ξ_R^2=2J(Δ J_n_⊥)^2/|⟨J⃗⟩ |^2
where the subscript n_⊥ refers to the axis perpendicular to the mean spin ⟨J⃗⟩ along which the minimum of (Δ J_n_⊥)^2 is obtained. The inequality ξ_i^2<1 (i=S,R) indicates that the state is spin squeezed, compared to a CSS with ξ_i^2 = 1. The relation between the two squeezing parameters is ξ_R^2=(J/|⟨J⃗⟩|)^2 ξ_S^2. Since J ≥ |⟨J⃗⟩| always holds, one finds ξ_S^2 ≤ ξ_R^2 <cit.>. The equality is attained at J=|⟨J⃗⟩|, when a CSS is considered.
Δϕ = (Δ J_n_⊥)/|⟨J⃗⟩ |
=ξ_R/√(J).
Obviously, the phase sensitivity of a CSS is Δϕ=1/√(J), which is denoted as the SQL <cit.>. By contrast, an SSS is expected to achieve a phase sensitivity below the SQL but above the HL, i.e., 1/J ≤ Δϕ < 1/√(J) <cit.>. It follows immediately that 1/J ≤ ξ_R^2 < 1. On the other hand, ξ_S^2 may approach zero, not being constrained by the HL. For instance, a Dicke state |J,M ≠ 0⟩ has a constant spin size J and zero spin variance along the z-direction, resulting in ξ_S^2=0.
To calculate the parameters ξ_S^2 and ξ_R^2 of the spin vacuum state, we need to find the spin average ⟨ J_n⟩ and the minimal variance perpendicular to the mean-spin direction (MSD), i.e. Δ J_n_⊥. Since the operators J_x,y are linear combinations of A_±, it follows immediately that ⟨χ|J_x,y|χ⟩ = 0. By further employing the commutator [A_+,A_-]=2J_z, we find that ⟨χ|J_z|χ⟩ ≠ 0; thus the MSD of the state |χ⟩ is along the z-direction. The minimal variance must lie in the x-y plane, and the covariance matrix is defined as
Γ_xy = ( ⟨ J^2_x⟩ , Cov(J_x,J_y) ; Cov(J_x,J_y) , ⟨ J^2_y⟩ ),
with Cov(J_x,J_y) = ⟨ [J_x, J_y]_+⟩/2 - ⟨ J_x⟩⟨ J_y⟩ being the covariance between J_x and J_y, and
[X, Y]_+=XY+YX denoting the anti-commutator. The eigenvalues of the covariance matrix are
λ_± = 1/2[⟨ J_x^2+J_y^2⟩±√(⟨ J_x^2-J_y^2⟩^2+4Cov(J_x,J_y)^2) ].
After a straightforward calculation the eigenvalues are simplified as
λ_±=1/2[(F-K)±√((F-K)^2-G^2)]
where F=J(J+1), G=⟨ J_z⟩ and K=⟨ J^2_z⟩. Obviously we find that min (Δ J_n_⊥)^2 =λ_- <cit.>. The squeezing parameters become
ξ_S^2 = 2λ_-/J,
ξ_R^2 = 2Jλ_-/|⟨ J_z⟩|^2 = J/(2λ_+).
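These expressions make the evaluation of the ground-state squeezing fully explicit: one builds the coefficients C_M from the recursion relation, then computes G, K, and λ_±. A minimal numerical sketch (our own illustration, with φ=π so that ν^*/μ = -tanhθ) is:

```python
import numpy as np

def ground_state_squeezing(J, theta):
    """xi_S^2 and xi_R^2 of the spin vacuum state |chi>."""
    M = np.arange(-J, J + 1)
    C = np.zeros(2 * J + 1)
    C[0] = 1.0                                    # unnormalized C_{-J}
    r = -np.tanh(theta)                           # nu*/mu at phi = pi
    for k in range(0, 2 * J - 1, 2):              # index k <-> M = -J + k
        m = -J + k
        C[k + 2] = r * C[k] * np.sqrt((J - m) * (J + m + 1)
                                      / ((J - m - 1) * (J + m + 2)))
    C /= np.linalg.norm(C)                        # normalization fixes C_{-J}
    F, G, K = J * (J + 1), np.sum(C**2 * M), np.sum(C**2 * M**2)
    root = np.sqrt((F - K)**2 - G**2)
    lam_minus, lam_plus = ((F - K) - root) / 2, ((F - K) + root) / 2
    return 2 * lam_minus / J, J / (2 * lam_plus)  # xi_S^2, xi_R^2

print(ground_state_squeezing(1000, 3.0))          # xi_R^2 approaches 1/J
```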
In the limit θ→∞, we approximate (ν^*/μ)^K ≈ 1-2Kexp(-2θ) and the spin average becomes <cit.>
⟨ J_z⟩≈ -J^2 exp(-2θ).
Accordingly, the eigenvalues of the covariance matrix are
λ_+ ≈ J^2/2 and
λ_- ≈ (J^2/2) exp(-4θ).
Finally, we find that the squeezing parameters are
ξ_R^2 ≈ 1/J, ξ_S^2≈ J exp(-4θ).
In addition, the Bogoliubov Hamiltonian in Eq. (<ref>) reduces effectively to the OAT Hamiltonian in the limit θ→∞. Therefore, we prove that the spin squeezing of the ground state of an OAT Hamiltonian approaches the HL, surpassing the constraint of ∝ J^-2/3 for dynamical evolution governed by the same Hamiltonian. Moreover, the resulting ground state of H_B is categorized as a generalized intelligent state (GIS), which minimizes the Robertson-Schrödinger uncertainty relation <cit.>. It is straightforward to find that λ_+λ_-=G^2/4=⟨ J_z⟩^2/4, indicating that |χ⟩ is an intelligent state <cit.>.
We present in Fig. <ref>(a) the squeezing parameters ξ_R^2 and ξ_S^2, as well as the spin average ⟨ J_z⟩/J, of the state |χ⟩ for a spin size J=1000. These quantities vary with the parameter θ. The coefficients C_M of the ground state |χ⟩ are analytically calculated using the recursion relation Eq. (<ref>) and the coefficient C_-J. It is then straightforward to calculate the spin average ⟨χ| J_z|χ⟩ and the spin squeezing parameters ξ_R^2 and ξ_S^2. We also present the spin squeezing parameters ξ_R^2 and ξ_S^2 of the optimal SSS, which is generated numerically by evolving the system from an initial CSS |J,J⟩ under the TAT Hamiltonian H_TAT = J_x^2-J_y^2 <cit.>. As a comparison, we plot in Fig. <ref>(b) the same quantities of the ground state of a special anisotropic LMG Hamiltonian with γ =0, H_A = η J_x^2 + Ω J_z. As η≫Ω, the Hamiltonian H_A effectively reduces to the OAT Hamiltonian. Because it is challenging with analytical method, the ground state of H_A is calculated numerically for J=1000.
As shown in Fig. <ref>(a), the spin squeezing parameter ξ_R^2 approaches to the HL quickly in the large θ region (θ > 3). The parameter ξ_S^2 and the spin average ⟨ J_z⟩ decrease exponentially to zero approximately described by Eqs. (<ref>) and (<ref>), respectively. In the small θ region (θ < 3), the spin average is almost a constant while the spin squeezing parameters ξ_S,R^2 decrease exponentially. This feature is especially useful for quantum enhanced metrology <cit.>. At θ≈ 3, the squeezing parameters ξ_S,R^2 of the ground state |χ⟩ and of the optimal SSS of H_TAT are close to the HL. On the other hand, the spin averages are quite different, ⟨χ| J_z |χ⟩ = -0.89J and |⟨ SSS| J_z |SSS⟩ = 0.59J. Similarly in Fig. <ref>(b), the spin squeezing parameters ξ_R^2 also approaches to the HL as η/Ω becomes large. Both panels (a) and (b) indicate that the spin squeezing of the ground state of the OAT Hamiltonian is at the HL, in stark contrast to earlier understanding <cit.>.
A typical ground state of the Bogoliubov Hamiltonian |χ⟩ is presented in Fig. <ref>(a) for θ=3. As a comparison, the optimal spin squeezed state |SSS⟩ under the TAT Hamiltonian is also shown. Both |χ⟩ and |SSS⟩ are real functions. Besides the odd-M coefficients C_M being zero, the even-M coefficients of |SSS⟩ are positive while those of |χ⟩ are alternately positive and negative. Clearly, these two states are very different, though the spin squeezing parameters ξ_R^2 of both are close to the HL. As shown in the inset, the state |χ⟩ at θ = 9 is similar to a Dicke state |ψ_D⟩ with J_x|ψ_D⟩ = 0. We then plot the infidelity between the states |χ⟩ and |ψ_D⟩ in Fig. <ref>(b), with 1-F = 1-|⟨χ|ψ_D⟩|^2. As θ increases to a large value, the infidelity approaches zero exponentially, indicating that the spin-squeezed ground state |χ⟩ of the OAT Hamiltonian is in fact a Dicke state.
Here we note that the spin squeezing parameter ξ_R^2 of the Dicke state was previously considered ill defined because its spin average is zero, and other parameters, like the Dicke squeezing parameter and the Mach-Zehnder phase sensitivity <cit.>, were introduced to characterize the strong entanglement of the state. These parameters exhibit Heisenberg scaling. However, the spin squeezing parameter ξ_R^2 is in fact well defined and converges to the Heisenberg limit 1/J, even though the spin average is zero <cit.>. Such a finite (nonzero) spin squeezing parameter ξ_R^2 is due to the exactly identical asymptotic behavior of the numerator, 2J(Δ J_n_⊥)^2, and the denominator, |⟨ J_z⟩|^2, as exp(-2θ) approaches zero (see Eqs. (<ref>-<ref>)). In addition to investigating the squeezing and entanglement properties of squeezed states, one may also employ many-body correlators to explore the many-body non-locality of such states <cit.>.
§ POSSIBLE EXPERIMENTAL PLATFORMS
There are a variety of experimental platforms that could potentially realize the HL spin-squeezed ground state |χ⟩. First, a dipolar spinor BEC is an ideal test bed of the LMG model, where the atoms macroscopically occupy internal hyperfine states that can be treated as collective spin states. Under the single-mode approximation, the spin-dependent effective Hamiltonian of a condensate with magnetic dipole-dipole interaction in an external magnetic field along the z-direction is <cit.>
H_e= -D J_z^2+E(J_x^2-J_y^2)+Ω J_z,
where 𝐉=∑_αβ â_α^† 𝐅_αβ â_β is the collective condensate spin operator and J_η (η=x, y, z) its η-component. The constants D and E depend on the density and geometry of the condensate, and Ω on the external magnetic field. By tuning the anisotropic parameter γ=(D-E)/(D+E) and setting the field Ω = 0, this Hamiltonian takes the form of the OAT model, i.e. H_OAT ≈ -D J_z^2 for γ ≈ 1 and H_OAT ≈ 2E J_x^2 for γ ≈ 0, and of the TAT model H_TAT ≈ E(J_x^2-J_y^2) for γ = -1, or H_TAT ≈ 2E(J_x^2-J_z^2) for γ = 1/2. We have omitted the constant term proportional to 𝐉^2.
Second, the nonlinear atomic interaction in a two-component BEC is described by an OAT Hamiltonian H_OAT=ηJ̃_̃z̃^2, where J̃_x,y,z is the pseudo-spin operator constituted by two different modes of the condensate and η the inter-mode interaction strength. The system can be regarded as a BEC with atoms in two internal hyperfine (or Zeeman) states or a BEC in a double-well potential. The interaction between the pseudo-spin modes in both cases is finely tunable in experiments <cit.>.
Third, both OAT and TAT Hamiltonians may be experimentally realized in a system of spins in a cavity, where the interaction between the spins and the cavity is described by the Hamiltonian H_TC=ω_m â^†â+ω_B J_z+g(â^†J_-+âJ_+), where â (â^†) is the annihilation (creation) operator of the cavity field with mode frequency ω_m, J_α (α=z,±) the collective spin operator, ω_B the Zeeman splitting of the spins, and g the interaction strength. After the Schrieffer-Wolff transformation e^R H e^-R with R=(g/Δ_m)(â^†J_--âJ_+), Δ_m=|ω_B-ω_m|, the Hamiltonian of the spin part is approximated by H_OAT ≈ (2g^2/Δ_m)J_z^2, indicating that the effective spin interaction is mediated by the cavity mode <cit.>. When the system is subject to a parametric two-photon driving, the spin interaction term becomes a TAT Hamiltonian H_TAT ≈ [2λ g^2/(Δ_mΔ_c)] (J_x^2-J_y^2), with λ being the driving amplitude and Δ_c the detuning between the cavity and driving frequencies <cit.>.
Fourth, ultracold atoms loaded in an optical lattice are described by the Bose (Fermi)-Hubbard model, in which the site-dependent spin operators can be reduced to a collective spin operator in the Mott phase. Both OAT and TAT Hamiltonians may be induced by applying an additional weak light to the system <cit.>.
The effective Hamiltonian depends on the phase ϕ of the light and becomes an OAT Hamiltonian H_OAT ≈ ∓ħχ_ϕJ_x,z^2 for ϕ=π, 2nπ/N with n=±1,±2,⋯,±(N/2-1), where N is the atom number and χ_ϕ the effective coupling strength. When the system is driven by two lights, the effective Hamiltonian becomes a TAT model H_TAT ≈ ħχ_ϕJ_z^2-ħχ_πJ_x^2. This approach is also suitable for the preparation of squeezed states in Rydberg atom arrays
<cit.>.
Fifth, for two species of atoms coupled through the dipole-dipole interaction in a vapor cell (represented by their collective spin operators S and J), periodically driving the system can transform the inter-species dipolar interaction into a nonlinear intra-species interaction, resulting in both effective OAT and TAT Hamiltonians <cit.>. For a continuous driving, the OAT Hamiltonian is realized as H_OAT=χ_eff S_z^2, where χ_eff=-g^2/(2Δ_f), with g being the dipolar coupling strength between the two species and Δ_f the difference between the magnitudes of the two external DC fields applied to the two species of spins, respectively. Realization of a TAT Hamiltonian needs an extra AC field applied to the S species, which yields H_TAT=χ_eff(S_x^2-S_y^2).
§ CONCLUSION
We construct a diagonal bilinear spin Bogoliubov Hamiltonian, which includes the one-axis-twisting Hamiltonian as a limiting case, by employing the spin lowering and raising operators. We prove analytically and numerically that the ground state of the spin Bogoliubov Hamiltonian (and of the one-axis-twisting Hamiltonian) exhibits Heisenberg-limit spin squeezing, ξ_R^2 ∝ J^-1, for an arbitrary spin J in a certain parameter regime, in contrast to the previous scaling ξ_R^2 ∝ J^-2/3 under the one-axis-twisting Hamiltonian with quench dynamics. Such a spin-squeezed ground state may be realized experimentally in dipolar spinor BECs, ultracold atoms in an optical lattice, and spins in a cavity or in a vapor cell.
This work is supported by the National Natural Science Foundation of China (Grant No. 12274331), Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0302100), and the NSAF (Grant No. U1930201). The numerical calculations in this paper have been partially done on the supercomputing system in the Supercomputing Center of Wuhan University.
[Wineland1992] D. J. Wineland, J. J. Bollinger, W. M. Itano, F. L. Moore, and D. J. Heinzen, Phys. Rev. A 46, R6797 (1992).
[Kitagawa1993] M. Kitagawa and M. Ueda, Phys. Rev. A 47, 5138 (1993).
[Wineland1994] D. J. Wineland, J. J. Bollinger, W. M. Itano, and D. J. Heinzen, Phys. Rev. A 50, 67 (1994).
[Leroux2010atomclock] I. D. Leroux, M. H. Schleier-Smith, and V. Vuletić, Phys. Rev. Lett. 104, 250801 (2010).
[Ludlow2015RMP] A. D. Ludlow, M. M. Boyd, J. Ye, E. Peik, and P. O. Schmidt, Rev. Mod. Phys. 87, 637 (2015).
[Hosten2016Nature] O. Hosten, N. J. Engelsen, R. Krishnakumar, and M. A. Kasevich, Nature 529, 505 (2016).
[Pedrozo2020Nature] E. Pedrozo-Peñafiel, S. Colombo, C. Shu, A. F. Adiyatullin, Z. Li, E. Mendez, B. Braverman, A. Kawasaki, D. Akamatsu, Y. Xiao, and V. Vuletić, Nature 588, 414 (2020).
[Schulte2020NC] M. Schulte, C. Lisdat, P. O. Schmidt, U. Sterr, and K. Hammerer, Nat. Commun. 11, 5955 (2020).
[Szigeti2020PRL] S. S. Szigeti, S. P. Nolan, J. D. Close, and S. A. Haine, Phys. Rev. Lett. 125, 100402 (2020).
[Genovese2021AVS] M. Genovese, AVS Quantum Sci. 3, 044702 (2021).
[Huelga1997PRL] S. F. Huelga, C. Macchiavello, T. Pellizzari, A. K. Ekert, M. B. Plenio, and J. I. Cirac, Phys. Rev. Lett. 79, 3865 (1997).
[Shaniv2018PRL] R. Shaniv, T. Manovitz, Y. Shapira, N. Akerman, and R. Ozeri, Phys. Rev. Lett. 120, 243603 (2018).
[McKenzie2002PRL] K. McKenzie, D. A. Shaddock, D. E. McClelland, B. C. Buchler, and P. K. Lam, Phys. Rev. Lett. 88, 231102 (2002).
[Aasi2013NPhot] J. Aasi et al., Nat. Photonics 7, 613 (2013).
[RMPSmerzi] L. Pezzè, A. Smerzi, M. K. Oberthaler, R. Schmied, and P. Treutlein, Rev. Mod. Phys. 90, 035005 (2018).
[Kuzmich_1998] A. Kuzmich, N. P. Bigelow, and L. Mandel, Europhys. Lett. 42, 481 (1998).
[Hald1999PRL] J. Hald, J. L. Sørensen, C. Schori, and E. S. Polzik, Phys. Rev. Lett. 83, 1319 (1999).
[Lukin2000PRL] M. D. Lukin, S. F. Yelin, and M. Fleischhauer, Phys. Rev. Lett. 84, 4232 (2000).
[Huang2021NPJ] L.-G. Huang, F. Chen, X. Li, Y. Li, R. Lü, and Y.-C. Liu, npj Quantum Inf. 7, 168 (2021).
[Ueda2003PRA] H. Saito and M. Ueda, Phys. Rev. A 68, 043820 (2003).
[Takahashi2005PRL] M. Takeuchi, S. Ichihara, T. Takano, M. Kumakura, T. Yabuzaki, and Y. Takahashi, Phys. Rev. Lett. 94, 023003 (2005).
[Leroux2010PRL] I. D. Leroux, M. H. Schleier-Smith, and V. Vuletić, Phys. Rev. Lett. 104, 073602 (2010).
[Wineland1996PRL] D. M. Meekhof, C. Monroe, B. E. King, W. M. Itano, and D. J. Wineland, Phys. Rev. Lett. 76, 1796 (1996).
[Justin2016Science] J. G. Bohnet, B. C. Sawyer, J. W. Britton, M. L. Wall, A. M. Rey, M. Foss-Feig, and J. J. Bollinger, Science 352, 1297 (2016).
[Esteve2008] J. Estève, C. Gross, A. Weller, S. Giovanazzi, and M. K. Oberthaler, Nature 455, 1216 (2008).
[Maussang2010] K. Maussang, G. E. Marti, T. Schneider, P. Treutlein, Y. Li, A. Sinatra, R. Long, J. Estève, and J. Reichel, Phys. Rev. Lett. 105, 080403 (2010).
[Riedel2010] M. F. Riedel, P. Böhi, Y. Li, T. W. Hänsch, A. Sinatra, and P. Treutlein, Nature 464, 1170 (2010).
[Hamley2012] C. D. Hamley, C. S. Gerving, T. M. Hoang, E. M. Bookjans, and M. S. Chapman, Nat. Phys. 8, 305 (2012).
[Berrada2013] T. Berrada, S. van Frank, R. Bücker, T. Schumm, J.-F. Schaff, and J. Schmiedmayer, Nat. Commun. 4, 2077 (2013).
[Strobel2014Science] H. Strobel, W. Muessel, D. Linnemann, T. Zibold, D. B. Hume, L. Pezzè, A. Smerzi, and M. K. Oberthaler, Science 345, 424 (2014).
[Dicke2014PRL] B. Lücke, J. Peise, G. Vitagliano, J. Arlt, L. Santos, G. Tóth, and C. Klempt, Phys. Rev. Lett. 112, 155304 (2014).
[Chapman2023] L. Xin, M. Barrios, J. T. Cohen, and M. S. Chapman, Phys. Rev. Lett. 131, 133402 (2023).
[Huang2023PRXQuantum] M.-Z. Huang, J. A. de la Paz, T. Mazzoni, K. Ott, P. Rosenbusch, A. Sinatra, C. L. Garrido Alzar, and J. Reichel, PRX Quantum 4, 020322 (2023).
[Law98] C. K. Law, H. Pu, and N. P. Bigelow, Phys. Rev. Lett. 81, 5257 (1998).
[Sørensen et al.(2001)Sørensen, Duan, Cirac, and Zoller]Zoller2001nature
author author A. Sørensen, author L.-M. Duan, author J. I. Cirac, and author P. Zoller, https://doi.org/10.1038/35051038 journal journal
Nature volume 409, pages 63
(year 2001)NoStop
[Duan et al.(2002)Duan,
Cirac, and Zoller]Zoller2002PRA
author author L.-M. Duan, author J. I. Cirac, and author P. Zoller, https://doi.org/10.1103/PhysRevA.65.033619 journal journal Phys. Rev. A volume 65, pages 033619 (year 2002)NoStop
[Müstecaplıo ğğlu et al.(2002)Müstecaplıo ğğlu, Zhang, and You]You2002PRA
author author O. E. Müstecaplıo ğğlu, author M. Zhang, and author L. You, https://doi.org/10.1103/PhysRevA.66.033611 journal journal Phys. Rev. A volume 66, pages 033611 (year 2002)NoStop
[Micheli et al.(2003)Micheli, Jaksch, Cirac, and Zoller]Zoller2003PRA
author author A. Micheli, author D. Jaksch,
author J. I. Cirac, and author P. Zoller, https://doi.org/10.1103/PhysRevA.67.013607 journal journal Phys. Rev. A volume 67, pages 013607 (year 2003)NoStop
[Yi et al.(2004)Yi,
You, and Pu]Yi2004PRL
author author S. Yi, author L. You, and author H. Pu, https://doi.org/10.1103/PhysRevLett.93.040403 journal
journal Phys. Rev. Lett. volume 93, pages 040403 (year 2004)NoStop
[Jääskeläinen et al.(2004)Jääskeläinen, Zhang, and Meystre]Meystre2004PRA
author author M. Jääskeläinen, author W. Zhang, and author P. Meystre, https://doi.org/10.1103/PhysRevA.70.063612 journal journal Phys. Rev. A volume
70, pages 063612 (year 2004)NoStop
[Pezzé et al.(2005)Pezzé, Collins, Smerzi, Berman, and Bishop]Pezz2005PRA
author author L. Pezzé, author L. A. Collins, author A. Smerzi,
author G. P. Berman, and author A. R. Bishop, @noop journal journal Phys. Rev. A volume 72, pages 043612 (year 2005)NoStop
[Choi and Bigelow(2005)]Choi2005PRA
author author S. Choi and author N. P. Bigelow, https://doi.org/10.1103/PhysRevA.72.033612 journal journal Phys. Rev. A volume
72, pages 033612 (year 2005)NoStop
[Yi and Pu(2006)]Yi2006PRA
author author S. Yi and author H. Pu, https://doi.org/10.1103/PhysRevA.73.023602 journal journal Phys. Rev. A volume 73, pages 023602 (year 2006)NoStop
[Müstecaplıo ğğlu et al.(2007)Müstecaplıo ğğlu, Zhang, and You]You2007PRA
author author O. E. Müstecaplıo ğğlu, author W. Zhang, and author L. You, https://doi.org/10.1103/PhysRevA.75.023605 journal journal Phys. Rev. A volume 75, pages 023605 (year 2007)NoStop
[Huang et al.(2012)Huang,
Zhang, Lü, Wang, and Yi]Yi2012PRA
author author Y. Huang, author Y. Zhang,
author R. Lü, author
X. Wang, and author
S. Yi, https://doi.org/10.1103/PhysRevA.86.043625 journal journal Phys. Rev. A volume 86, pages 043625 (year 2012)NoStop
[Sørensen and Mølmer(1999)]Sorensen1999PRL
author author A. Sørensen and author K. Mølmer, https://doi.org/10.1103/PhysRevLett.83.2274
journal journal Phys. Rev. Lett. volume 83, pages 2274 (year
1999)NoStop
[Gerbier et al.(2006)Gerbier, Fölling, Widera, Mandel, and Bloch]Gerbier2006PRL
author author F. Gerbier, author S. Fölling,
author A. Widera, author O. Mandel, and author
I. Bloch, https://doi.org/10.1103/PhysRevLett.96.090401 journal
journal Phys. Rev. Lett. volume 96, pages 090401 (year 2006)NoStop
[Ma et al.(2011)Ma,
Wang, Sun, and Nori]MA2011
author author J. Ma, author X. Wang, author C. Sun, and author
F. Nori, https://doi.org/https://doi.org/10.1016/j.physrep.2011.08.003 journal journal Phys. Rep. volume
509, pages 89 (year 2011)NoStop
[Płodzie ńń et al.(2020)Płodzie ńń,
Ko śścielski, Witkowska, and Sinatra]Plodzien2020PRA
author author M. Płodzie ńń, author
M. Ko śścielski, author E. Witkowska, and author A. Sinatra, https://doi.org/10.1103/PhysRevA.102.013328 journal journal Phys. Rev. A volume
102, pages 013328 (year 2020)NoStop
[Hernández Yanes et al.(2022)Hernández Yanes, Płodzie ńń, Mackoit Sinkeviččien ėė, ŽŽlabys, Juzeli u̅ūnas, and Witkowska]Yanes2022PRL
author author T. Hernández Yanes, author M. Płodzie ńń, author
M. Mackoit Sinkeviččien ėė, author
G. ŽŽlabys,
author G. Juzeli u̅ūnas, and author
E. Witkowska, https://doi.org/10.1103/PhysRevLett.129.090403 journal
journal Phys. Rev. Lett. volume 129, pages 090403 (year 2022)NoStop
[Bornet et al.(2023)Bornet,
Emperauger, Chen, Ye,
Block, Bintz, Boyd,
Barredo, Comparin, Mezzacapo,
Roscilde, Lahaye, Yao, and Browaeys]Bornet2023Nature
author author G. Bornet, author G. Emperauger,
author C. Chen, author
B. Ye, author M. Block, author M. Bintz, author J. A. Boyd, author D. Barredo, author T. Comparin,
author F. Mezzacapo, author T. Roscilde, author
T. Lahaye, author N. Y. Yao, and author A. Browaeys, https://doi.org/10.1038/s41586-023-06414-9 journal journal Nature volume 621, pages
728 (year 2023)NoStop
[Dziurawiec et al.(2023)Dziurawiec, Hernández Yanes, Płodzie ńń, Gajda,
Lewenstein, and Witkowska]Dziurawiec2023PRA
author author M. Dziurawiec, author T. Hernández Yanes, author M. Płodzie ńń, author
M. Gajda, author M. Lewenstein, and author E. Witkowska, https://doi.org/10.1103/PhysRevA.107.013311 journal journal Phys. Rev. A volume 107, pages 013311 (year 2023)NoStop
[Agarwal et al.(1997)Agarwal, Puri, and Singh]Agarwal1997PRA
author author G. S. Agarwal, author R. R. Puri, and author R. P. Singh, https://doi.org/10.1103/PhysRevA.56.2249 journal journal Phys. Rev. A volume 56, pages 2249 (year 1997)NoStop
[Dimer et al.(2007)Dimer,
Estienne, Parkins, and Carmichael]Dimer2007PRA
author author F. Dimer, author B. Estienne,
author A. S. Parkins, and author H. J. Carmichael, https://doi.org/10.1103/PhysRevA.75.013804 journal journal Phys. Rev. A volume 75, pages 013804 (year 2007)NoStop
[Morrison and Parkins(2008a)]Morrison2008PRA
author author S. Morrison and author A. S. Parkins, https://doi.org/10.1103/PhysRevA.77.043810 journal journal Phys. Rev. A volume
77, pages 043810 (year
2008a)NoStop
[Morrison and Parkins(2008b)]Morrison2008PRL
author author S. Morrison and author A. S. Parkins, https://doi.org/10.1103/PhysRevLett.100.040403
journal journal Phys. Rev. Lett. volume 100, pages 040403 (year
2008b)NoStop
[Bennett et al.(2013)Bennett, Yao, Otterbach, Zoller, Rabl, and Lukin]Bennett2013PRL
author author S. D. Bennett, author N. Y. Yao,
author J. Otterbach, author P. Zoller, author
P. Rabl, and author
M. D. Lukin, https://doi.org/10.1103/PhysRevLett.110.156402 journal
journal Phys. Rev. Lett. volume 110, pages 156402 (year 2013)NoStop
[Dalla Torre et al.(2013)Dalla Torre, Otterbach, Demler,
Vuletic, and Lukin]Dalla2013PRL
author author E. G. Dalla Torre, author J. Otterbach, author E. Demler,
author V. Vuletic, and author M. D. Lukin, https://doi.org/10.1103/PhysRevLett.110.120402 journal
journal Phys. Rev. Lett. volume 110, pages 120402 (year 2013)NoStop
[Masson et al.(2017)Masson,
Barrett, and Parkins]Masson2017PRL
author author S. J. Masson, author M. D. Barrett, and author S. Parkins, https://doi.org/10.1103/PhysRevLett.119.213601
journal journal Phys. Rev. Lett. volume 119, pages 213601 (year
2017)NoStop
[Zhang et al.(2017)Zhang,
Lee, Kumar, Arnold,
Masson, Parkins, and Barrett]Zhiqiang2017OPtica
author author Z. Zhang, author C. H. Lee,
author R. Kumar, author K. J. Arnold, author
S. J. Masson, author
A. S. Parkins, and author
M. D. Barrett, https://doi.org/10.1364/OPTICA.4.000424 journal journal Optica volume 4, pages 424
(year 2017)NoStop
[Borregaard et al.(2017)Borregaard, Davis, Bentsen, Schleier-Smith, and Sørensen]Borregaard2017NJP
author author J. Borregaard, author E. J. Davis, author G. S. Bentsen,
author M. H. Schleier-Smith, and author A. S. Sørensen, https://doi.org/10.1088/1367-2630/aa8438 journal
journal New J. Phys. volume 19, pages 093021 (year 2017)NoStop
[Groszkowski et al.(2020)Groszkowski, Lau, Leroux, Govia, and Clerk]Groszkowski2020PRL
author author P. Groszkowski, author H.-K. Lau, author C. Leroux,
author L. C. G. Govia, and author A. A. Clerk, https://doi.org/10.1103/PhysRevLett.125.203601 journal
journal Phys. Rev. Lett. volume 125, pages 203601 (year 2020)NoStop
[Groiseau et al.(2021)Groiseau, Masson, and Parkins]Groiseau2021PRA
author author C. Groiseau, author S. J. Masson, and author S. Parkins, https://doi.org/10.1103/PhysRevA.104.053721 journal journal Phys. Rev. A volume
104, pages 053721 (year 2021)NoStop
[Groszkowski et al.(2022)Groszkowski, Koppenhöfer, Lau, and Clerk]Groszkowski2022PRL
author author P. Groszkowski, author M. Koppenhöfer, author H.-K. Lau, and author A. A. Clerk, @noop journal journal Phys.
Rev. X volume 12, pages 011015
(year 2022)NoStop
[Li et al.(2022)Li,
Braverman, Colombo, Shu,
Kawasaki, Adiyatullin, Pedrozo-Peñafiel, Mendez, and Vuleti ćć]Li2022PRXQuantum
author author Z. Li, author B. Braverman,
author S. Colombo, author C. Shu, author
A. Kawasaki, author
A. F. Adiyatullin, author
E. Pedrozo-Peñafiel, author
E. Mendez, and author
V. Vuleti ćć, https://doi.org/10.1103/PRXQuantum.3.020308
journal journal PRX Quantum volume 3, pages 020308 (year
2022)NoStop
[Huang et al.(2023b)Huang, Zhang,
Wang, Hua, Tang, and Liu]Huang2023PRA
author author L.-G. Huang, author X. Zhang,
author Y. Wang, author
Z. Hua, author Y. Tang, and author Y.-C. Liu, https://doi.org/10.1103/PhysRevA.107.042613
journal journal Phys. Rev. A volume 107, pages 042613 (year
2023b)NoStop
[Gil et al.(2014)Gil,
Mukherjee, Bridge, Jones, and Pohl]Gil2014PRL
author author L. I. R. Gil, author R. Mukherjee, author E. M. Bridge, author M. P. A. Jones, and author T. Pohl, @noop journal journal Phys. Rev. Lett. volume 112, pages 103601 (year 2014)NoStop
[Zeiher et al.(2016)Zeiher,
van Bijnen, Schauß, Hild,
Choi, Pohl, Bloch, and Gross]Zeiher2016NP
author author J. Zeiher, author R. van Bijnen,
author P. Schauß, author S. Hild, author
J.-y. Choi, author
T. Pohl, author I. Bloch, and author C. Gross, @noop journal journal Nat. Phys. volume 12, pages
1095 (year 2016)NoStop
[Muniz et al.(2020)Muniz,
Barberena, Lewis-Swan, Young,
Cline, Rey, and Thompson]Muniz2020Nature
author author J. A. Muniz, author D. Barberena,
author R. J. Lewis-Swan,
author D. J. Young, author J. R. K. Cline, author
A. M. Rey, and author
J. K. Thompson, @noop journal journal Nature volume 580, pages 602 (year 2020)NoStop
[Borish et al.(2020)Borish,
Markovi ćć, Hines, Rajagopal, and Schleier-Smith]Borish2020PRL
author author V. Borish, author O. Markovi ćć, author
J. A. Hines, author
S. V. Rajagopal, and author
M. Schleier-Smith, @noop
journal journal Phys. Rev. Lett. volume 124, pages 063601 (year
2020)NoStop
[Defenu et al.(2023)Defenu,
Donner, Macrì, Pagano,
Ruffo, and Trombettoni]Defenu2023RMP
author author N. Defenu, author T. Donner,
author T. Macrì, author G. Pagano, author
S. Ruffo, and author
A. Trombettoni, @noop journal journal Rev. Mod. Phys. volume
95, pages 035002 (year 2023)NoStop
[Dusuel and Vidal(2004)]Vidal2004PRL
author author S. Dusuel and author J. Vidal, @noop journal journal Phys. Rev. Lett. volume 93, pages 237204 (year 2004)NoStop
[Dusuel and Vidal(2005)]VidalPRB2005
author author S. Dusuel and author J. Vidal, @noop journal journal Phys. Rev. B volume 71, pages 224420 (year 2005)NoStop
[Ma and Wang(2009)]Ma2009PRA
author author J. Ma and author X. Wang, https://doi.org/10.1103/PhysRevA.80.012318 journal
journal Phys. Rev. A volume 80, pages 012318 (year 2009)NoStop
[Holstein and Primakoff(1940)]HP1940PR
author author T. Holstein and author H. Primakoff, https://doi.org/10.1103/PhysRev.58.1098 journal journal Phys. Rev. volume
58, pages 1098 (year 1940)NoStop
[Gerry and Knight(2004)]gerry_knight_2004
author author C. Gerry and author P. Knight, https://doi.org/10.1017/CBO9780511791239 title
Introductory Quantum Optics (publisher Cambridge University
Press, year 2004)NoStop
[app()]approx
@noop note At a large enough θ, tanhθ
can firstly be approximated by (1-2exp(-2θ)). Furthermore, the
approximation (ν^*/μ)^K ≈ 1-2Kexp(-2θ) is valid when
exp(-2θ)≪ 1/(2K-2). Since maxK=J, the minimum requirement for
this approximation to be valid for all Ks is exp(-2θ)≪1/(2J-2).
Specifically, for J=1000, the valid regime for the the exponential decay
formula of spin average(Eq.(13)) is θ>3.8, which coincides with the
results shown in Fig.1(a).Stop
[Robertson(1930)]Robertson1930general
author author H. P. Robertson, @noop journal journal
Phys. Rev volume 35, pages 667
(year 1930)NoStop
[Schrödinger(1930)]schrodinger1930
author author E. Schrödinger, @noop journal journal Phys.-Math. Klasse volume 14, pages 296 (year 1930)NoStop
[Kinani and Daoud(2001)]Kinani_2001GIS
author author A. H. E. Kinani and author M. Daoud, @noop journal journal J. Phys. A: Math. Gen. volume 34, pages 5373 (year 2001)NoStop
[not()]note1
@noop note We note that the phase sensitivity is in
principle determined by the quantum Cramer-Rao bound
Δϕ=ξ_R/√(J)≥1/√(F_Q), where F_Q denotes the quantum
Fisher information. From this inequality, one finds that the spin squeezing
is limited by F_Q. It is evident that the GIS |χ⟩ saturates the
quantum Cramer-Rao bound, resulting in
ξ_R^2=2J/F_Q <cit.>.Stop
[Giovannetti et al.(2004)Giovannetti, Lloyd, and Maccone]Giovannetti2004Science
author author V. Giovannetti, author S. Lloyd, and author L. Maccone, https://doi.org/10.1126/science.1104149 journal journal Science volume 306, pages
1330 (year 2004)NoStop
[Giovannetti et al.(2006)Giovannetti, Lloyd, and Maccone]Giovannetti2006PRL
author author V. Giovannetti, author S. Lloyd, and author L. Maccone, https://doi.org/10.1103/PhysRevLett.96.010401 journal
journal Phys. Rev. Lett. volume 96, pages 010401 (year 2006)NoStop
[Giovannetti et al.(2011)Giovannetti, Lloyd, and Maccone]Giovannetti2011NP
author author V. Giovannetti, author S. Lloyd, and author L. Maccone, https://doi.org/10.1038/nphoton.2011.35 journal journal Nat. Photonics volume 5, pages 222 (year 2011)NoStop
[Tóth and Apellaniz(2014)]Toth2014
author author G. Tóth and author I. Apellaniz, https://doi.org/10.1088/1751-8113/47/42/424006
journal journal J. Phys. A: Math. Theor. volume 47, pages 424006 (year 2014)NoStop
[Zhang and Duan(2014)]Zhang_2014NJP
author author Z. Zhang and author L. M. Duan, @noop journal journal New J.
Phys., volume 16, pages 103037
(year 2014)NoStop
[two()]two_mode
@noop note The same effective Hamiltonian H_A is
derived approximately in a Bose-Einstein condensate trapped in a double-well
potential and the Heisenberg-limit spin squeezing may be achieved under the
two-mode, large J, and mean-field approximations. However, our results on
the spin Bogoliubov Hamiltonian H_B are free of these approximations thus
are applicable to many other systems.Stop
[Płodzie ńń et al.(2022)Płodzie ńń,
Lewenstein, Witkowska, and Chwede ńńczuk]Plodzie2022PRL
author author M. Płodzie ńń, author
M. Lewenstein, author
E. Witkowska, and author
J. Chwede ńńczuk, https://doi.org/10.1103/PhysRevLett.129.250402
journal journal Phys. Rev. Lett. volume 129, pages 250402 (year
2022)NoStop
|
http://arxiv.org/abs/2409.03272v1 | 20240905063001 | OccLLaMA: An Occupancy-Language-Action Generative World Model for Autonomous Driving | [
"Julong Wei",
"Shanshuai Yuan",
"Pengfei Li",
"Qingda Hu",
"Zhongxue Gan",
"Wenchao Ding"
] | cs.CV | [
"cs.CV",
"cs.RO"
] |
OccLLaMA: An Occupancy-Language-Action Generative World Model for Autonomous Driving
Julong Wei, Shanshuai Yuan, Pengfei Li, Qingda Hu, Zhongxue Gan, Wenchao Ding
September 5, 2024
========================================================================
§ ABSTRACT
The rise of multi-modal large language models (MLLMs) has spurred their applications in autonomous driving. Recent MLLM-based methods perform actions by learning a direct mapping from perception to action, neglecting the dynamics of the world and the relations between action and world dynamics. In contrast, human beings possess a world model that enables them to simulate future states based on 3D internal visual representations and to plan actions accordingly. To this end, we propose OccLLaMA, an occupancy-language-action generative world model, which uses semantic occupancy as a general visual representation and unifies vision-language-action (VLA) modalities through an autoregressive model. Specifically, we introduce a novel VQVAE-like scene tokenizer to efficiently discretize and reconstruct semantic occupancy scenes, accounting for their sparsity and class imbalance. Then, we build a unified multi-modal vocabulary for vision, language and action. Furthermore, we enhance an LLM, specifically LLaMA, to perform next token/scene prediction on this unified vocabulary and thereby complete multiple tasks in autonomous driving. Extensive experiments demonstrate that OccLLaMA achieves competitive performance across multiple tasks, including 4D occupancy forecasting, motion planning, and visual question answering, showcasing its potential as a foundation model in autonomous driving.
§ INTRODUCTION
In recent years, we have witnessed significant breakthroughs in multi-modal large language models (MLLMs) capable of integrating various modalities, such as language, images and audio, which has accelerated the development of embodied artificial intelligence (Embodied AI). Nevertheless, a general agent that can address multiple tasks in the real world has yet to emerge. This is inherently because existing MLLMs perform actions by learning a direct mapping from perception to action, neglecting the dynamics of the world and the relations between action and world dynamics. In contrast, human beings possess a world model that enables them to simulate future states based on 3D internal visual representations and to plan actions accordingly. Therefore, exploring how to construct an agent's world model is crucial for the advancement of embodied intelligence.
Autonomous driving, as a representative application of Embodied AI, has witnessed extensive research on world models. However, the precise definition of a world model for autonomous driving remains an open question. Current world models for autonomous driving focus on sensor prediction tasks such as video prediction <cit.>, point cloud prediction <cit.> and occupancy prediction <cit.>, yet they fail to simultaneously achieve scene-evolution forecasting, language reasoning, and interaction with the real world. Thus, we propose that a model capable of unifying the modeling of vision, language, and actions (VLA), akin to human abilities, would be a promising candidate for an autonomous driving world model.
However, two challenges are crucial and must be solved to build a VLA world model. The first is to devise a general 3D visual representation that facilitates both understanding and generation, and the second is to develop a multi-modal framework capable of accommodating the VLA modalities. In recent years, semantic occupancy (Occ) has gained significant attention as a general 3D visual representation: it describes fine-grained 3D structure while also containing high-level semantic information, making it well suited for aligning space and semantics. Meanwhile, the feasibility of vision generation based on autoregressive language models has been thoroughly validated, with performance comparable to diffusion models, which are specialists in visual generation. These observations provide valuable insights for addressing the challenges and constructing a VLA world model based on an autoregressive model with an Occ visual representation.
Based on the above observations, we propose OccLLaMA, a unified 3D occupancy-language-action generative world model, which unifies VLA-related tasks including but not limited to scene understanding, planning, and 4D occupancy forecasting, as shown in <Ref>. To equip OccLLaMA with the ability to understand and generate the vision modality, we choose Occ as a general visual representation and introduce a novel scene tokenizer that effectively constructs a discrete scene vocabulary, accounting for sparsity and class imbalance. Then, by combining the scene vocabulary, language vocabulary, and action vocabulary, we construct a unified multi-modal vocabulary for VLA tasks, which provides a foundation for integrating VLA in one model. Furthermore, we enhance an LLM, specifically LLaMA <cit.>, to implement next token/scene prediction on the unified multi-modal vocabulary, building a VLA world model similar to that of humans.
We summarize our contributions as follows:
* An occupancy-language-action generative world model, OccLLaMA, which uses Occ as the visual representation and supports multiple tasks through a unified multi-modal vocabulary and an enhanced autoregressive model based on LLaMA.
* A novel scene tokenizer that efficiently discretizes and reconstructs Occ scenes, accounting for sparsity and class imbalance.
* Extensive experiments against SOTA methods, achieving competitive performance across multiple tasks, including 4D occupancy forecasting, motion planning, and visual question answering.
§ RELATED WORKS
§.§ MLLMs in Autonomous Driving
The advancement of LLMs has opened new paradigms in autonomous driving, including scene understanding <cit.> and end-to-end decision-making <cit.>.
LLM-driven decision-making methods have shown potential in addressing interpretability and generalization challenges in learning-based systems by making inferences in text space. For real-world autonomous driving, various techniques have emerged to convey environmental information to models, with research advancing to extend input modalities more effectively.
These include template-based scene descriptions in natural language <cit.>, vector-embedding inputs combined with language prompts <cit.>, camera-perceived image embeddings <cit.>, etc. It remains to be validated whether more modalities imply higher performance; for example, point clouds do not enhance performance in DriveMLM <cit.>. Moreover, the output modality in past work tends to be relatively homogeneous, which limits decision accuracy and stability as well as closed-loop feedback capability. There is thus an opportunity for multi-modal large language models (MLLMs) and world models (WMs) to shake hands.
§.§ World Model in Autonomous Driving
World models aim to predict future scenes based on actions and observations <cit.>. In autonomous driving, world models are often used for data generation and decision making. Various models represent the scene in different spaces; they can be divided into 2D image representations <cit.>, 3D point cloud representations <cit.> and 3D occupancy representations <cit.>.
Visual world models using 2D image representations offer scalability due to sensor flexibility but lack 3D scene comprehension. While 3D point cloud representations address this problem, they lack semantic information.
Some works <cit.> focus on multi-modal representations, but it is difficult to align the features of results generated in different modalities. Therefore, integrating 3D scene representation and semantic understanding is a promising way to model scene evolution. Unlike the paradigm of <cit.>, which generates 3D occupancy without semantic meaning attached and represents the scene separately in camera and point cloud form, this work follows the paradigm of Occupancy World <cit.> and adopts a representation in 3D occupancy space with semantics.
§.§ Autoregressive Visual Generation
Autoregressive (AR) visual generation refers to models that use autoregressive methods to generate images. Early models such as VQVAE <cit.>, VQGAN <cit.> and DALL-E <cit.>, which convert images into discrete tokens and generate them sequentially, faced limitations in output quality and scalability; diffusion models <cit.> then came to dominate the field of visual generation with a distinct paradigm.
Recently, the simplicity of autoregressive models has enabled unified understanding and generation that scales effectively to big data, leading to notable success and increased attention.
The VAR model <cit.> enables GPT-based autoregressive models to surpass diffusion models in image generation. LlamaGen <cit.> outperforms diffusion models in conditional image generation, showing that pure autoregressive models can serve as a foundation for image generation without inductive biases on visual signals. Integrating AR language models with vision generation nevertheless remains challenging, particularly in creating unified models for both language and vision tasks.
§ METHOD
§.§ Overview
We propose OccLLaMA, a unified occupancy-language-action framework. As illustrated in <Ref>, the core components of OccLLaMA are the scene tokenizer (<Ref>) and the occupancy-language-action generative world model (<Ref>). To support multiple tasks, we introduce a three-stage training scheme (<Ref>) consisting of scene-tokenizer training, occupancy-language-action pre-training, and instruction tuning.
§.§ Scene Tokenizer
To represent scenes using discrete tokens, a common approach is to employ a VQVAE-like architecture <cit.>. However, approximately 90% of the grids <cit.> in occupancy are filled with air, leading to significant sparsity.
Existing methods that apply dense convolutional operators to the air category are both expensive and inefficient. Additionally, the imbalance between categories further hinders learning efficiency. To address these challenges, we introduce a sparse encoding strategy for the encoder, inspired by point cloud processing techniques. Meanwhile, we decouple the non-occupied category from other semantic categories, allowing for more efficient scene reconstruction.
§.§.§ Encoder
The original scene is represented as x ∈ℝ^H × W × D, where the 3D space is divided into dense H × W × D voxels and each voxel is assigned a semantic label l. We then sparsify x into y ∈ P^H × W by discarding the air voxels and representing the semantic occupancy voxels as a 1D pseudo-point cloud set P={p_i}_i=1^N arranged along the BEV direction, where N is the number of non-air voxels within the current pillar. Each point p_i is a vector (d,l) with height d and semantic label l. We then aggregate the pseudo-point cloud features using a pillar embedding <cit.> and employ a Swin-transformer block <cit.> to obtain the BEV feature map z = E(y) ∈ℝ^H/r×W/r× c, where r is the downsampling rate and c is the latent feature dimension.
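For illustration, a minimal PyTorch-style sketch of this sparsification step might read as follows (the function and variable names are ours, and the PointPillars-style feature embedding and Swin block that follow are omitted):

import torch

def occupancy_to_pseudo_points(occ, air_id=0):
    # occ: (H, W, D) integer tensor of semantic labels; most voxels are air
    H, W, D = occ.shape
    mask = occ != air_id                   # discard the ~90% air voxels
    h, w, d = mask.nonzero(as_tuple=True)  # coordinates of non-air voxels
    # each pseudo-point p_i = (height d, label l), grouped by its BEV pillar (h, w)
    points = torch.stack([d.float(), occ[h, w, d].float()], dim=-1)
    pillar_index = h * W + w               # flat BEV index of the pillar
    return points, pillar_index

A scatter/pooling over pillar_index then yields per-pillar features, which the Swin-transformer block downsamples to the H/r × W/r × c BEV map z.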
§.§.§ Quantification
To obtain discrete representations, we next transform z into a collection of codebook entries ẑ through vector quantization. The learnable codebook Z = {ẑ_i}^K_i=1 consists of K vectors, each with a dimension of c. The quantization process Q(·) replaces each z_i with its nearest codebook entry ẑ_k in Z, expressed as:
ẑ_i = Q(z_i) := arg min_ẑ_k∈ Z‖ z_i - ẑ_k ‖_2
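In code, the nearest-entry lookup of (<ref>) is a distance argmin; the straight-through gradient trick in the sketch below is standard for VQVAE-style training and is our assumption about the implementation:

import torch

def vector_quantize(z, codebook):
    # z: (..., c) encoder output; codebook: (K, c) learnable entries of Z
    flat = z.reshape(-1, z.shape[-1])
    idx = torch.cdist(flat, codebook).argmin(dim=-1)  # nearest codebook entry per token
    z_q = codebook[idx].view_as(z)
    z_q = z + (z_q - z).detach()  # straight-through estimator for encoder gradients
    return z_q, idx.view(z.shape[:-1])                # the indices s^i feed the LLM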
§.§.§ Decoder
Since height information is lost in the BEV feature map after quantization, the decoder restores dense 3D voxel features by stacking convolution blocks and up-sampling layers. Specifically, to address class imbalance, we instantiate a lightweight voxel head and a class head separately to decode the geometric and semantic information of the occupancy. Notably, the voxel head provides an occupied mask for the class head, allowing us to supervise the semantics of the occupied voxels only.
§.§.§ Loss
To train this scene tokenizer, we follow OccWorld <cit.> and use three loss functions for optimization: a cross-entropy loss ℒ_c and a Lovász-softmax loss ℒ_l for geometry (ge) and semantics (se) reconstruction, and an embedding loss ℒ_e for codebook learning.
ℒ = λ_1 ℒ_c^ge + λ_2ℒ_l^ge + λ_3 ℒ_c^se + λ_4ℒ_l^se + λ_5ℒ_e
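A sketch of the masked geometry/semantics supervision (cross-entropy terms only; the Lovász-softmax and embedding terms are omitted for brevity, and the assumption that air is label 0 with semantic classes starting at 1 is ours):

import torch
import torch.nn.functional as F

def reconstruction_loss(voxel_logits, class_logits, occ, air_id=0, lam1=10.0, lam3=10.0):
    # voxel_logits: (H, W, D) occupancy logits; class_logits: (H, W, D, C) semantic logits
    occupied = occ != air_id
    loss_geo = F.binary_cross_entropy_with_logits(voxel_logits, occupied.float())
    # the voxel head masks the class head: semantics supervised on occupied voxels only
    loss_sem = F.cross_entropy(class_logits[occupied], (occ[occupied] - 1).long())
    return lam1 * loss_geo + lam3 * loss_sem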
§.§ Generative World Model
§.§.§ Unified Vocabulary
Employing the scene tokenizer of <Ref>, an occupancy scene x can be mapped and flattened into a token sequence ẑ^1:L with each token in ℝ^c, where L = H/r×W/r, allowing for a joint representation with the language vocabulary V_t = { v_t^i}_i=1^K_t of the original LLM. Specifically, we first represent the scene tokens ẑ^1:L as a sequence of indices s^1:L = {s^i}_i=1^L, where s^i is the codebook index of the scene token ẑ^i. We can thus build a scene vocabulary V_s = { v_s^i}_i=1^K_s that is order-preserving with respect to our scene codebook Z. Since it is non-trivial to output fine-grained numerical results using general LLMs, we divide the coordinates of waypoints into N bins empirically, based on the statistics of the trajectory set, and map each waypoint to the nearest bin to build an action vocabulary V_a = { v_a^i}_i=1^K_a. Additionally, we add several special functional tokens { v_f^i}_i=1^K_f to denote modality boundaries and to assist in next scene prediction. Thus, we can build a unified occupancy-language-action vocabulary V = { V_s, V_t, V_a, { v_f^i}_i=1^K_f} that formulates diverse tasks in a generative format, where both input and output can be one of the three modalities or a mixture, depending on the specific task to be solved.
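To make the layout concrete, the following sketch shows one possible arrangement of the unified vocabulary and the waypoint binning (all sizes, ranges and offsets here are illustrative assumptions, not values from the paper):

K_t, K_s, N_BINS = 32000, 2048, 128      # text tokens, scene codes, bins per coordinate
SCENE_BASE = K_t                          # scene ids follow the text ids
ACTION_BASE = K_t + K_s                   # action-bin ids follow the scene ids

def scene_token(s_i):                     # codebook index s^i -> unified-vocabulary id
    return SCENE_BASE + s_i

def waypoint_tokens(x, y, lo=-50.0, hi=50.0):
    # map each coordinate to the nearest of N_BINS bins (empirical trajectory range)
    def to_bin(v):
        v = min(max(v, lo), hi)
        return int(round((v - lo) / (hi - lo) * (N_BINS - 1)))
    return [ACTION_BASE + to_bin(x), ACTION_BASE + N_BINS + to_bin(y)]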
§.§.§ Next Token / Scene Prediction
We observe that both language and action are temporal sequences, so the tokens within these sequences are naturally suited to temporal attention with the original causal masks and the next token prediction mechanism. However, the tokens within a scene sequence do not inherently follow a temporal order, and the scene sequence tends to be longer than the language and action ones. If next token prediction is performed line by line within a scene, it fails to capture the spatial relationships and incurs high computational costs. To address these issues, we introduce next scene prediction while preserving next token prediction.
As illustrated in <Ref>, we implement spatial attention at the positions corresponding to scene tokens to better capture the spatial relationships within the scene. Correspondingly, we initialize learnable scene queries to predict the entire scene in one forward step, enabling better interaction among tokens within the scene and significantly reducing inference time. In <Ref>, we provide a detailed explanation of the mechanism for executing next token / scene prediction simultaneously.
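A minimal sketch of such a mixed attention mask, causal over the whole sequence but bidirectional inside each scene span (segment layout and sizes are illustrative):

import torch

def mixed_attention_mask(segments):
    # segments: list of (kind, length), e.g. [("text", 12), ("scene", 625), ("action", 6)]
    T = sum(n for _, n in segments)
    allow = torch.tril(torch.ones(T, T, dtype=torch.bool))  # causal by default
    pos = 0
    for kind, n in segments:
        if kind == "scene":
            allow[pos:pos + n, pos:pos + n] = True          # spatial attention in-scene
        pos += n
    return allow  # True = may attend; 625 = 25 x 25 scene tokens at the VQA resolution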
§.§ Train Stage
Our training scheme includes three stages:
§.§.§ Training of scene tokenizer
We first focus on learning the scene codebook to represent occupancy as discrete tokens, using the objective defined in <Ref>. Once optimized, the scene tokenizer remains unchanged throughout the subsequent stages of the pipeline.
§.§.§ 3D Occupancy-Language-Action pre-training
In this stage, we focus on aligning the occupancy, language and action modalities. We use a world-model objective and a scene-caption objective for full-parameter pre-training: the former supervises the alignment between occupancy and action to learn the evolution of the world, while the latter supervises the alignment between occupancy and language to learn semantic understanding of the 3D scene.
§.§.§ Instruction tuning
In this stage, we fine-tune the model with prompt-based instructions for different scene-understanding and planning tasks using LoRA <cit.>.
§ EXPERIMENTS
§.§ Experiment Settings
§.§.§ Dataset
NuScenes <cit.> is a widely recognized foundation dataset in autonomous driving. The dataset comprises 700 training videos and 150 validation videos, each spanning 20 seconds with a key-frame rate of 2 Hz. Occ3D <cit.> is a large-scale dataset for 3D occupancy based on NuScenes, providing a semantic occupancy representation for each frame. NuScenes-QA <cit.> is a multi-modal visual question answering dataset based on NuScenes. It encompasses five categories of questions: existence, counting, query-object, query-status, and comparison, which are further categorized into zero-hop and one-hop by complexity. For aligning the occupancy and language modalities, we collect a large caption dataset based on NuScenes. Specifically, this dataset matches occupancy frames with the positions, classes, states, and future trajectories of the objects appearing in them.
§.§.§ Implementation Details
For most comparisons, we set the language-model backbone to LLaMA-3.1-8b and the scene tokenizer parameters to 50×256×2048. For the VQA comparison, we set the language-model backbone to LLaMA-2-7b and the scene tokenizer resolution to 25×25 for fairness. We employ the AdamW optimizer for all training. The scene tokenizer is trained with a learning rate of 10^-4, a batch size of 4, λ_1=λ_3 = 10, λ_2 = λ_4 = 5, and λ_5 = 5, while the generative model is trained with a learning rate of 10^-4 and a batch size of 1 in the pre-training stage, and 5 × 10^-5 and 4 in the instruction tuning stage. The scene tokenizer is trained for 100 epochs on 8 RTX 4090 GPUs, while the generative model is trained for 10 epochs in the pre-training stage and 5 epochs in the instruction tuning stage on 8 V100 GPUs.
§.§ Results and Analysis
§.§.§ 4D Occupancy Forecasting
The task aims to forecast the future 3D occupancy scene given a few historical occupancy inputs. Specifically, we follow existing works <cit.>, using 2 seconds of historical frames to forecast the subsequent 3 seconds, with mIoU and IoU as the main evaluation metrics. As illustrated in <Ref>, we compare OccLLaMA with the state-of-the-art approach, OccWorld, in two settings: using ground-truth 3D occupancy (-O) and using predicted results from FBOCC <cit.> based on camera data (-F). Firstly, we observe that our scene tokenizer demonstrates superior scene reconstruction capabilities. Additionally, OccLLaMA achieves competitive forecasting results within 1 s and significantly outperforms OccWorld over longer horizons, highlighting its enhanced long-term prediction capabilities. Furthermore, OccLLaMA-F can be regarded as an end-to-end pipeline, as it takes cameras as input. Despite the complexity of the task, OccLLaMA consistently exhibits strong predictive performance. We present visualizations consistent with the above conclusions in <Ref>.
§.§.§ Motion Planning
As illustrated in <Ref>, we compare the motion planning capabilities of OccLLaMA with several strong baselines that utilize various inputs and supervisions. We also compare our model with OccWorld under the same settings as in the 4D occupancy forecasting task. We observe that UniAD <cit.> delivers the best performance, but it relies on extensive supervision annotations, limiting its scalability to large-scale datasets. As an alternative, OccLLaMA achieves competitive performance relying solely on 3D semantic occupancy, demonstrating its potential to scale as a foundation model in autonomous driving. Compared to methods using occupancy as input, OccLLaMA significantly outperforms OccNet <cit.>, further highlighting the superiority of autoregression. Additionally, surpassing the autoregressive state-of-the-art method OccWorld demonstrates the effectiveness of our pipeline. Moreover, the non-trivial performance achieved by integrating existing methods showcases the generalizability of our approach. Notably, outputting trajectories without alternating scene predictions results in a performance drop, suggesting that the world-model paradigm holds greater potential.
§.§.§ Visual Question Answering
To the best of our knowledge, ours is the first MLLM to take occupancy data with textual instructions as input and to perform a series of 3D tasks in autonomous driving. We choose LiDAR-LLM <cit.>, the state of the art on the NuScenes-QA benchmark among methods that integrate LiDAR into LLMs, as our primary baseline for comparison. Additionally, we evaluate a robust 2D LLM on the NuScenes-QA benchmark using depth images and raw images as input separately. We assess model performance using the Top-1 accuracy metric and conduct separate evaluations for different question types. To ensure fairness, we implement our pipeline on LLaMA-2-7b, the same base model as LiDAR-LLM <cit.> and LLaVA <cit.>.
As illustrated in <Ref>, we observe OccLLaMA delivers the best performance overall. Compared to LiDAR-LLM, OccLLaMA can capture semantic information in 3D space better, which is essential for object-related questions. Additionally, OccLLaMA incorporates spatial information as input and aligns semantic and spatial data naturally, which is beneficial for questions involving spatial relationships.
§.§ Ablation Study
§.§.§ Scene Tokenizer Parameters
<Ref> compares the impact of different hyperparameters on the reconstruction performance of the scene tokenizer, including the latent-space resolution, feature dimension, and codebook size. A larger codebook leads to overfitting and inefficient codebook utilization, while smaller codebooks and feature dimensions fail to effectively model the scene distributions. The resolution is positively correlated with reconstruction ability and has the most significant impact. However, a larger resolution results in a greater number of tokens per scene, thereby increasing the burden on forecasting.
§.§.§ Generative Model Components
We compare the impact of different components of the generative model on forecasting and planning performance. As illustrated in <Ref>, w/o spatial attention means that the tokens in one scene keep their original causal attention based on the flattened sequence order, and w/o action tokenization means that waypoints are formed by concatenating tokens from the original language vocabulary. We observe that using action-specific tokens, rather than relying on the language vocabulary, results in performance gains on forecasting and planning. This improvement can be attributed to the fact that action-specific tokens preserve the physical priors of waypoints while avoiding the inductive bias of the language vocabulary. Additionally, we find that employing spatial attention to model spatial dependencies within the scene is essential for forecasting. However, it leads to a slight decrease in planning performance, which we attribute to spatial attention locally disrupting the global causal attention.
§.§.§ Benefits of Pretraining
As shown in <Ref>, we compare the impact of different training settings on QA performance: instruction fine-tuning from pre-training (<Ref>) versus instruction fine-tuning from scratch. We observe that pre-training for modality alignment leads to an overall improvement on VQA. This indicates that once OccLLaMA gains an understanding of basic 3D scenes and world dynamics, it can better accomplish high-level QA tasks.
§ CONCLUSION
In this paper, we propose OccLLaMA, a 3D occupancy-language-action generative world model for multiple tasks in autonomous driving. We introduce a novel scene tokenizer for the discretization and reconstruction of occupancy scenes. Furthermore, we build a unified multi-modal vocabulary that involves the occupancy, language, and action modalities. Based on this vocabulary, we adapt an LLM to perform next token / scene prediction and thereby complete multiple tasks. Through extensive experiments on 4D occupancy forecasting, motion planning, and VQA, we demonstrate the multitask effectiveness of OccLLaMA. In the future, we will increase data diversity to further enhance the capabilities of OccLLaMA. We will also explore model quantization and distillation to address the inference latency caused by the large number of parameters.
|
http://arxiv.org/abs/2409.02679v1 | 20240904130734 | On the particle content of MHS theory | [
"Maro Cvitan",
"Predrag Dominis Prester",
"Stefano Giaccari",
"Mateo Paulišić",
"Ivan Vuković"
] | hep-th | [
"hep-th",
"gr-qc"
] |
Preprint number: ZTF-EP-24-08
Department of Physics, Faculty of Science, University of Zagreb, Zagreb, Croatia.
[email protected]
University of Rijeka, Faculty of Mathematics, Rijeka, Croatia
[email protected]
Istituto Nazionale di Ricerca Metrologica, Torino, Italy
[email protected]
University of Rijeka, Faculty of Physics, Rijeka, Croatia
[email protected]
Vienna, Austria
[email protected]
§ ABSTRACT
The Moyal-Higher-Spin (MHS) formalism, involving fields dependent on spacetime and auxiliary coordinates, is an approach to studying higher spin (HS)-like models. To determine the particle content of the MHS model of the Yang-Mills type, we calculate the quartic Casimir operator for on-shell MHS fields, finding it generally non-vanishing, indicative of infinite/continuous-spin degrees of freedom. We propose an on-shell basis for these infinite/continuous-spin states. Additionally, we analyse the content of a massive MHS model.
On the particle content of MHS theory
Ivan Vuković
September 9, 2024
=====================================
§ INTRODUCTION
There are both purely theoretical and phenomenological reasons for the construction of consistent higher spin (spin s>2) quantum field theories (QFTs) in a Minkowski background, which remains an elusive goal. Such theories are expected to contain an infinite tower of higher spin fields/particles, a property which could ensure a better (softer) UV behaviour and, in this way, avoid some of the problems present in standard QFTs (e.g., the Landau pole). If the HS fields are described by totally symmetric Lorentz tensor spacetime fields ϕ_μ_1⋯μ_s(x), the infinite tower can be packed into a field on a 2d-dimensional domain by using an auxiliary space with coordinates u = u_μ, μ=0,…,d-1 transforming as a Lorentz vector, in the following way
ϕ(x,u) = ∑_s=0^∞ϕ^μ_1⋯μ_s(x) u_μ_1⋯ u_μ_s
We refer to fields defined in the 2d-dimensional space spanned by x and u as master fields. Usually, master fields are used just as a formal construct to write equations for free HS spacetime fields in a compact way, in particular without any requirements of convergence of the infinite series in (<ref>). We observe here that requiring convergence in (<ref>) automatically implies constraints on spacetime fields which means that they cannot be independent. However, recently in the literature, there appeared constructions in which master fields are used as fundamental objects in the off-shell descriptions <cit.>. Here we shall be interested in the approaches in which master fields are required to be square-integrable in the auxiliary space
∫ d^d u ϕ(x,u)^† ϕ(x,u) < ∞ ,
an example being given by the Moyal-Higher-Spin (MHS) gauge theories <cit.>. In this case, a Taylor expansion (<ref>) is not much of use as the spacetime fields defined in this way cannot be treated as independent degrees of freedom. Consequently, one cannot read off the particle content of the theory by plugging (<ref>) into the linearised equations of motion, so one has to use different methods to understand how Lorentz and Poincaré groups are represented.
These representations can be studied in general for any dependence on u, as well as in a particular basis in the auxiliary space u.
The Poincaré transformations, in general, act on master fields in the following way
ϕ'_r(x,u) = ∑_s 𝒟(Λ)_r^s ϕ_s(Λ^-1x - ζ , Λ^-1u) ,
where r and s represent a finite number of Lorentz indices and 𝒟(Λ) is some standard finite-dimensional IRREP of the Lorentz group (which for a scalar master field is trivial).
It was noted in <cit.> that one can use an infinite-dimensional representation of the Lorentz group induced by the transformation of the auxiliary coordinate u in (<ref>). It was further noted in <cit.> that the restriction of master field dependence on u to the linear vector space L_2(ℝ^d) (square-integrable functions over the auxiliary space) is both infinite-dimensional and unitary.
The explicit expression of the matrix representation, written in the basis of d-dimensional Hermite functions, was presented in <cit.>.
What remained to be analysed is the perturbative spectrum (i.e. particle content of a free theory) of a master field that is square integrable in the auxiliary space, in terms of IRREP of the Poincaré group by the Wigner construction, and this is what we do in the present paper. To avoid unnecessary complications, we shall demonstrate the construction on the example of the master field gauge potential in the YM theory, both for the massless and massive cases.
We begin by reviewing the MHS formalism and then display several approaches to analysing the theory's spacetime content. The first one, a Taylor expansion in the auxiliary space, is conventional and enables a direct comparison to the standard higher spin models. However, as mentioned, it is not consistent with the requirement of the square integrability of the master fields.
The second approach is an expansion in terms of an orthonormal basis of functions in the auxiliary space (for discrete bases, orthonormality is defined using Kronecker deltas; for continuous bases, using Dirac delta functions) <cit.>.
We employ modes given by products of Hermite functions (momentum independent). Such modes furnish an infinite-dimensional representation of the Lorentz group (<cit.>, <cit.>).
We also employ modes that are momentum-dependent and are solutions to the differential equations posed by the little group generators.
By analysing the polarisation structure of the solutions of the linear equations of motion of the MHSYM model (linearised around the Minkowski vacuum) expanded in terms of Hermite functions, we learn about the supported helicities and obtain an indication that, in general, we can have a non-vanishing value of the quartic Casimir. In addition, by analysing the polarisation structure, now in terms of functions which are solutions to the differential equations posed by the little group generators, we find a complete on-shell (momentum-dependent) basis which enables us to read off the particle content of the solutions. In Appendix <ref>, we analyse the particle content of a massive MHS master field, working directly on the space of one-particle states.
§ MASTER FIELD YANG-MILLS THEORIES
As a working example, let us use the generalisation of Yang-Mills theories defined on the flat Minkowski background and write here just the basic aspects and equations important for the context of the present paper. An explicit example of such a class of theories is the MHS gauge theory developed in <cit.>. While the theory can be defined for an arbitrary number of spacetime dimensions d, we shall focus mostly on the case d=4. The basic object is the gauge potential master field h_a(x,u), where x = {x^μ, μ=0,… d-1} are spacetime coordinates, u = {u_μ, μ=0,… d-1} are (dimensionless) coordinates in the auxiliary space transforming as a Lorentz (covariant) vector, and a = {0,…,d-1} is a frame index. Infinitesimal gauge transformations are given by
δ h_a(x,u) = ∂_a ε(x,u) + O(h) ,
where the gauge parameter ε(x,u) is a generic (infinitesimal) function satisfying some appropriate boundary conditions. Note that we assume here the simplest case of the Maxwell-like theories based on the U(1) internal gauge symmetry. The formalism can be naturally extended to non-commutative internal gauge groups.
However, the structure of the terms linear in the gauge potential depends on the theory in question and is not important for this paper. In the example of the MHS theory, YM algebraic structure is defined by the Moyal product, which introduces a non-commutative structure between the spacetime and the auxiliary space. This structure guarantees not only good classical behaviour but also enables one to formally use standard quantisation methods.
Generalising the Yang-Mills procedure, one defines the master field strength
F_ab(x,u) = ∂^x_a h_b(x,u) - ∂^x_b h_a(x,u) + O(h^2) ,
and the YM action
S[h,ψ] = S_ym[h] + S_matt[h,ψ] ,
where ψ denotes (minimally coupled) matter in the form of master fields and/or ordinary spacetime fields, and
S_ym = - 1/4∫ d^dx d^d u F^ab(x,u) F_ab(x,u) .
The equations of motion for the gauge master field are
□_x h_a - ∂^x_a ∂^x_b h^b + m_h^2 h_a + O(h^2)
= δ S'_matt/δ h^a ,
where S'_matt is part of S_matt that does not include the mass term, possibly generated by the Higgs mechanism. The energy in the linear approximation is given by
E ≈1/2∫ d𝐱∫ d^d u
( ∑_j F_0j(x,u)^2 + ∑_j<k F_jk(x,u)^2 + m_h^2 ( h_0(x,u)^2+ ∑_i h_i(x,u)^2) ) + U_matt ,
where j,k ∈1,…,d-1. From (<ref>) and (<ref>), we can conclude that the theory is classically consistent, at least in the perturbative domain: the vacuum h_a = 0 is stable, and the energy is positive-definite and finite.
Furthermore, there are no problems with ghosts in perturbative expansions. To understand this,
let us first rewrite the MHS gauge field transformation under Poincaré transformations in the following way
h'_a(x,u) = Λ_a^b ( D(Λ) h_b )(Λ^-1x - ζ, u) ,
( D(Λ) h_b )(x, u) = h_b(x, Λ^-1u) .
D(Λ) defines an infinite dimensional linear representation of the Lorentz group on L_2(ℝ^d), the vector space of square-integrable functions over the auxiliary space. Moreover, this representation is unitary with respect to the standard inner product on L_2(ℝ^d)
⟨Ψ | Φ⟩ = ∫ d^d u Ψ(u)^† Φ(u) .
The proof is
∫ d^d u (D(Λ)Ψ(u))^† D(Λ)Φ(u) = ∫ d^d u Ψ^†(uΛ)Φ(uΛ) = ∫ d^d u Ψ^†(u)Φ(u) ,
where we used d^d(uΛ) = d^d u.
As Φ and Ψ are arbitrary elements of L_2(ℝ^d), it follows from (<ref>) that
D(Λ)^† = D(Λ)^-1 .
The unitarity of D(Λ) guarantees that the master gauge symmetry is large enough to deal with all physically unacceptable modes — i.e. (<ref>) is sufficient to remove them. In fact, it was shown <cit.> that the theory can be formally quantised by using the standard methods of YM theory.
The next task is to investigate the spectrum (particle content) of such theories. In what follows, we shall assume that the master fields, as functions of auxiliary coordinates u, are essentially restricted only by the condition of square integrability.
§ SPACETIME FIELDS
§.§ Taylor expansion
In general, when dealing with higher spin fields, it is useful to pack a complete tower of higher spin fields into a single structure by using an auxiliary Lorentz vector as a bookkeeping device (see e.g. <cit.>).
By reversing this logic, it may appear that the master potential should be understood, from the purely spacetime viewpoint, as an infinite collection of HS fields obtained from
h_a(x,u) = ∑_n=0^∞ h_a^(n)μ_1⋯μ_n(x) u_μ_1… u_μ_n ,
where we use a Latin index for the master field and Greek indices for variables of expansion.
The coefficients in the expansion are spacetime fields that are Lorentz tensors of rank n+1, symmetric in their n (Greek) indices, and which by (<ref>) in the massless case satisfy Maxwell-type equations of motion
□ h_a^(n)μ_1⋯μ_n - ∂_a ∂^b h_b^(n)μ_1⋯μ_n + O(h^2) = 0 ,
while from (<ref>), we can deduce that the gauge transformations, obtained from expanding the gauge parameter as in (<ref>), are of the form
δ_ε h_a^(n)μ_1⋯μ_n(x) = ∂_a ε^μ_1⋯μ_n(x) + O(h) .
While there is a priori nothing wrong with the manifestly Lorentz covariant power expansion (<ref>), it cannot by itself be used for the purpose of uncovering the spectrum of physical excitations (particle spectrum) of the theory. This is because, when substituting (<ref>) into the action (<ref>) or the energy (<ref>) and organising terms by the order of u_μ, we encounter divergent integrations over the auxiliary space u of the form ∫ d^d u u_μ_1⋯ u_μ_n at each order n. Therefore, the requirement of square integrability in the auxiliary space forces us to abandon (<ref>) as a useful means.
§.§ Orthogonal functions expansion
An alternative to the Taylor expansion above comes from relaxing the notion of how Lorentz covariance is to be achieved and giving priority to the fact that integrals over the auxiliary space should be finite. We can then use a complete orthonormal set of functions in the auxiliary space {f_r(u)}, indexed by some formal parameter r to expand the master potential as
h_a(x,u) = ∑_r h_a^(r)(x) f_r(u) ,
where
∫ d^d u f_r(u) f_s(u) = δ_rs .
Using such an expansion, one arrives at the purely spacetime off-shell description with the free (quadratic) part of the Lagrangian given by
S_0[h] = - 1/(4 g_ym^2) ∑_r,s∫ d^dx (∂_a h_b^(r) - ∂_b h_a^(r))
η^acη^bdδ_rs(∂_c h_d^(s) - ∂_d h_c^(s)) .
On the linear level, the gauge symmetry acts on spacetime fields
h_a^(r)(x) as
δ_εh_a^(r)(x) = ∂_a ε^(r)(x) + O(h) ,
where ε^(r)(x) are obtained from the master gauge parameter ε(x,u) by expanding as in (<ref>).
One comes to the description in terms of an (infinite) set of Maxwell-like fields. This form shows explicitly the absence of physically non-acceptable modes (ghosts and/or negative energy modes) in the spectrum of the free theory.
The appearance of the Kronecker in (<ref>), however, leads to another effect. The positive definite product of two basis functions in (<ref>) might seem in disagreement with the condition of Lorentz covariance since intuition usually leads us to expect the Minkowski metric on the right-hand side of equations such as (<ref>) if Lorentz covariance is to be achieved. However, as we have shown in the previous section, the Lorentz covariance is still present in the form of a unitary infinite-dimensional representation of the Lorentz group acting on the index r.
One particularly convenient choice for the orthonormal basis of functions is given by the multi-dimensional Hermite functions that we used in <cit.>. We repeat the definition here, with modifications due to the fact that the auxiliary space variables are u_a, not u^a.
H_n(u) are the Hermite polynomials
H_n(u)=(-1)^n e^u^2d^n/d u^n e^-u^2 ,
where the index n can attain arbitrary non-negative integer values. Hermite functions are defined as
f_n(u) = 1/√(2^n n! √(π))e^-u^2/2 H_n(u) .
The multi-dimensional Hermite function that we will use for the expansion in the auxiliary space is defined as
f_n_0⋯ n_d-1(u) = f_n_0(u_0)⋯ f_n_d-1(u_d-1) .
They satisfy the orthonormality condition
∫ du_0⋯ du_d-1 f_n_0⋯ n_d-1(u)f_m_0⋯ m_d-1(u) = δ_n_0^m_0⋯δ_n_d-1^m_d-1 .
The MHS potential h_a(x,u) ≡ h_a(x^b,u_c) is now expanded as
h_a(x,u)=∑_{n}=0^∞ h_a^n_0⋯ n_d-1(x)f_n_0⋯ n_d-1(u) .
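As a quick numerical cross-check of the orthonormality condition (<ref>) (our addition; the d-dimensional statement follows because the Gaussian weight factorises), the overlaps can be evaluated exactly with Gauss-Hermite quadrature in Python:

import numpy as np
from numpy.polynomial.hermite import hermval, hermgauss
from math import factorial, pi, sqrt

x, w = hermgauss(60)   # nodes/weights for the weighted integral of e^{-x^2} g(x)

def H(n, x):           # physicists' Hermite polynomial H_n
    c = np.zeros(n + 1); c[n] = 1.0
    return hermval(x, c)

norm = lambda n: sqrt(2.0**n * factorial(n) * sqrt(pi))
# ∫ f_m f_n du = ∫ e^{-u^2} H_m H_n du / (norm(m) norm(n)); exact for this quadrature
G = np.array([[np.dot(w, H(m, x) * H(n, x)) / (norm(m) * norm(n))
               for n in range(8)] for m in range(8)])
print(np.allclose(G, np.eye(8)))   # True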
Following the transformation property
h^'_a(x^',u^') = Λ_a^bh_b(x,u) ,
we can deduce the rules for Lorentz transformations of the component spacetime fields h_a^n_0⋯ n_d-1(x)
h_a^'(x,u) = Λ_a^bh_b(Λ^-1x,uΛ)
and we can expand both sides of the equation in the Hermite basis
∑_{n}=0^∞ h_a^' n_0⋯ n_d-1(x)f_n_0⋯ n_d-1(u) = Λ_a^b∑_{m}=0^∞ h_b^m_0⋯ m_d-1(Λ^-1x)f_m_0⋯ m_d-1(u Λ) .
Due to (<ref>) we can multiply both sides with f_r_0⋯ r_d-1(u), integrate over the auxiliary space, and conclude
h_a^' r_0⋯ r_d-1(x)= Λ_a^b∑_{m}=0^∞ L^r_0⋯ r_d-1_m_0⋯ m_d-1(Λ) h_b^m_0 ⋯ m_d-1(Λ^-1x) ,
where
L^r_0⋯ r_d-1_m_0⋯ m_d-1(Λ) = ∫ du_0⋯ du_d-1 f_r_0⋯ r_d-1(u)f_m_0⋯ m_d-1(uΛ)
is a representation matrix of the Lorentz group in the space of Hermite functions.
We have explicitly constructed the representation matrices in <cit.>.
Since here, the variables of integration are auxiliary space coordinates with lower Lorentz indices, while in <cit.> we worked with variables with upper Lorentz indices,
(<ref>) differs slightly from the formula (3.11) of <cit.>. We now adapt the representation matrices to the current situation. For simplicity, we restrict to two dimensions. From <cit.> we have
D^m_0m_1_n_0n_1(Λ)=∫ du^0du^1 f_m_0(u^0)f_m_1(u^1)f_n_0((Λ^-1u)^0)f_n_1((Λ^-1u)^1) ,
while explicitly in (<ref>) we have
L^m_0m_1_n_0n_1(Λ)=∫ du_0du_1 f_m_0(u_0)f_m_1(u_1)f_n_0((uΛ)_0)f_n_1((uΛ)_1) .
In the mostly plus signature (η^00=-1, η^ii=1) that we are using, we can re-express
u_0 = - u^0, u_1 = u^1, (uΛ)_0 = - (uΛ)^0, (uΛ)_1 = (uΛ)^1
and further we realise
(uΛ)_μ = u_νΛ^ν_μ, (uΛ)^μ = u_νΛ^νμ = u^νΛ_ν^μ = (Λ^-1)^μ_ν u^ν = (Λ^-1u)^μ .
With the property of Hermite functions f_n(-u) = (-1)^nf_n(u), we can finally relate
L^m_0m_1_n_0n_1(Λ)=(-1)^n_0+m_0D^m_0m_1_n_0n_1(Λ) .
With these conventions, the prefactor (-1)^n_0+m_0 does not depend on the number of spacetime dimensions. The representation matrices D^m_0m_1_n_0n_1(Λ) can be found in <cit.>.
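This relation can be spot-checked numerically by evaluating both integrals on a grid, e.g. for a boost of rapidity χ in d=2; a Python sketch (the rapidity, grid size and sample indices are arbitrary choices):

import numpy as np
from scipy.special import eval_hermite, gammaln

def f(n, u):
    # Hermite functions f_n, normalised as above
    lognorm = 0.5 * (n * np.log(2.0) + gammaln(n + 1) + 0.5 * np.log(np.pi))
    return eval_hermite(n, u) * np.exp(-u**2 / 2.0 - lognorm)

chi = 0.3
c, s = np.cosh(chi), np.sinh(chi)
u = np.linspace(-8.0, 8.0, 401)
du = u[1] - u[0]
U0, U1 = np.meshgrid(u, u, indexing="ij")

def D(m0, m1, n0, n1):
    # integration variables u^0, u^1; (Lambda^-1 u)^0 = c u^0 - s u^1, etc.
    return np.sum(f(m0, U0) * f(m1, U1)
                  * f(n0, c*U0 - s*U1) * f(n1, -s*U0 + c*U1)) * du**2

def L(m0, m1, n0, n1):
    # integration variables u_0, u_1; (u Lambda)_0 = c u_0 + s u_1, etc.
    return np.sum(f(m0, U0) * f(m1, U1)
                  * f(n0, c*U0 + s*U1) * f(n1, s*U0 + c*U1)) * du**2

for idx in [(0, 0, 0, 0), (1, 0, 1, 0), (2, 1, 0, 1), (1, 2, 3, 0)]:
    assert abs(L(*idx) - (-1)**(idx[0] + idx[2]) * D(*idx)) < 1e-8
print("L = (-1)^(n0+m0) D confirmed on sample matrix elements")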
For further convenience, we explicitly write down the generators of the Lorentz group in d=4 in the infinite-dimensional representation over Hermite functions, adapted to the purposes of this paper (here we also make the operators Hermitean so the rotation generators J_i differ from those in <cit.> by a global factor of -i, while the boost generators K_i differ by a global factor of i).
K_1_n_0n_1n_2n_3^m_0m_1m_2m_3 = iδ_-n_0 + n_1+n_2+n_3^-m_0+m_1+m_2+m_3δ_n_2^m_2δ_n_3^m_3(δ^m_1_n_1+1√((n_0+1)(n_1+1)) - δ^m_1_n_1-1√(n_1n_0))
K_2_n_0n_1n_2n_3^m_0m_1m_2m_3 = iδ_-n_0 + n_1+n_2+n_3^-m_0+m_1+m_2+m_3δ_n_1^m_1δ_n_3^m_3(δ^m_2_n_2+1√((n_0+1)(n_2+1)) - δ^m_2_n_2-1√(n_2n_0))
K_3_n_0n_1n_2n_3^m_0m_1m_2m_3 = iδ_-n_0 + n_1+n_2+n_3^-m_0+m_1+m_2+m_3δ_n_1^m_1δ_n_2^m_2(δ^m_3_n_3+1√((n_0+1)(n_3+1)) - δ^m_3_n_3-1√(n_3n_0))
J_1^m_0m_1m_2m_3_n_0n_1n_2n_3 = iδ^-m_0+m_1+m_2+m_3_-n_0+n_1+n_2+n_3δ^m_1_n_1δ^m_0_n_0(δ^m_2_n_2+1√((n_2+1)n_3)-δ^m_2_n_2-1√(n_2(n_3+1)))
J_2^m_0m_1m_2m_3_n_0n_1n_2n_3 = iδ^-m_0+m_1+m_2+m_3_-n_0+n_1+n_2+n_3δ^m_2_n_2δ^m_0_n_0(δ^m_3_n_3+1√((n_3+1)n_1)-δ^m_3_n_3-1√(n_3(n_1+1)))
J_3^m_0m_1m_2m_3_n_0n_1n_2n_3 = iδ^-m_0+m_1+m_2+m_3_-n_0+n_1+n_2+n_3δ^m_3_n_3δ^m_0_n_0(δ^m_1_n_1+1√((n_1+1)n_2)-δ^m_1_n_1-1√(n_1(n_2+1)))
§.§ Linear solutions and helicity
In this subsection, we focus on the particular number of dimensions, i.e. d=4, and find the helicities of the plane wave solutions for the massless MHSYM model. We can use the expansion (<ref>) and insert it into linearised EoM obtained from (<ref>). The component fields in the expansion satisfy
□ h_a^n_0n_1n_2n_3(x) - ∂_a ∂^b h_b^n_0n_1n_2n_3(x) = 0
and they enjoy a linearised gauge symmetry of the form
δ_ε h_a^n_0n_1n_2n_3(x) = ∂_a ε^n_0n_1n_2n_3(x) .
To find out about the helicity of the field, we can write down a plane wave solution to the EoM (<ref>), use the freedom available through (<ref>) to fix the gauge and choose a direction of propagation (conventionally, we choose the z-direction). Such a solution is given by
h^a n_0n_1n_2n_3_±(x) = ϵ_(±)^a p^n_0n_1n_2n_3e^ikx ,
where k^2=0, meaning that the field is massless, and where
ϵ_(±)^a = 1/√(2)[ 0 1 ± i 0 ],
and p^n_0n_1n_2n_3 is an a priori unconstrained polarisation factor in the infinite-dimensional unitary representation of the Lorentz group.
The helicity of a plane wave can be calculated as the eigenvalue of the rotation generator around the propagation axis. As we have followed the convention to choose the z-axis as the axis of propagation, we want to find the eigenvalue of the rotation operator J_3. When acting on (<ref>), which is in a mixed representation, each generator will have two parts; one belonging to the finite-dimensional representation (indices a,b), and one belonging to the infinite-dimensional representation (indices m_0,...,m_3 and n_0,...,n_3 in case of the Hermite expansion in d=4), i.e.
(J_3)^m_0m_1m_2m_3_n_0n_1n_2n_3^a_b = (J_3)^m_0m_1m_2m_3_n_0n_1n_2n_3δ^a_b + δ^m_0m_1m_2m_3_n_0n_1n_2n_3(J_3)^a_b ,
where (J_3)^a_b is in the fundamental (vector) representation of the Lorentz group, given explicitly by
J_3=
[ 0 0 0 0; 0 0 -i 0; 0 i 0 0; 0 0 0 0; ] .
We have found the eigenvectors of (J_3)^m_0m_1m_2m_3_n_0n_1n_2n_3 in <cit.>. They are given by
p^n_0n_1n_2n_3 = d^n_0n_3C^n_1n_2_(r,λ) ,
with the coefficients d^n_0n_3 arbitrary and C^n_1n_2_(r,λ) given by
C_(r,λ)^n_1 n_2 = δ^n_1+n_2_r i^k-n_1-1 d^r/2_k-r/2, n_1-r/2(π/2) ,
where d^l_m,n(β) are the Wigner d-functions, λ = (2k - r) with r=n_1+n_2 and k having possible values k =0,1,...,r .
These coefficients, when contracted with the basis vectors, produce the expected azimuthal dependence in the auxiliary space, C_(r,λ)^n_1, n_2 f_n_1(u_x)f_n_2(u_y) ∼ e^i λ u_ϕ, where summation convention for n_1 and n_2 is assumed.
It is then straightforward to see that
(J_3)^m_0m_1m_2m_3_n_0n_1n_2n_3^a_b ϵ_(±)^b p^n_0n_1n_2n_3 = (± 1 + λ) ϵ_(±)^a p^m_0m_1m_2m_3 ,
where the polarisation coefficients of definite (r,λ) are
p^n_0n_1n_2n_3 = d^n_0n_3C^n_1n_2_(r,λ) .
Altogether, this means that the plane waves (<ref>) with p^n_0n_1n_2n_3 given by (<ref>) for some given r=0,1,… and λ=-r, -r+2, …, r-2, r can carry helicity (<ref>) in the range -r-1, -r+1, …, r-1, r+1.
A single value of helicity can appear in infinitely many polarisation factors (<ref>), e.g. helicity 3 can appear for each even value of r ≥ 2. We can also see that helicity is not a Lorentz invariant quantity for solutions such as (<ref>). For example, if we boost the solution in the x direction, n_0 and n_1 indices will get mixed, and the final expression will no longer have a well-defined helicity, i.e., it will be a superposition of terms with various values of λ. This is a characteristic of continuous spin particles, as emphasised in <cit.>.
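Since J_3 leaves n_0 and n_3 untouched and preserves r = n_1 + n_2, its restriction to a fixed r sector is an exact (r+1)×(r+1) matrix with no truncation error, and the values of λ above can be confirmed by direct diagonalisation; a Python sketch (assuming numpy):

import numpy as np

def j3_sector(r):
    # (J_3)^{m_1, r-m_1}_{n_1, r-n_1}, read off the explicit generator above
    M = np.zeros((r + 1, r + 1), dtype=complex)
    for m1 in range(r + 1):
        m2 = r - m1
        if m1 >= 1:             # couples p^{(m1-1)(m2+1)} into component m1
            M[m1, m1 - 1] = 1j * np.sqrt(m1 * (m2 + 1))
        if m1 <= r - 1:         # couples p^{(m1+1)(m2-1)} into component m1
            M[m1, m1 + 1] = -1j * np.sqrt((m1 + 1) * m2)
    return M

for r in range(5):
    lam = np.sort(np.linalg.eigvalsh(j3_sector(r)))
    print(r, np.round(lam, 10))   # -r, -r+2, ..., r-2, r, as stated above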
Another approach to learn about supported helicities already noted in <cit.> involves using spherical harmonics in the auxiliary space, which are diagonal in the rotation generator around the axis of propagation. The axis of propagation 𝐤̂ in spacetime defines the preferred axis 𝐮̂_𝐤̂≡𝐤̂ in the auxiliary space, as well as corresponding spherical coordinates u_r, u_𝐤̂,θ and u_𝐤̂,ϕ (where 𝐮̂_𝐤̂ is tangent to the direction where u_𝐤̂,θ=0).
A complete orthonormal basis adapted to this choice is then given by
g_n_0 n l m(u,𝐤̂) = f_n_0(u_0) F_n(u_r) Y_l^m(u_𝐤̂,θ,u_𝐤̂,ϕ),
where F_n are Laguerre functions, Y_l^m are spherical harmonics, and n_0 = 0, 1, 2, …, n = 0, 1, 2, …, l = 0, 1, 2, …, m = -l, -l+1, …, l. The plane wave solutions for the MHS potential can then be expanded as
ϵ_σ n_0 n l m(𝐤) e^i k · x g_n_0 n l m(u,𝐤̂) , 𝐤·ϵ_σ n_0 n l m(𝐤) = 0 , k^2 = 0,
with σ = ±1. Helicity is σ + m, showing an infinite number of degrees of freedom for each helicity value.
§ WIGNER'S CLASSIFICATION
The field we worked with above is massless, and to classify the possible excitations further, we need to find the value of the quartic Casimir operator of the Poincaré group. Here we would like to review the basics of Wigner's method for classifying elementary particles and highlight the relationship to the plane wave solutions of the linear equations of motion. This exposition closely follows the first volume of Weinberg's Quantum Theory of Fields <cit.> while additional details can be found in <cit.>. Wigner's classification of elementary particles <cit.> is a classification of a spacetime isometry group (in our case, the Poincaré group) represented on the space of one particle states.
In d=4, the Poincaré group has a quadratic and a quartic Casimir operator
C_2 = - P^μ P_μ=-P^2, C_4 = W^μ W_μ = W^2 ,
where P^μ are the translation generators, the Pauli-Lubanski vector W^μ is defined as
W_ρ = 1/2ϵ_μνρκM^μνP^κ
and the Lorentz generators are denoted by M^μν.
It is convenient to label the single-particle states with the eigenvalues of the Casimir operator. Further, since momentum operators form an abelian subgroup, we work with their eigenvectors and label them as
|p^2, w^2, p^μ,σ⟩
such that
P^2|p^2, w^2, p^μ,σ⟩ = p^2|p^2, w^2, p^μ,σ⟩, W^2 |p^2, w^2, p^μ,σ⟩ = w^2 |p^2, w^2, p^μ,σ⟩
and
P^μ|p^2, w^2, p^μ,σ⟩ = p^μ|p^2, w^2, p^μ,σ⟩
with σ labelling other degrees of freedom which are to be determined.
While translations act on the basis vectors as
U(Δ x)|p^2, w^2, p^μ,σ⟩ = e^-ip·Δ x|p^2, w^2, p^μ,σ⟩ ,
it can be shown that homogeneous Lorentz transformations act as
U(Λ)|p^2, w^2, p^μ,σ⟩ = 𝒩∑_σ'𝒟_σ'σ(W(Λ,p))|p^2, w^2, Λ^μ_ν p^ν,σ'⟩ ,
where 𝒩=𝒩(p^0) is a normalisation factor and W(Λ,p) = S^-1(Λ p)Λ S(p) is an element of the little group for a particular standard momentum with S(p) defined to satisfy p^μ = S^μ_ν k^ν. Matrices 𝒟_σ'σ(W) furnish an irreducible representation of the little group, and their construction is sufficient to properly characterise the one-particle state. Within a choice of standard momentum, the quartic Casimir of the Poincaré group is equal to the Casimir operator of the little group. We will be especially interested in the massless case, so we will focus in more detail on the group ISO(2).
If we choose the momentum to be standard p^μ = k^μ = (ω, 0,0,ω), we can explicitly find the components of the Pauli-Lubanski vector, which are the generators of the little group for the case of massless particles
W^μ = ω (J_3, J_1-K_2, J_2+K_1, J_3) ,
where J_i are the generators of rotations and K_i the generators of boosts. It is convenient to name the combinations
A =ω( J_1-K_2), B = ω(J_2+K_1) .
It is easy to check that A,B and J_3 span the Lie Algebra 𝔦𝔰𝔬(2)
[A, B] = 0, [J_3, A] = i B, [J_3, B] = -iA .
The quartic Casimir in this choice of standard momentum is then given by
W_μ W^μ≡ W^2 = ω^2 (J_1-K_2)^2 + ω^2(J_2+K_1)^2
= A^2 + B^2 = w^2 .
The faithful irreducible unitary representations of the little group ISO(2), which have a non-vanishing value of the Casimir operator W^2, are necessarily infinite dimensional <cit.>. If written in a basis diagonal in the rotation operator around the standard momentum, it can be seen that each irreducible representation contains an infinite tower of helicity states mixing under Lorentz transformations. For that reason, such representations are usually named “infinite-spin”. A different basis choice is possible, which motivates a different name — “continuous spins”. This class of representations was originally considered by Wigner to be unsuitable for physical use since the infinite tower of helicities would have to correspond to an infinite heat capacity. However, in recent years, there has been a revived interest in this class of particles and in analysing their kinematic and dynamical aspects (<cit.>).
There is also a possibility of a non-faithful representation of the little group ISO(2) where the operators A,B act trivially. In this case, W^2 gives a vanishing value, and the little group becomes isomorphic to SO(2). The representations are one-dimensional, with the only non-trivial operator being the rotation around the standard momentum. The eigenvectors of this rotation are the ordinary helicity states (due to eigenvalues of A and B being zero, helicity is now Lorentz invariant) describing particles corresponding to massless fields of a fixed spin such as the Maxwell field, linear Einstein gravity, higher spin fields of the Fronsdal type, etc.
There is an important relationship between the matrices 𝒟_σ'σ(W), which, as we have seen, act on the one-particle states, and the Lorentz transformation matrices we use to express quantum field components in different inertial frames. From the creation and annihilation operators, we can build a quantum field
h^r(x) = ∑_σ∫d^3 p/(2π)^3/2√(2ω_p)(u^r(p,σ)a(p,σ)e^-ipx + v^r(p,σ)a^†(p,σ)e^ipx) ,
where r stands for any set of Lorentz indices (e.g. for the MHS model, this includes both a and μ indices in case of the Taylor expansion and a and n indices in case of the Hermite expansion).
Under a Lorentz transformation, it transforms as
U(Λ)h^r(x)U^-1(Λ) = ∑_s D(Λ^-1)^rs h^s(Λ x) ,
where D(Λ) is a representation of the Lorentz group (finite or infinite-dimensional) on the space of fields. This equation can be seen as a compatibility condition <cit.> between the infinite-dimensional unitary Fock-space representation on the one-particle states and the representation of the homogeneous Lorentz group on the space of fields. In a straightforward manner, it can be brought down to compatibility equations for the polarisation functions in the standard momentum 𝐤
on the level of the Lie algebra of the little group
∑_σ'u^r(𝐤,σ')𝒥_σ'σ=∑_s J^rs u^s(𝐤,σ)
∑_σ'v^r(𝐤,σ')𝒥^*_σ'σ =-∑_s J^rsv^s(𝐤,σ) ,
which follows from the expansion 𝒟_σ'σ≈δ_σσ' + iθ𝒥_σσ', and L^rs≈δ^rs + iθ J^rs.
The polarisation functions in the standard momentum thus carry the representation of the little group, as do the one-particle states (<ref>), and also contain information about the quartic Casimir operator. By classifying polarisation functions of a certain field, through solving the eigensystem
∑_s D(W^2)^rs u^s(𝐤,σ) = w^2 u^r(𝐤,σ) ,
where w^2 on the right-hand side is constant due to (<ref>), we can learn about supported particle types, i.e. values of the Casimir operator W^2, associated with that particular field.
§ THE QUARTIC CASIMIR IN THE HERMITE EXPANSION
Our first goal is to build an explicit expression for (<ref>) for the case of the plane wave solution (<ref>) and then attempt to find possible eigenvalues. For the sake of the clarity of argument, we will demonstrate the procedure on the Maxwell field before following these steps for the MHSYM component field.
In the case of electrodynamics, a plane-wave solution of the equations
□ A^μ - ∂^μ∂· A = 0 ,
with momentum oriented in the z direction k^μ = (ω, 0, 0, ω) is given by A^μ(x) =ϵ_±^μ e^ikx where ϵ_±^μ = (0,1,± i, 0).
Since A^μ(x) is a Lorentz vector, we can use the vector representation of the Lorentz generators (M^μν)^ρ_κ = i(η^μρδ^ν_κ - η^νρδ^μ_κ).
The quartic Casimir element (<ref>) is then explicitly given by
W^2 = ω^2
[ -2 0 0 2; 0 0 0 0; 0 0 0 0; -2 0 0 2; ] ,
so it is readily visible through applying (<ref>) that there is a single eigenvalue of the quartic Casimir, and it is vanishing (W^2)^μ_ν A^ν = 0 .
§.§ Quartic Casimir of the on-shell MHS field
As stated, the MHS field is in a mixed representation of the Lorentz group - a direct product of the finite-dimensional vector representation and the infinite-dimensional unitary representation. As in (<ref>), the generators will be a direct sum of two parts: one belonging to the finite-dimensional representation and one belonging to the infinite-dimensional representation, e.g.
(J_1)^m_0m_1m_2m_3_n_0n_1n_2n_3^a_b = (J_1)^m_0m_1m_2m_3_n_0n_1n_2n_3δ^a_b + δ^m_0m_1m_2m_3_n_0n_1n_2n_3(J_1)^a_b .
Using a capital letter 𝐍 as shorthand for the tuple {n_0n_1n_2n_3}, this becomes
(J_1)^M_N^a_b = (J_1)^M_Nδ^a_b + δ^M_N(J_1)^a_b .
The Casimir element (<ref>) is then given by (repeated indices summed over)
(W^2)^M_N^a_c = ω^2(J_1^a_b J_1^b_c +J_2^a_b J_2^b_c+K_1^a_b K_1^b_c+K_2^a_b K_2^b_c-2J_1^a_bK_2^b_c +2K_1^a_b J_2^b_c)δ^M_N
+ ω^2(J_1^M_R J_1^R_N +J_2^M_R J_2^R_N + K_1^M_R K_1^R_N + K_2^M_R K_2^R_N- 2 J_1^M_R K_2^R_N + 2 K_1^M_R J_2^R_N)δ^a_c
+ ω^2( 2J_1^a_cJ_1^M_N + 2J_2^a_cJ_2^M_N + 2K_1^a_cK_1^M_N + 2K_2^a_cK_2^M_N
- 2K_2^a_cJ_1^M_N - 2K_2^M_NJ_1^a_c + 2K_1^a_cJ_2^M_N + 2K_1^M_NJ_2^a_c) .
The first bracket contains the finite-dimensional vector representation of the W^2, which is multiplied by δ^M_N. As in the case of a finite-dimensional massless vector field, this will give zero when acting on the polarisation vector ϵ^a found in (<ref>).
The mixed contributions to the Casimir can be rewritten as
(W^2_mixed)^M_N^a_c
= 2A^a_cA^M_N + 2B^a_cB^M_N ,
where A and B were defined in (<ref>). When A^a_c or B^a_c act on the polarisation vector ϵ^a, the result will be proportional to the momentum, e.g. for A^a_c
i(
[ 0 0 -1 0; 0 0 0 0; -1 0 0 1; 0 0 -1 0; ]) 1/√(2)[ 0; 1; ± i; 0 ] = ± 1/√(2)[ 1; 0; 0; 1 ]∝ p^a ,
i.e. a pure gauge contribution in the finite-dimensional sector.
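This is easily checked by machine as well; a minimal numpy verification of the two statements above (the matrix annihilates the standard momentum and maps the polarisation vectors onto it):

import numpy as np

A = 1j * np.array([[ 0, 0, -1, 0],
                   [ 0, 0,  0, 0],
                   [-1, 0,  0, 1],
                   [ 0, 0, -1, 0]])
k = np.array([1, 0, 0, 1])                      # k^a / omega
for sign in (+1, -1):
    eps = np.array([0, 1, sign * 1j, 0]) / np.sqrt(2)
    assert np.allclose(A @ k, 0)                # A^a_b k^b = 0
    assert np.allclose(A @ eps, sign * k / np.sqrt(2))   # proportional to k^a
print("A k = 0 and A eps_(+/-) = (+/-) k / sqrt(2)")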
A non-trivial eigenvalue of the quartic Casimir for the MHS field can thus come only from the second line of (<ref>), which contains the infinite-dimensional part
(W^2_inf)^M_Nδ^a_c = ω^2δ^a_c (J_1^M_R J_1^R_N + J_2^M_R J_2^R_N +K_1^M_R K_1^R_N + K_2^M_R K_2^R_N - 2K_2^M_R J_1^R_N + 2J_2^M_R K_1^R_N) .
We now use the explicit expressions for the infinite-dimensional generators (<ref>-<ref>) and arrive at the result written out without the use of the compact notation.
(W^2_inf)^m_0m_1m_2m_3_n_0n_1n_2n_3 = ω^2δ_-n_0 + n_1 + n_2 + n_3^-m_0 + m_1 + m_2 + m_3×
( 2δ^m_0_n_0δ^m_1_n_1δ^m_2_n_2δ^m_3_n_3(1+n_0+n_3)(1+n_1+n_2)
- δ^m_0_n_0δ^m_1_n_1δ^m_2_n_2+2δ^m_3_n_3-2√((n_2+1)(n_2+2)(n_3-1)n_3)
- δ^m_0_n_0δ^m_1_n_1δ^m_2_n_2-2δ^m_3_n_3+2√((n_2-1)n_2(n_3+2)(n_3+1))
- δ^m_0_n_0δ^m_1_n_1-2δ^m_2_n_2δ^m_3_n_3+2√((n_1-1)n_1(n_3+1)(n_3+2))
- δ^m_0_n_0δ^m_1_n_1+2δ^m_2_n_2δ^m_3_n_3-2√((n_1+1)(n_1+2)(n_3-1)n_3)
- δ^m_0_n_0+2δ^m_1_n_1+2δ^m_2_n_2δ^m_3_n_3√((n_0+1)(n_0+2)(n_1+1)(n_1+2))
- δ^m_0_n_0-2δ^m_1_n_1-2δ^m_2_n_2δ^m_3_n_3√((n_0-1)n_0(n_1-1)n_1)
- δ^m_0_n_0+2δ^m_1_n_1δ^m_2_n_2+2δ^m_3_n_3√((n_0+1)(n_0+2)(n_2+2)(n_2+1))
- δ^m_0_n_0-2δ^m_1_n_1δ^m_2_n_2-2δ^m_3_n_3√((n_0-1)n_0(n_2-1)n_2)
+ 2δ^m_0_n_0+1δ^m_1_n_1δ^m_2_n_2+2δ^m_3_n_3-1√((n_0+1)(n_2+1)(n_2+2)n_3)
- 2δ^m_0_n_0+1δ^m_1_n_1δ^m_2_n_2δ^m_3_n_3+1(√((n_0+1)n_2n_2(n_3+1)) + √((n_0+1)(n_1+1)(n_1+1)(n_3+1)))
- 2δ^m_0_n_0-1δ^m_1_n_1δ^m_2_n_2δ^m_3_n_3-1(√(n_0(n_2+1)(n_2+1)n_3)+√(n_0n_1n_1n_3))
+ 2δ^m_0_n_0-1δ^m_1_n_1δ^m_2_n_2-2δ^m_3_n_3+1√(n_0(n_2-1)n_2(n_3+1))
+ 2δ^m_0_n_0+1δ^m_1_n_1+2δ^m_2_n_2δ^m_3_n_3-1√((n_0+1)(n_1+1)(n_1+2)n_3)
+ 2δ^m_0_n_0-1δ^m_1_n_1-2δ^m_2_n_2δ^m_3_n_3+1√(n_0(n_1-1)n_1(n_3+1))) .
When acting on the field polarisation factors p^n_0n_1n_2n_3 in (<ref>), we get
(W^2_inf) ^m_0m_1m_2m_3_n_0n_1n_2n_3 p^n_0n_1n_2n_3=
2 p^m_0m_1m_2m_3[(1+m_0+m_3)(1+m_1+m_2)]
- p^m_0m_1(m_2-2)(m_3+2)√((m_2-1)m_2(m_3+1)(m_3+2))
- p^m_0m_1(m_2+2)(m_3-2)√((m_2+1)(m_2+2)m_3(m_3-1))
- p^m_0(m_1+2)m_2(m_3-2)√((m_1+1)(m_1+2)(m_3-1)m_3)
- p^m_0(m_1-2)m_2(m_3+2)√((m_1-1)m_1(m_3+1)(m_3+2))
- p^(m_0-2)(m_1-2)m_2m_3√((m_0-1)m_0(m_1-1)m_1)
- p^(m_0+2)(m_1+2)m_2m_3√((m_0+1)(m_0+2)(m_1+1)(m_1+2))
- p^(m_0-2)m_1(m_2-2)m_3√((m_0-1)m_0m_2(m_2-1))
- p^(m_0+2)m_1(m_2+2)m_3√((m_0+1)(m_0+2)(m_2+1)(m_2+2))
+2 p^(m_0-1)m_1(m_2-2)(m_3+1)√(m_0(m_2-1)m_2(m_3+1))
-2 p^(m_0-1)m_1m_2(m_3-1)√(m_0m_2m_2m_3)
-2 p^(m_0+1)m_1m_2(m_3+1)√((m_0+1)(m_2+1)(m_2+1)(m_3+1))
+2 p^(m_0+1)m_1(m_2+2)(m_3-1)√((m_0+1)(m_2+1)(m_2+2)m_3)
-2 p^(m_0-1)m_1m_2(m_3-1)√(m_0(m_1+1)(m_1+1)m_3)
+2 p^(m_0-1)(m_1-2)m_2(m_3+1)√(m_0(m_1-1)m_1(m_3+1))
+2 p^(m_0+1)(m_1+2)m_2(m_3-1)√((m_0+1)(m_1+1)(m_1+2)m_3)
-2 p^(m_0+1)m_1m_2(m_3+1)√((m_0+1)m_1m_1(m_3+1)) .
§.§ Casimir eigenvalue problem
To learn about the particle spectrum of our theory, following (<ref>), we should solve the eigensystem
(W^2_inf)^m_0m_1m_2m_3_n_0n_1n_2n_3 p^n_0n_1n_2n_3 = w^2 p^m_0m_1m_2m_3 .
Even before explicitly trying to find eigenvectors and eigenvalues in (<ref>), we can conclude from (<ref>) that there will exist non-trivial states, i.e. the expression (<ref>) shows that a polarisation factor p^n_0n_1n_2n_3 used in (<ref>) will in general not give a vanishing eigenvalue, through which we can confirm that the MHS formalism supports a description of the infinite spin particles.
One way of tackling the eigenvalue problem is by computer-assisted iterative solving, which could give us a hint for an appropriate ansatz. It can be seen by the structure of the Casimir that any eigenvector should necessarily have an infinite number of components
(terms such as δ^m_0_n_0-1δ^m_1_n_1δ^m_2_n_2δ^m_3_n_3 - 1 will always simultaneously raise the values of the 0th and 3rd index; thus a closed solution cannot have a finite number of terms),
so we could only hope for hints coming from a truncated calculation. The Casimir operator can be rewritten in a basis of eigenvectors of J_3, which we found in <cit.>, but the complexity of the problem remains. So far, we are left with finding educated guesses, and one of them comes from the “massless limit” of eigenvectors of J⃗^2.
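For illustration, here is a minimal sketch of such a truncated, computer-assisted check (Python; ω is set to 1). It transcribes the component action of W^2_inf written out above and applies it to the trial vector p^n_0n_1n_2n_3 = δ^n_0,n_3δ^n_1_0δ^n_2_0, which will reappear below: all components away from the truncation edge are annihilated exactly, while the edge component records the missing infinite tail.

import math

def w2_component(p, m):
    # one component of (W^2_inf p)^{m0 m1 m2 m3}; p is a dict keyed by tuples
    m0, m1, m2, m3 = m
    def g(*i):
        return p.get(i, 0.0) if min(i) >= 0 else 0.0
    rt = math.sqrt
    s  = 2*g(m0, m1, m2, m3)*(1 + m0 + m3)*(1 + m1 + m2)
    s -=   g(m0, m1, m2-2, m3+2)*rt((m2-1)*m2*(m3+1)*(m3+2))
    s -=   g(m0, m1, m2+2, m3-2)*rt((m2+1)*(m2+2)*m3*(m3-1))
    s -=   g(m0, m1+2, m2, m3-2)*rt((m1+1)*(m1+2)*(m3-1)*m3)
    s -=   g(m0, m1-2, m2, m3+2)*rt((m1-1)*m1*(m3+1)*(m3+2))
    s -=   g(m0-2, m1-2, m2, m3)*rt((m0-1)*m0*(m1-1)*m1)
    s -=   g(m0+2, m1+2, m2, m3)*rt((m0+1)*(m0+2)*(m1+1)*(m1+2))
    s -=   g(m0-2, m1, m2-2, m3)*rt((m0-1)*m0*m2*(m2-1))
    s -=   g(m0+2, m1, m2+2, m3)*rt((m0+1)*(m0+2)*(m2+1)*(m2+2))
    s += 2*g(m0-1, m1, m2-2, m3+1)*rt(m0*(m2-1)*m2*(m3+1))
    s -= 2*g(m0-1, m1, m2, m3-1)*rt(m0*m2*m2*m3)
    s -= 2*g(m0+1, m1, m2, m3+1)*rt((m0+1)*(m2+1)*(m2+1)*(m3+1))
    s += 2*g(m0+1, m1, m2+2, m3-1)*rt((m0+1)*(m2+1)*(m2+2)*m3)
    s -= 2*g(m0-1, m1, m2, m3-1)*rt(m0*(m1+1)*(m1+1)*m3)
    s += 2*g(m0-1, m1-2, m2, m3+1)*rt(m0*(m1-1)*m1*(m3+1))
    s += 2*g(m0+1, m1+2, m2, m3-1)*rt((m0+1)*(m1+1)*(m1+2)*m3)
    s -= 2*g(m0+1, m1, m2, m3+1)*rt((m0+1)*m1*m1*(m3+1))
    return s

N = 8                                       # truncation in n_0 = n_3
p = {(n, 0, 0, n): 1.0 for n in range(N + 1)}
print([w2_component(p, (m, 0, 0, m)) for m in range(N + 1)])
# zeros for m < N; only the truncation edge m = N survives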
§.§.§ Massless limit of massive states
In the case of massive particles, the little group is SO(3), and the Casimir operator is simply J⃗^2. An appropriately performed Inönü-Wigner contraction of a representation of SO(3) can give us a representation of ISO(2), which is the little group in the case of massless particles. The limiting procedure entails the limits m→0,v→ 1 while keeping fixed mγ = m/√(1-v^2) = ω. In the representation by Hermite functions, we have
(J⃗)^2 ^m_0m_1m_2m_3_n_0n_1n_2n_3 = δ^-m_0+m_1+m_2+m_3_-n_0+n_1+n_2+n_3δ^m_0_n_0×
(2 δ^m_1_n_1δ^m_2_n_2δ^m_3_n_3(n_1+n_2+n_3 + n_1n_2 + n_2n_3 + n_3n_1)
- δ^m_1_n_1δ^m_2_n_2-2δ^m_3_n_3+2√((n_2-1)n_2(n_3+1)(n_3+2))
- δ^m_1_n_1δ^m_2_n_2+2δ^m_3_n_3-2√((n_2+1)(n_2+2)(n_3-1)n_3)
- δ^m_1_n_1+2δ^m_2_n_2δ^m_3_n_3-2√((n_3-1)n_3(n_1+1)(n_1+2))
- δ^m_1_n_1-2δ^m_2_n_2δ^m_3_n_3+2√((n_3+1)(n_3+2)(n_1-1)n_1)
- δ^m_1_n_1-2δ^m_2_n_2+2δ^m_3_n_3√((n_1-1)n_1(n_2+1)(n_2+2))
- δ^m_1_n_1+2δ^m_2_n_2-2δ^m_3_n_3√((n_1+1)(n_1+2)(n_2-1)n_2) ) .
The simplest simultaneous eigenvector of J⃗^2 (<ref>) and J_3 (<ref>) in the representation over Hermite functions is
Φ_n=0,s=0,λ=0(u) =δ^n_0_0 δ^n_1_0δ^n_2_0δ^n_3_0f_n_0n_1n_2n_3(u) ,
where n=n_1+n_2+n_3 and s corresponds to the eigenvalue of J⃗^2=s(s+1) while λ is an eigenvalue of J_3. We can boost (<ref>) with velocity v in the z direction to prepare it for the massless limit.
The transformation matrices for a finite boost are (see <cit.> and (<ref>))
L_n_0n_1n_2n_3^m_0m_1m_2m_3 (v) = (-1)^m_0+m_3√(m_3!n_0!/n_3!m_0!)δ_-n_0+n_1+n_2+n_3^-m_0 + m_1+ m_2 + m_3δ^m_1_n_1δ^m_2_n_2×
∑_j=0^m_0\binom{m_0}{j}\binom{n_3}{m_3-j}(-1)^j√(1-v^2)^m_3+m_0+1-2jv^2j-m_3+n_3 .
Boosting the chosen eigenvector we get
∑_n_0,n_1,n_2,n_3=0^∞ L_n_0n_1n_2n_3^m_0m_1m_2m_3(v)·δ^n_0_0 δ^n_1_0δ^n_2_0δ^n_3_0=(-1)^m_0δ^m_1_0δ^m_2_0δ^m_0, m_3√(1-v^2)(-v)^m_3 .
Since mγ→ω while v→ 1, we can divide this result by m to obtain a useful result in the limiting procedure
p^m_0m_1m_2m_3→δ^m_1_0δ^m_2_0δ^m_0,m_3 .
A more general case could be an eigenvector of J_3 and J^2 of the form
Φ_n=n,s=n,λ=n(u) = z^(n_0)C^n_1n_2_(n,n)δ^n_3_0 f_n_0n_1n_2n_3(u) .
In the sector n_1+n_2+n_3 = n where s=n and λ = n, there is only one such vector, and the factor C^n_1n_2_(n,n) is given in (<ref>). The index n_3 has to be equal to 0, while n_0 is arbitrary (or fixed by a choice of N=-n_0+n), meaning that the factor z^(n_0) must have the form
z^(n_0) = const. ·δ^n_0_n-N .
We boost (<ref>) in the z direction to prepare it for the massless limit
L_n_0n_1n_2n_3^m_0m_1m_2m_3(v)z^(n_0)C^n_1n_2_n,nδ^n_3_0 = (-1)^m_0√(m_0!/m_3!(m_0-m_3)!)
z^m_0-m_3C^m_1m_2_(m_1+m_2,m_1+m_2)√(1-v^2)^m_0-m_3+1(-v)^m_3 .
§.§.§ Eigenvector candidates
The result in (<ref>) motivates an ansatz of the form
p^n_0n_1n_2n_3 = δ^n_0,n_3C^n_1n_2_r,rc^(n_3) .
Upon inserting into (<ref>) we obtain two independent equations for c^(m_3)
c^(m_3)(1+2m_3)-c^(m_3-1)m_3-c^(m_3+1)(m_3+1) = -w^2/2(1+r) c^(m_3) ,
c^(m_3+2)+c^(m_3)-2c^(m_3+1)=0 .
The solution is given by w^2 = 0 and
c^(m_3)= c^(0) .
This gives the polarisation factor
p^n_0n_1n_2n_3 = δ^n_0,n_3C^n_1n_2_(r,r) ,
which is simultaneously an eigenvector of J_3 with helicity λ = r.
The norm of this solution is not finite, and we can explicitly see that in the sum over Hermite functions in the auxiliary space, the eigenvector will contain a delta function.
As an example with r=0, from the completeness identity of Hermite functions, we find
∑_n_0,n_1,n_2,n_3=0^∞δ^n_0,n_3δ^n_1_0δ^n_2_0f_n_0(u_0)f_n_1(u_1)f_n_2(u_2)f_n_3(u_3) = δ(u_0 - u_3)e^-(u_1^2+u_2^2)/2/√(π) .
Through this approach, we were able to obtain a solution to the equation (<ref>) with a vanishing eigenvalue of the quartic Casimir W^2 and with an arbitrary integer helicity. This would, by definition, correspond to an ordinary higher-spin massless field. Observe that this solution is not square integrable, which means that if it is present in the physical spectrum, it must belong to a continuous spectrum. To see this, a different approach will be used for the complete characterisation of the particle spectrum, which we show in the next section.
§ THE QUARTIC CASIMIR FOR AN ON-SHELL MASTER FIELD
Consider the linearised equations of motion one obtains if the integration over the auxiliary space is not performed prior to extremising the action
□ h_a(x,u) - ∂_a ∂^b h_b(x,u) = 0 .
As in the previous section, we can fix the gauge to ∂^a h_a(x,u) = 0, and consider solutions representing plane waves directed along the z-axis
h_a(x,u) = ϵ_a Φ(u) e^ikx ,
where k^a = (ω,0,0,ω) and ϵ_a = 1/√(2)(0,1,± i, 0).
Now, let's consider an active Lorentz transformation following the transformation properties (<ref>)
h'_a(x^c,u_d) = Λ_a^b h_b((Λ^-1x)^c,(u·Λ)_d) .
If we expand the Lorentz transformation matrix up to the first-order
Λ^a_b ≈δ^a_b + i ψ G^a_b ,
with ψ the expansion parameter
(the parameter can be an angle if Λ is a rotation, or rapidity in case of boosts),
then (Λ^-1)^a_b ≈δ^a_b - i ψ G^a_b and since (Λ^-1)^a_b = Λ_b^a it is true that G^a_b = - G_b^a. Through a simple expansion, we get
h'_a(x^c,u_d)
≈ h_a(x,u) + iψ (G_a^bh_b(x,u) + G_c^b x^c∂_b^xh_a(x,u) + G^b_cu_b∂^c_u h_a(x,u)) .
In the case of our solution (<ref>), the action of a generator of the Lorentz group, where D(G) is a representation of the generator G, becomes
D(G)· h_a(x,u) = (G_a^bϵ_b Φ(u) + G_c^bx^cik_bΦ(u) + G^b_c u_a∂^c_uΦ(u)ϵ_b)e^ikx .
We would now like to examine the behaviour of (<ref>) under the action of the generators A,B of the little group 𝔦𝔰𝔬(2) with the reference momentum k^a = (ω,0,0,ω). If we are able to find eigenfunctions of the mentioned generators, they will be the on-shell basis for the representation of the little group. Since A,B commute, their eigenfunctions will correspond to the plane-wave basis of 𝔦𝔰𝔬(2) seen in <cit.>.
It is straightforward to find the explicit vector representations for the operators A = J_1 - K_2 and B = J_2 + K_1.
A = ω[ 0 0 i 0; 0 0 0 0; i 0 0 i; 0 0 -i 0; ], B = ω[ 0 -i 0 0; -i 0 0 -i; 0 0 0 0; 0 0 i 0; ] .
We see from (<ref>) the three possible terms, of which only one will be non-trivial. Since A^a_b k^b = B^a_b k^b = 0, while A^a_b ϵ^b ∝ k^a and B^a_b ϵ^b ∝ k^a are pure gauge contributions, the only important term in the equation (<ref>) in the case of the generators A and B is the last one, of the form G^b_c u_b∂^c_uΦ(u). We now explicitly state the differential equations for A and B.
A·Φ(u) = u_a A^a_b ∂_u^bΦ(u)
= [ iω (u_t-u_z)∂/∂ u_y + i ω u_y(∂/∂ u_t + ∂/∂ u_z) ] Φ(u) .
In null-coordinates u_+ = u_t + u_z, u_- = u_t - u_z it becomes somewhat simpler
A·Φ(u) =iω[u_-∂/∂ u_y + 2u_y∂/∂ u_+]Φ(u_+,u_-,u_x,u_y) .
The equation for B is similarly
B ·Φ(u) = -iω[u_-∂/∂ u_x + 2u_x ∂/∂ u_+]Φ(u_+,u_-,u_x,u_y) .
Similarly to (<ref>)
we want to find functions Φ(u) that satisfy the eigensystem
A·Φ(u) = α Φ(u)
B ·Φ(u) = β Φ(u) .
The solutions to these equations for A and B separately are
Φ_A(u) = exp(-i α u_y/ω u_-)G_1(u_-, u_x, -(u_t)^2 + (u_y)^2 + (u_z)^2)
Φ_B(u) = exp(i β u_x/ω u_-)G_2(u_-, u_y, -(u_t)^2 + (u_x)^2 + (u_z)^2) ,
where G_1 and G_2 are arbitrary functions of their respective variables. We can write down a simultaneous solution with G_r an arbitrary function as
Φ_αβ r(u) = exp(iβ u_x - α u_y/ω u_-)G_r(u_-,u_μ u^μ) ,
where α and β stand for the eigenvalues of A and B, and r stands for any additional indices that may be used to discriminate between different solutions. The explicit representation for W^2 is
W^2 = A^2 + B^2
= -ω^2(u_-^2 (∂^2/∂ u_x^2 + ∂^2/∂ u_y^2) + 4u_-(u_x∂/∂ u_x +u_y∂/∂ u_y + 1)∂/∂ u_+ + 4(u_x^2 + u_y^2)∂^2/∂ u_+^2)
and we can immediately see that solutions (<ref>) are eigenfunctions of the Casimir operator
W^2 ·Φ(u) = (α^2+β^2) Φ(u) = w^2Φ(u) .
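These statements can also be verified symbolically with G kept arbitrary; a sympy sketch (the symbol names are ad hoc; up, um stand for u_+, u_-):

import sympy as sp

up, um, ux, uy = sp.symbols('up um ux uy')
al, be, om = sp.symbols('alpha beta omega', real=True)
G = sp.Function('G')                     # arbitrary function of the invariants

inv = -up*um + ux**2 + uy**2             # u_mu u^mu in null coordinates
Phi = sp.exp(sp.I*(be*ux - al*uy)/(om*um)) * G(um, inv)

A = lambda F:  sp.I*om*(um*F.diff(uy) + 2*uy*F.diff(up))
B = lambda F: -sp.I*om*(um*F.diff(ux) + 2*ux*F.diff(up))

print(sp.simplify(A(Phi) - al*Phi))      # -> 0
print(sp.simplify(B(Phi) - be*Phi))      # -> 0
print(sp.simplify(A(A(Phi)) + B(B(Phi)) - (al**2 + be**2)*Phi))   # -> 0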
As expected from the properties of the little group, the eigenvalues of the Casimir W^2 are non-negative. Analogous solutions were obtained in <cit.> in examining a scalar master field as a wave function of the continuous spin particle. The analysis here tells us that, due to gauge invariance (<ref>), having the (frame) vector index on a master field does not change the result for the form obtained in <cit.> for the scalar master field.
A complete orthonormal basis in the auxiliary space can be built from functions of the form (<ref>) for a specific choice of the standard momentum. One possibility is to define
f_αβ nl(u) =1/√(2π^2)exp(iβ u_x - α u_y/ω u_-)h_n(ω u_-)h_l (ω u_-u^2) ,
where h_n(x) are any orthonormal and complete functions defined on ℝ, such as Hermite functions.
For α = β = 0, one has particle states with the vanishing eigenvalue of the second Casimir, which means that they belong to standard massless IRREP in the Wigner classification. We now see explicitly how such states appear inside the continuous spectrum that is parametrised by the eigenvalue of the second Casimir w^2 = α^2 + β^2.
We prove that the functions f_αβ nl(u) are orthonormal:
∫ d^4u f_α'β'n'l'(u)^* f_αβ nl(u) = 1/(2π)^2 ∫_-∞^∞ du_- h_n'(ω u_-)^* h_n(ω u_-)
× ∫_-∞^∞ du_1 ∫_-∞^∞ du_2 e^-i(α-α') u_2 - (β-β') u_1/ω u_-
× ∫_-∞^∞ du_+ h_l'(ω u_- u^2)^* h_l(ω u_- u^2) .
We can use a substitution
w ≡ω u_- u^2 = ω u_- (-u_+ u_- + u_1^2 + u_2^2)
to write the third integral as
∫_-∞^∞ du_+ h_l'(ω u_- u^2)^* h_l(ω u_- u^2) = 1/ω (u_-)^2∫_-∞^∞ dw h_l'(w)^* h_l(w)
= δ_l'l/ω (u_-)^2 .
The second integral gives
∫_-∞^∞ du_1
∫_-∞^∞ du_2 e^-i(α-α') u_2 - (β-β') u_1 /ω u_- = (2πω u_- )^2 δ(α'-α) δ(β'-β)
and in the first integral, we have
∫_-∞^∞ du_- h_n'(ω u_-)^* h_n(ω u_-) = 1/ω^2δ_nn' .
Finally, we confirm that the basis functions are orthonormal
∫ d^4u f_α'β'n'l'(u)^* f_αβ nl(u) = δ(α'-α) δ(β'-β) δ_l'l δ_n'n .
We can also prove that the choice (<ref>) is complete
∑_n=0^∞∑_l=0^∞∫_-∞^∞ dα∫_-∞^∞ d β f_αβ nl(u')^* f_αβ nl(u) =
1/2π^2∑_l h_l(ω u_-' u^' 2)^* h_l(ω u_- u^2)
×∫_-∞^∞ d α∫_-∞^∞ dβ e^iα (u_2'-u_2)-β (u_1'-u_1)/ω u_-∑_n h_n(ω u'_-)^* h_n(ω u_-) .
Complete orthonormal sets such as the Hermite functions satisfy the completeness relations
∑_n=0^∞ h_n(ω u'_-)^* h_n(ω u_-) = δ(ω(u'_- - u_-)) = 1/|ω| δ(u_- - u'_-)
∑_l=0^∞ h_l(ω u_-' u^' 2)^* h_l(ω u_- u^2) = δ(ω u_- u^2 - ω u_-' u^' 2) .
With the exponential functions, we have
∫_-∞^∞ dβ e^iβ (u_1-u'_1)/ω u_- = 2π |ω u_-| δ(u_1-u'_1)
∫_-∞^∞ dα e^iα (u'_2-u_2)/ω u_- = 2π |ω u_-|δ(u_2-u'_2) .
We can insert the results into (<ref>) and obtain
∑_n ∑_l ∫ dα∫ dβ f_αβ nl(u')^* f_αβ nl(u) =
2 ω (u_-)^2 δ(u_- - u'_-) δ(u_1-u'_1) δ(u_2-u'_2) δ(ω u_- u^2 - ω u_-' u^' 2)
= 2 δ(u_- - u'_-) δ(u_1-u'_1) δ(u_2-u'_2) ω (u_-)^2 δ(ω(u_-)^2 (u'_+-u_+))
= 2 δ(u_- - u'_-) δ(u_1-u'_1) δ(u_2-u'_2) δ(u'_+-u_+)
= δ^4(u-u') ,
which is the completeness relation.
The indices n,l are little-group invariant, as is α^2+β^2, so we conclude that the content of the massless theory is, for each of the two polarisations, a doubly infinite discrete tower (labelled by n and l) of infinite-spin particles for every point of the continuum labelled by w^2 = α^2 + β^2. On shell, a classical solution corresponding to a non-vanishing value of the quartic Casimir W^2 = α^2 + β^2 can be chosen as e.g.
h^±_a(αβ n l k)(x,u) = ϵ^±_a f_αβ nl(k,u) e^ikx ,
where we have emphasised that the polarisation functions have an implicit dependence on the momentum k^μ. To construct a field variable which is square integrable in the auxiliary space, we can form a superposition such as
h_a(nlk)^±(x,u)=∫ d α d β c(α,β) h^±_a(αβ n l k)(x,u) ,
where ∫ d α d β |c(α,β)|^2 < ∞. Such a superposition goes over various values of w^2 = α^2+β^2, reminiscent of unparticle physics <cit.> where fields can be thought of as having a continuous distribution of mass <cit.>.
§ DISCUSSION
We explore the particle content of massive and massless free theories of a master field, assuming the requirement that it is square integrable in the auxiliary space. The motivation for such a requirement is that: 1) it ensures a well-defined integration over the auxiliary space without using a nonstandard measure of integration that could violate MHS gauge invariance, 2) it ensures that auxiliary space integration gives a finite value which is required to have finiteness of observables such as energy, 3) it ensures the unitarity of the representation of the part of the Lorentz group that acts on the auxiliary space.
The consequence is that it leads to nonstandard methods of unpacking the spacetime content of the master field. We examine various choices of bases in the auxiliary space where the coefficients of expansion serve as the usual spacetime fields. The technical difficulty is that the representations of the Lorentz group are infinite-dimensional. The result is that in the massless case, the theory contains continuous spin particles in different Poincaré representations specified by two discrete and one continuous label (the continuous being the eigenvalue of the quartic Casimir of the Poincaré group).
In the massive case, the theory contains an infinite number of towers (specified by two discrete labels) of particles containing all spins.
The work in this paper is based on unpublished research initially presented in the doctoral dissertation of M.P. <cit.> and extends beyond with
further investigations and expansions. We thank Loriano Bonora as this work evolved from our mutual collaboration.
The research of P.D.P. has been supported by the University of Rijeka under the project uniri-prirod-18-256 and uniri-iskusni-prirod-23-222. The research of M.P. has been supported by the University of Rijeka under the project uniri-mladi-prirod-23-43 3159.
The research of S.G. has been supported by a BIRD-2021 project (PRD-2021), the PRIN Project n. 2022ABPBEY, “Understanding quantum field theory through its deformations” and the PRIN 2022 project CONTRABASS (contract n.2022KB2JJM).
§ MASSIVE CASE
§.§ Particle spectrum
Let us now study the case of massive MHS fields. Apart from a massive MHS matter sector, one can also provide mass to the MHS potential h_a(x,u) by coupling it to the Higgs field in the standard fashion (see <cit.>). We shall use the simplest case of a scalar field for the purpose of a detailed demonstration, and then generalise the results to vector (and tensor) MHS fields
S_0[φ] = 1/2∫ d^4x d^4u ( ∂^x_μφ(x,u) ∂_x^μφ(x,u) - m^2 φ(x,u)^2 ) .
For the moment, there is no difference in the treatment of the massive and massless cases. If we expand the MHS master field by using a complete orthonormal basis in the u-space {f_r(u)} consisting of real square-integrable functions (say, by using d-dimensional Hermite functions)
φ(x,u) = ∑_r φ_r(x) f_r(u)
and integrate over u, we obtain the following purely spacetime action
S_0[φ] = ∑_r 1/2∫ d^4x ( ∂_μφ_r(x) ∂^μφ_r(x) - m^2 φ_r(x)^2 ) .
We have obtained a purely spacetime formulation of the theory (which we can extend to the interacting regime) which takes the form of an (infinite) collection of free Klein-Gordon fields all having the same mass m. Using this observation we can quantise the theory in the usual way, following the prescription for quantising a set of independent free Klein-Gordon fields to write the quantised spacetime fields as
φ_r(x) = ∫d^3𝐩/√((2π)^3 ω_𝐩)( a_r(p) e^-i p· x
+ a_r(p)^† e^i p· x) , p^0 = ω_𝐩≡√(𝐩^2 + m^2) .
Here a_r(p)^† (a_r(p)) is the creation (destruction) operator of the type-r quasi-particle carrying 4-momentum p^μ. These operators can be used to construct the Hilbert space of states in the form of the Fock space. E.g., one-particle states are
| p ; r ⟩≡ a_r(p)^† | 0 ⟩ , ⟨ q ; r | p ; s ⟩ = ω_𝐩δ_rs δ^3(𝐪 - 𝐩) ,
where the vacuum state is defined in the usual way
a_r(p) | 0 ⟩ = 0 .
Different choices for the orthonormal basis lead to the unitarily equivalent descriptions with the same vacuum state and, correspondingly, the same Hilbert space of states.
Since the fields φ_r(x) transform (analogously to (<ref>)) as
φ'_r(x) = U(Λ) φ_r(x) U(Λ)^† = ∑_s D_rs(Λ^-1) φ_s(Λ x) ,
using (<ref>) we see that the quasi-particle states transform under Lorentz transformations as
U(Λ) | p ; r ⟩ = ∑_s D_rs(Λ^-1) | Λ p ; s ⟩
= ∑_s D_sr(Λ) | Λ p ; s ⟩ .
As noted in the main text, matrices D(Λ) constitute an infinite dimensional unitary representation of the Lorentz group.
We also note that such a representation is reducible (since if one uses d-dimensional Hermite functions
f_r(u) = h_{n_μ}(u) ≡ h_n_0(u_0) h_n_1(u_1) h_n_2(u_2) h_n_3(u_3) , n_μ = 0,1,2,…
as a basis, then the subspaces with N = n_1 + n_2 + n_3 - n_0 form sub-representations of the Lorentz group).
This is in contrast to the idea that particle-type designation should be a Lorentz-invariant designation.
As shown above in the section <ref>, it is equivalent to examining either the polarisation functions of solutions to the equations of motion or directly the one-particle states to characterise the particle content of a theory. In the appendix, we choose to work with one-particle states. We therefore note that the different types of bases used in the main text correspond to different types of particles in the following sense: a momentum-independent basis, such as the product of Hermite functions, gives rise to one-particle states that we refer to as quasi-particle states since they do not possess a Lorentz-invariant designation, while the choice of a momentum-dependent basis such as g_n_0 n s σ(u) (see below) gives rise to one-particle states which do possess a Lorentz-invariant designation.
To analyse the content of the one-particle sector, spanned by (<ref>), as in the massless case, we need to write it as a direct sum of IRREPs of the Poincaré group by using Wigner's little group construction.
In effect, we need to obtain polarisation functions (i.e. the basis functions in the auxiliary space) appropriate for the massive case.
First, we choose a special 4-momentum tuned to the massive case
k^μ = (m,0,0,0) .
The subgroup of the Lorentz group, which keeps k invariant (little group), is the SO(3) group of rotations. We know that IRREPs of the Poincaré group, in this case, are classified by the spin s=0,1,2,…, which denote (unique) (2s+1)-dimensional representations.
Since the basis states transform as in (<ref>), the problem can be transcribed into the problem of diagonalising the action of rotations, given by
f_r(Λ^-1 u) = ∑_s D_rs(Λ^-1) f_s(u)
over the space of L_2(ℝ^4) functions on the auxiliary space. But this is a well-known problem whose solution is to take the basis built upon the spherical harmonics. For example, one can choose the orthonormal basis in the following way
g_n_0 n s σ(u) = h_n_0(u_0) R_ns(u_r) e^-u_r^2/2 u_r^s Y^σ_s(u_θ,u_ϕ) ,
where u_0, u_r, u_θ and u_ϕ are the spherical coordinates of the auxiliary space, {h_n_0, n_0=0,1,2,…} are Hermite functions, {Y_l^m, l=0,1,…; m=-l,…,l} are spherical harmonics, and {R_nl, n=0,1,2,…} are real polynomials of the order 2n satisfying
∫_0^∞ du_r u_r^2(l+1) e^-u_r^2 R_nl(u_r) R_n'l(u_r) = δ_nn'
and the corresponding completeness condition. Note that this basis is not real
g_n_0 n s σ(u)^* = g_n_0 n s -σ(u) .
This basis transforms under SO(3) rotations as
g_n_0 n s σ(R^-1 u) = 𝒟_σσ'^(s)(R^-1) g_n_0 n s σ'(u)
where 𝒟_σσ'^(s)(R) are the usual spin-s rotation matrices. If we define creation operators
a_n_0 n s σ(k)^† by
∑_r f_r(u) a_r(k)^† = ∑_n_0=0^∞∑_n=0^∞∑_s=0^∞∑_σ=-s^s
g_n_0 n s σ(u) a_n_0 n s σ(k)^†
then they transform under the rotations as
U(R) a_n_0 n s σ(k)^† U(R)^† = ∑_σ'=-s^s
𝒟_σ' σ^(s)(R) a_n_0 n s σ'(k)^† .
The vacuum and one-particle states carrying momentum p=k given in (<ref>) are defined by
| k , σ; n_0, n, s ⟩ = a_n_0 n s σ(k)^† | 0 ⟩ ,
a_n_0 n s σ(k) | 0 ⟩ = 0 .
These states transform under rotations as
U(R) | k , σ; n_0, n, s ⟩ = ∑_σ'=-s^s 𝒟_σ' σ^(s)(R) | k , σ'; n_0, n, s ⟩ ,
from which it follows that
𝐉^2 | k , σ; n_0, n, s ⟩ = s(s+1) | k , σ; n_0, n, s ⟩ ,
J_z | k , σ; n_0, n, s ⟩ = σ | k , σ; n_0, n, s ⟩ ,
which means that they describe particle states of spin s. The one-particle subspace with momentum k is the direct sum of particles labelled by the triplet of numbers (n_0,n,s) with n_0, n, s = 0,1,2,… .
To complete the description we must construct one-particle states with generic on-shell momentum p, which can be done in the standard fashion
| p , σ; n_0, n, s ⟩≡ U(Λ(p)) | k , σ; n_0, n, s ⟩ , p = Λ(p) k ,
where Λ(p) is a pure boost.
In summary, the massive theory contains an infinite number of standard massive particles labelled by the (little group invariant) triplet of numbers (n_0,n,s) with n_0, n, s = 0,1,2,…. We have an infinite × infinite number of towers of particles containing all spins.
§.§.§ Higher-tensor massive case
What if we have an MHS master field which is not a scalar? Let us use a massive vector MHS master field h_a(x,u) as an example (it could be the MHS potential if the MHS symmetry is spontaneously broken). Using the expansion
h^a(x,u) = ∑_r h^a_r(x) f_r(u) ,
the free field theory boils down to a collection of spacetime fields h^a_r(x) all satisfying the Proca equation with the same mass m. These fields transform under the Lorentz group as
h^a'_r(x) = (Λ)^a_b ∑_s D_rs(Λ) h^b_s(Λ^-1 x) .
By quantising Proca spacetime fields in the usual way one gets the one particle spectrum consisting of states
| p , σ_1 ; r ⟩ ,
where σ_1 = -1, 0, 1 is the index belonging to the spin-1 unitary IRREP of SO(3) little group. The whole procedure applied for the scalar master field can be repeated here with the only difference that when block-diagonalising rotation matrices, we have to take into account that spin coming from auxiliary space is here multiplied with the spin-1 state coming from the vectorial property of the master field. Here one uses the standard formula for a direct product of two SO(3) unitary IRREPs to obtain that in the classification of Lorentz particles, instead of a single particle of spin s we will have particles of spin
1 ⊗ s = (s+1) ⊕ s ⊕ (s-1) for s≥1 , 1 ⊗ 0 = 1 .
The conclusion is that for every particle of spin s=j present in the spectrum of the free massive scalar master field, one obtains a “triplet” of particles with the spins s=j+1,j,j-1 for j≥1, and s=0,1 for j=0. The result has an obvious generalisation to higher-rank tensor master fields.
A Study On The Graph Formulation Of Union Closed Conjecture
Nived J M
=============================================================
§ ABSTRACT
The Union Closed Conjecture, posed by Peter Frankl in 1979, is one of the most renowned problems in Combinatorics. Its appeal stems from the simplicity of its statement and the potential complexity of its solution. The conjecture asserts that in any union-closed family of sets, there exists an element that belongs to at least half of the sets in the family.
This paper explores the graph-theoretic formulation of the conjecture and establishes connections between the set-based and graph-based formulations. Through this connection, we derive new results and provide proofs for specific classes of graphs. Additionally, by analyzing the distribution of pendant vertices within various graphs, we demonstrate the validity of the conjecture for a broader range of graph classes.
Keywords: Union Closed Conjecture, Frequency, Family of sets, Member sets, Pendant vertices.
§ INTRODUCTION
The Union Closed Conjecture, colloquially known as Frankl's Conjecture, was initially formulated by Peter Frankl in 1979. The conjecture pertains to a class of families of sets, called union closed families, wherein the union of any two member sets is again a member set. Specifically, the conjecture posits that for every finite union closed family of sets, excluding families comprising solely the empty set, there exists an element belonging to at least half of the member sets. In this context, a finite union closed family is defined as having both a finite number of member sets and finite cardinality for each member set.
A natural extension of this conjecture to infinite cases was considered, but Poonen <cit.> identified counterexamples such as families with member sets of the form {i,i+1,i+2,...} where i ∈ℕ. In this scenario, each element i has finite frequency, challenging the conjecture's applicability to families with an infinite number of member sets. Furthermore, a preference is expressed for member sets with finite cardinality.
The notion of separation within a family is introduced, wherein elements x and y are considered separated by the family 𝒜 if there exists a member set A ∈𝒜 such that either x or y is an element of A, but not both. A family 𝒜 is termed a separating family if any two distinct elements are separated by 𝒜. Notably, a pair of non-separated elements can be treated as a single element without affecting the conjecture's determining factors namely, the cardinality of 𝒜 and the possible frequencies of elements. Thus, the conjecture only needs verification for all finite separating families.
In Section <ref>, we present the necessary notations, definitions, and formulations to address the problem. We explore both the set formulation and graph formulation of the conjecture, and also discuss some previously known results in this area. Moving on to Section <ref>, we establish the connection between the two formulations and provide equivalent versions of existing results. Additionally, we demonstrate the validity of the conjecture for certain graph classes. Finally, in Section <ref>, we introduce an intriguing theorem that proves the conjecture for graphs with pendant vertices on specific locations, and discuss some of its consequences.
§ PRELIMINARIES
We will introduce certain notations and endeavor to mathematically analyze the given problem. We say that a given family of sets 𝒜 is union closed if, for every pair of the member sets A,B ∈𝒜 we have A∪ B∈𝒜. The union of all the member sets of 𝒜 is called the universe of 𝒜, represented by U(𝒜).
U(𝒜)=⋃_A∈𝒜A={x : x∈ A for some A∈𝒜}
The frequency of an element x∈U(𝒜) denoted by μ(x), is the number of member sets of 𝒜 containing x. Using the above notations, we can state the Frankl's conjecture as:
*conjecture*Union Closed Conjecture
If 𝒜 is a finite union closed family of sets with 𝒜≠{ϕ}, then there exists x ∈U(𝒜) such that μ(x) ≥1/2|𝒜|. We call x an abundant element.
Similar to a union closed family, we define an intersection-closed family as a collection of sets closed under intersection. Now, using intersection closed families, we can reformulate the Union Closed Conjecture.
*ConjectureIntersection Closed Conjecture
Any finite intersection closed family with at least two member sets has an element which is part of at most half of the member sets. That is, if ℬ is an intersection closed family of sets with |ℬ| ≥ 2, then there exists x ∈U(ℬ) satisfying μ(x) ≤1/2|ℬ|. We call x a rare element.
<cit.>
Union closed conjecture is equivalent to the Intersection closed conjecture.
Despite being open for over forty years, the conjecture remains largely elusive. Here, we will outline some of the known results concerning union closed families. We denote the cardinality of the universe by m and the number of member sets by n. The following are some scenarios where a separating family satisfies the Union Closed Conjecture:
* When m ≤ 12. <cit.>
* When n ≤ 50. <cit.>
* When n ≤ 2m.<cit.>
* When n ≥2/32^m.<cit.>
* When the family includes either a singleton set or a set of cardinality 2.<cit.>
* When all the member sets of the family are having a cardinality at least m/2.<cit.>
§.§ The graph formulation
Using some notions of graph theory, Henning Bruhn <cit.> proposed a graph formulation of the union closed conjecture in 2013. In this section, we will go through the crux of the interpretation. A vertex subset of a graph is said to be a stable set if any pair of its elements is non-adjacent. Introducing the concept of maximality in stable sets, we define a stable set as maximal if no additional vertex can be added to it without violating the condition of being a stable set. For a graph G, V(G) represents the set of vertices. We use the notation N(P) [We use N_ G(P) specifically if more than one graph is present] to represent the set of neighbors of a vertex subset P⊆ V(G). When P is a singleton, denoted as P={x}, we use N(x) instead of N({x}).
<cit.>
For any bipartite graph with at least one edge, there exists a vertex in each of its bipartite classes that lie in at most half of the maximal stable sets.
<cit.>
Conjecture <ref> is equivalent to Frankl's conjecture.
The conjecture <ref> mentioned above represents the graph formulation of Frankl's conjecture. Considering its equivalence with Frankl's conjecture, a vertex v in graph G is deemed rare if it appears in at most half of the maximal stable sets. We define a bipartite graph to satisfy Frankl's conjecture if Conjecture <ref> holds for it.
Currently, bipartite graph classes such as chordal bipartite, subcubic, and series-parallel graphs are known to satisfy Frankl's Conjecture <cit.>.
§ CONNECTING THE TWO FORMULATIONS.
In the previous section, we extensively discussed both the graph formulation and the set formulation of the Union Closed Conjecture, along with significant results pertaining to each. Now, we aim to establish connections between these two formulations and present equivalent results derived from one to the other, and vice versa.
Let us consider a family of sets 𝒮 with the universe U(𝒮). The union closed family generated by 𝒮, denoted by ⟨𝒮⟩, comprises the collection of all unions of sub-collections of its member sets, including the empty set ϕ. We define the incidence graph of 𝒮 as a bipartite graph G with vertex set V(G) = 𝒮∪ U(𝒮) and edge set E(G) = {Sx : S ∈𝒮, x ∈ U(𝒮), x ∈ S }. Let G be a bipartite graph with bipartite classes X and Y. By the incidence family of X, we refer to the family ℱ^ X = {N(y) : y ∈ Y}. If there are no isolated vertices[As isolated vertices do not affect the conjecture, we can just exclude them.] in X, then its universe U(ℱ^ X)=X. Throughout this section, we will denote the two vertex classes of the bipartite graph G as X and Y.
Let G be a bipartite graph with vertex partition X∪ Y. A vertex x∈ X is rare if and only if it is abundant in ⟨ℱ^ X⟩.
Consider any subset of vertices from Y, denoted as Y^'. It can be observed that there exists a unique maximal stable set in G, denoted as S, such that S∩ X=X∖ N(Y^'). Furthermore, every maximal stable set S satisfies S∩ X=X∖ N(Y^') for some Y^'⊆ Y. It is important to note that no two maximal stable sets S_ 1 and S_ 2 can have the same intersection with X, i.e., S_ 1∩ X ≠ S_ 2∩ X. Therefore, if a vertex x∈ X is rare in G, then it is also rare in the family {X∖ (N(Y^')) | Y^'⊆ Y}. Consequently, x is abundant in the family { (N(Y^')) | Y^'⊆ Y}. Now, let F={N(y)| y∈ Y^'}, and observe that the universe U(F) is nothing but N(Y^'). It is also evident that { (N(Y^')) | Y^'⊆ Y}={U(F)|F⊆ℱ^ X}=⟨ℱ^ X⟩. The converse direction follows similarly from the above steps.
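The correspondence can be illustrated by brute force on a small example; a Python sketch (exhaustive enumeration, feasible only for tiny graphs; the example graph is an arbitrary choice):

from itertools import combinations, chain

X, Y = ['x1', 'x2', 'x3'], ['y1', 'y2']
E = {('x1','y1'), ('x2','y1'), ('x1','y2'), ('x2','y2'), ('x3','y2')}
V = X + Y
adj = {v: {u for u in V if (u, v) in E or (v, u) in E} for v in V}

subsets = chain.from_iterable(combinations(V, k) for k in range(len(V) + 1))
stables = [set(S) for S in subsets
           if all(u not in adj[v] for u in S for v in S)]
maximal = [S for S in stables if not any(S < T for T in stables)]

family = {frozenset()}                    # <F^X>, including the empty set
for y in Y:
    family |= {A | frozenset(adj[y]) for A in family}

for x in X:
    rare = 2 * sum(x in S for S in maximal) <= len(maximal)
    abundant = 2 * sum(x in A for A in family) >= len(family)
    print(x, rare, abundant)              # the two flags always agree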
In accordance with the definition of separating families presented in the preceding section, let us introduce the concept of a twin-free graph as a graph devoid of any pair of vertices sharing an identical set of neighbors. If there are two vertices in a graph that have the exact same neighbors, combining them into one vertex would not alter the count of maximal stable sets or the potential frequencies within the graph. Hence, our focus remains solely on verifying the conjecture for all twin-free bipartite graphs. It is pertinent to note that this section exclusively deals with twin-free graphs with no isolated vertices.
If G is a bipartite graph with one of its two classes containing at most 12 vertices, then there exists a rare vertex within the same class.
Let X denote the bipartite class of G such that |X| ≤ 12. Then, it is evident that |U(⟨ℱ^ X⟩)| = |U(ℱ^ X)| = |X| ≤ 12. It is known that union closed families with at most 12 elements satisfy the conjecture. Therefore, ⟨ℱ^ X⟩ satisfies the conjecture, and consequently, there exists a rare vertex in X by Proposition <ref>.
A bipartite graph with both classes of cardinality at most 12 satisfies Frankl's conjecture.
In a bipartite graph G=(X,Y), if 2^ |Y|≤ 2|X|, then there exists a rare vertex in X.
Consider the union closed family ⟨ℱ^ X⟩. Clearly, the number of member sets in the family, n=|⟨ℱ^ X⟩|≤ 2^|ℱ^ X| = 2^|Y|. Also, note that the total number of elements m=|U(ℱ^ X)|=|X|. This leads us to the relation n≤ 2^|Y|≤ 2|X| = 2m. Since n≤ 2m, ⟨ℱ^ X⟩ satisfies the conjecture. Hence, there is a rare element in X.
If |Y|≥2/32^ |X|, then the graph satisfies Frankl's conjecture.
The number of member sets in ⟨ℱ^ X⟩, denoted by n = |⟨ℱ^ X⟩| ≥ |ℱ^ X| = |Y|, and the number of elements in the universe of ⟨ℱ^ X⟩ is m = |X|. Thus, the relation |Y| ≥2/32^ |X| leads us to n ≥2/32^ m, and hence ⟨ℱ^ X⟩ satisfies the conjecture. This ensures a rare element in X. Now, observe that |Y| ≥2/32^ |X| 2|Y| ≥4/32^ |X|≥ 2^ |X|. By Proposition <ref>, there exists a rare element in Y, and as a result, the graph satisfies the conjecture.
If a bipartite graph G contains a pendant vertex, then its neighbor is rare in G. Additionally, if G contains a vertex of degree 2, then there exists a rare vertex in the other bipartite class.
Let y ∈ Y be the pendant vertex. Then, ⟨ℱ^ X⟩ contains a singleton. Therefore, ⟨ℱ^ X⟩ satisfies the conjecture, and moreover, the neighbor of y is the abundant element of the family. Hence, the neighbor of y is a rare vertex of G. Similarly, when there is a vertex of degree 2 in Y, ⟨ℱ^ X⟩ contains a member set of cardinality 2, and as a result, ⟨ℱ^ X⟩ has an abundant element. Hence, there is a rare vertex in X.
Let ℱ be a family in which every member set has a maximum size of 3, and each element within the sets has a frequency that does not exceed 3. Then ⟨ℱ⟩ satisfies the union closed conjecture.
The proof is straightforward, as the incidence graph of ℱ is subcubic and we know that subcubic graphs satisfy the union closed conjecture.
If ℱ is a family with n member sets, and it contains an element with a frequency of at least n - ⌊log_2(n/2)⌋, then ⟨ℱ⟩ satisfies the union closed conjecture, and that element is abundant.
Let's consider an element a in ℱ with a frequency of at least n - ⌊log_2(n/2)⌋. Among the sets in ℱ, there are at most ⌊log_2(n/2)⌋ that don't contain a. Any set in ⟨ℱ⟩ that excludes a must be a union of a sub-collection of these (possibly the empty union), so the number of such sets, including the empty set, is at most the size of the power set of these ⌊log_2(n/2)⌋ sets. Consequently, there are at most 2^⌊log_2(n/2)⌋≤⌊n/2⌋ sets in ⟨ℱ⟩ that exclude a. Since ⌊n/2⌋≤|⟨ℱ⟩|/2, this observation leads to the conclusion that a is present in at least half of the sets in ⟨ℱ⟩.
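A quick numeric illustration; a Python sketch with an ad hoc family of n = 8 member sets, in which the element a misses only ⌊log_2(8/2)⌋ = 2 of them:

from math import floor, log2

F = [{'a','b'}, {'a','c'}, {'a','d'}, {'a','b','e'},
     {'a','c','e'}, {'a','d','e'}, {'b','c'}, {'d','e'}]
n = len(F)
assert sum('a' not in S for S in F) <= floor(log2(n / 2))

family = {frozenset()}     # <F>: all unions of sub-collections, with the empty set
for S in F:
    family |= {A | frozenset(S) for A in family}

freq = sum('a' in A for A in family)
print(len(family), freq, 2 * freq >= len(family))   # 'a' is abundant in <F>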
Let G be a bipartite graph with bipartite classes X and Y. If there exists a vertex a ∈ X such that |N(a)| ≥ |Y| - ⌊log_2(|Y|/2)⌋, then a is a rare vertex.
The proof follows straightforwardly from Proposition <ref> in conjunction with Proposition <ref>.
Let G be a bipartite graph with bipartite classes X and Y. If deg(a)≥|X|/2 for every vertex a∈ Y, then there is a rare vertex in X.
It can be observed that every member set of ℱ^ X has cardinality at least |X|/2, and hence so does every member set of ⟨ℱ^ X⟩. Thus, ⟨ℱ^ X⟩ satisfies the conjecture, and as a result, there is a rare vertex in X.
Let G be a bipartite graph with a minimum degree of δ. If both bipartite classes have cardinalities at most 2δ, then G satisfies Frankl's conjecture.
An r-regular bipartite graph with both of its bipartite classes having cardinality at most 2r satisfies Frankl's conjecture.
Any bipartite graph with no pendant vertex satisfy Frankl's conjecture.
Any bipartite graph with exactly one pendant vertex satisfy Frankl's conjecture.
Any bipartite graph with at least one pendant vertex satisfy Frankl's conjecture.
Conjectures <ref>, <ref>, and <ref> are equivalent to each other. Additionally, a proof of any one of these conjectures would imply the validity of the union closed conjecture.
Let's consider a union closed family ℱ. It's known that if ℱ contains a singleton set, it trivially satisfies the union closed conjecture. Now, let's assume ℱ to be a union closed family without singletons.
The incidence graph of ℱ either lacks pendant vertices entirely, or if they exist, they are all adjacent to the same vertex. This is because in the union closed family ℱ, U(ℱ) is a member set, and any element with frequency 1 must belong to that set. Also, note that the conjecture's truth value remains unchanged by adding or removing an element of frequency 1 from the family's universe.
Assuming Conjecture <ref> to be true, if ℱ contains elements of frequency 1, we can remove them to obtain a new family ℱ'. The incidence graph of ℱ' has no pendant vertices. By applying Proposition <ref>, we conclude that ℱ' satisfies the union closed conjecture, and so does the original family ℱ. Similarly, the equivalence of Conjectures <ref> and <ref> can be demonstrated by adding elements of frequency 1 to the respective families.
Proposition <ref> above narrows down the classes of graphs that need to be considered in order to prove the conjecture. In the subsequent section, our focus will be on graphs that have pendant vertices in specific positions.
§ SOME NEW GRAPH RESULTS
We will now see some results related to the graph formulation of the conjecture. Before that, let us go through the definitions and notation used in this section.
For a graph G, we denote the set of all maximal stable sets by ℬ_ G. The cardinality of ℬ_ G is denoted by w_ G, which represents the number of maximal stable sets of G. For any vertex x ∈ V(G), the number of maximal stable sets containing x is represented by w_ G(x). We use the notation N_ G(P) to represent the set of neighbors of a vertex subset P in graph G. When P is a singleton, say P = {x}, we will simply use N_ G(x).
If a disconnected graph G has one of its connected components satisfying Frankl's conjecture, then G will also satisfy the conjecture.
As Frankl's conjecture is about finite graphs, we only need to consider finitely many connected components. We will show this for the case of two components, and the rest of the proof follows from basic induction arguments. Suppose G has two components P and Q. Let S_ 1 and S_ 2 be two maximal stable sets of P and Q respectively. One can notice that S_ 1∪ S_ 2 is a maximal stable set of G. For any maximal stable set S of G, observe that S ∩ V(P) and S ∩ V(Q) are maximal stable sets of P and Q respectively. This gives the number of maximal stable sets of G, w_ G = w_ P w_ Q. Using similar arguments, one can obtain that for every x ∈ V(P), w_ G(x) = w_ P(x) w_ Q.
Let r be a rare vertex in P. Then by definition, w_ P - 2w_ P(r) ≥ 0. As a result, w_ G - 2w_ G(r) = w_ P w_ Q - 2w_ P(r) w_ Q≥ 0 and r is rare in G. Thus, whenever P satisfies the conjecture, it guarantees that it holds for G as well.
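The product identity w_G = w_P w_Q is easy to verify numerically. The sketch below (assumed toy graphs: a path P and a single edge Q; not from the original text) enumerates maximal stable sets by brute force; the enumeration is exponential in the number of vertices and intended only for small examples.

```python
from itertools import combinations

def maximal_stable_sets(vertices, edges):
    """Enumerate all maximal stable (independent) sets of a small graph."""
    vs = list(vertices)
    def stable(s):
        return not any(u in s and v in s for u, v in edges)
    cand = [frozenset(c) for r in range(len(vs) + 1)
            for c in combinations(vs, r) if stable(set(c))]
    return [s for s in cand if not any(s < t for t in cand)]  # keep maximal ones

# P: the path a-b-c; Q: a single edge x-y; G is their disjoint union.
P_v, P_e = {'a', 'b', 'c'}, [('a', 'b'), ('b', 'c')]
Q_v, Q_e = {'x', 'y'}, [('x', 'y')]
w_P = len(maximal_stable_sets(P_v, P_e))   # {a,c} and {b} -> 2
w_Q = len(maximal_stable_sets(Q_v, Q_e))   # {x} and {y}   -> 2
w_G = len(maximal_stable_sets(P_v | Q_v, P_e + Q_e))
print(w_P, w_Q, w_G)                       # 2 2 4
assert w_G == w_P * w_Q
```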
We will now present a more intriguing result. A decomposition of a graph G is a set of edge-disjoint subgraphs H_1, H_2, …, H_n such that ⋃_ i=1^ n H_i = G. A vertex v ∈ V(G) is defined as 2-layered if every vertex in N_ G(v) has a pendant vertex adjacent to it.
Let G be a bipartite graph and {C,H} a decomposition of G. Suppose the vertices in V(C) ∩ V(H) belong to the same bipartite class of G and are all 2-layered. Then, all rare vertices in C or H will remain rare in G.
Before presenting the proof of the theorem, we need to establish some notations and lemmas. The following notations will be used throughout the proof. In light of <ref>, it suffices to verify the theorem for connected graphs. Consequently, we assume without loss of generality that G is connected.
Let m ∈ V(C) ∩ V(H), and suppose N_ C(m) = ∅. In this case, we can simply remove m from C to get a new decomposition. Similarly, if N_ H(m) = ∅, we can remove m from H. This allows us to redefine C and H in such a way that every vertex in V(C) ∩ V(H) has neighbors in both C and H. Let 1, 2, …, n be the vertices common to the subgraphs C and H. We define [n] = {i ∈ℕ| i ≤ n}, and accordingly, we represent this common vertex set by [n]. For any subset Θ⊆ [n], we denote its complement by Θ^𝖼 = [n] ∖Θ.
Recall the definitions of w_G and ℬ_G. Here, we will slightly generalize these definitions. For any vertex subsets P, Q ⊆ V(G) and any subgraph H of G, the notation ℬ_G(P, Q) denotes the set of all maximal stable sets of G that include all vertices in P and exclude all vertices in Q. Similarly, ℬ_H(P, Q) represents the set of all maximal stable sets of H with the same inclusion and exclusion criteria. The cardinalities of ℬ_G(P, Q) and ℬ_H(P, Q) are denoted by w_G(P, Q) and w_H(P, Q), respectively. This notation is somewhat informal but helps to reduce the complexity of larger notations.
For any Θ⊆ [n], Γ⊆Θ^𝖼, and b ∈ V(C∖Γ), the following identities hold:
w_ C∖Γ(Θ, Θ^𝖼∪ N_ C(Γ)) = w_ C(Θ∪Γ, Θ^𝖼∖Γ)
w_ C∖Γ({b}∪Θ, Θ^𝖼∪ N_ C(Γ)) = w_ C({b}∪Θ∪Γ, Θ^𝖼∖Γ)
Consider the mapping f: ℬ_C∖Γ(Θ, Θ^𝖼 ∪ N_C(Γ)) → ℬ_C(Θ∪Γ, Θ^𝖼∖Γ) defined by f(B) = B ∪ Γ. Let B ∈ ℬ_C∖Γ(Θ, Θ^𝖼 ∪ N_C(Γ)); observe that f(B) includes all vertices from Θ∪Γ and excludes all vertices from Θ^𝖼∖Γ. Since B is a stable set in C ∖Γ and contains no vertices from N_C(Γ), it follows that f(B) is stable in C. Assume, for contradiction, that f(B) is not a maximal stable set in C. Then, there exists a vertex v ∈ V(C) such that v ∉ f(B) and B ∪Γ∪{v} is stable in C. This implies that B ∪{v} is stable in C ∖Γ, contradicting the maximality of B in C ∖Γ. Hence, f(B) ∈ ℬ_C(Θ∪Γ, Θ^𝖼∖Γ).
Next, consider a set D ∈ℬ_ C(Θ∪Γ, Θ^𝖼∖Γ). We define the inverse image of D under the mapping f as f^-1(D) = D ∖Γ. Since D necessarily includes all vertices from Γ, it excludes any vertices from N_ C(Γ). Therefore, f^-1(D) is clearly a subset of V(C ∖Γ) that contains all vertices in Θ, while simultaneously avoiding all vertices in Θ^𝖼∖Γ and N_ C(Γ).
Also note that D is a stable set, and since f^-1(D) is a subset of D, it is also a stable set. Suppose, for contradiction, that f^-1(D) is not maximal stable in C ∖Γ. This implies there exists a vertex v ∈ V(C ∖Γ), with v ∉ f^-1(D), such that the set {v}∪ f^-1(D) is stable in C ∖Γ. Since all vertices in Γ are 2-layered, each vertex in N_ C(Γ) has an adjacent pendant vertex. Due to the maximality of the stable sets, these pendant vertices must be included in all member sets of ℬ_ C(Θ∪Γ, Θ^𝖼∖Γ). Consequently, they are present in every member set of f^-1(ℬ_ C(Θ∪Γ, Θ^𝖼∖Γ)). Thus, v ∉ N_ C(Γ). Therefore, {v}∪ D = {v}∪ f^-1(D) ∪Γ would be stable in C, which contradicts the assumption that D is a maximal stable set in C. Hence, f^-1(D) ∈ℬ_ C ∖Γ(Θ, Θ^𝖼∪ N_ C(Γ)) and the mapping is well-defined.
It is trivial that f is a bijective map. The one-to-one correspondence between the families ℬ_ C ∖Γ(Θ, Θ^𝖼∪ N_ C(Γ)) and ℬ_ C(Θ∪Γ, Θ^𝖼∖Γ) establishes Equation <ref>. Similarly, Equation <ref> follows from analogous arguments, as it involves collecting all maximal stable sets containing the specified vertex b ∈ V(C ∖Γ) from both families.
For any Θ⊆[n], Γ_1, Γ_2 ⊆ Θ^𝖼 and b ∈ V(C), if Γ_1 ≠ Γ_2 then

ℬ_C∖Γ_1(Θ, Θ^𝖼 ∪ N_C(Γ_1)) ∩ ℬ_C∖Γ_2(Θ, Θ^𝖼 ∪ N_C(Γ_2)) = ∅

ℬ_C∖Γ_1({b}∪Θ, Θ^𝖼 ∪ N_C(Γ_1)) ∩ ℬ_C∖Γ_2({b}∪Θ, Θ^𝖼 ∪ N_C(Γ_2)) = ∅
Consider two distinct subsets Γ_ 1 and Γ_ 2 of Θ^𝖼. Without loss of generality, assume that there exists an element m ∈Γ_ 1 such that m ∉Γ_ 2. Suppose B is an element of both ℬ_ C ∖Γ_ 1(Θ, Θ^𝖼∪ N_ C(Γ_ 1)) and ℬ_ C ∖Γ_ 2(Θ, Θ^𝖼∪ N_ C(Γ_ 2)). Since m ∈Γ_ 1, the maximal stable sets in ℬ_ C ∖Γ_ 1(Θ, Θ^𝖼∪ N_ C(Γ_ 1)) avoid all vertices in N_ C(m). Therefore, B ∩ N_ C(m) = ∅. On the other hand, since B is also an element of ℬ_ C ∖Γ_ 2(Θ, Θ^𝖼∪ N_ C(Γ_ 2)), it must be a maximal stable set in C ∖Γ_ 2 that excludes vertices from Θ^𝖼. Given that m ∈Θ^𝖼, the maximality of B implies that B ∩ N_ C(m) ≠∅.
This contradiction shows that ℬ_ C ∖Γ_ 1(Θ, Θ^𝖼∪ N_ C(Γ_ 1)) ∩ℬ_ C ∖Γ_ 2(Θ, Θ^𝖼∪ N_ C(Γ_ 2)) must be empty. Furthermore, it follows that ℬ_ C ∖Γ({b}∪Θ, Θ^𝖼∪ N_ C(Γ)) ⊆ℬ_ C ∖Γ(Θ, Θ^𝖼∪ N_ C(Γ)). Consequently, we deduce that ℬ_ C ∖Γ_ 1({b}∪Θ, Θ^𝖼∪ N_ C(Γ_ 1)) ∩ℬ_ C ∖Γ_ 2({b}∪Θ, Θ^𝖼∪ N_ C(Γ_ 2)) = ∅.
Let b ∈ V(C∖[n]) and a ∈ [n]. Then:

w_G = ∑_Θ⊆[n] ∑_Γ⊆Θ^𝖼 { w_C∖Γ(Θ, Θ^𝖼 ∪ N_C(Γ)) ∑_Ψ⊆Θ^𝖼∖Γ w_H∖Ψ(Θ, Θ^𝖼 ∪ N_H(Ψ)) }

w_G(b) = ∑_Θ⊆[n] ∑_Γ⊆Θ^𝖼 { w_C∖Γ({b}∪Θ, Θ^𝖼 ∪ N_C(Γ)) ∑_Ψ⊆Θ^𝖼∖Γ w_H∖Ψ(Θ, Θ^𝖼 ∪ N_H(Ψ)) }

w_G(a) = ∑_Θ⊆[n] ∑_Γ⊆Θ^𝖼 { w_C∖Γ({a}∪Θ, Θ^𝖼 ∪ N_C(Γ)) ∑_Ψ⊆Θ^𝖼∖Γ w_H∖Ψ({a}∪Θ, Θ^𝖼 ∪ N_H(Ψ)) }
We will prove Equation <ref> by showing that

ℬ_G = { B_1 ∪ B_2 | B_1 ∈ ℬ_C∖Γ(Θ, Θ^𝖼 ∪ N_C(Γ)), B_2 ∈ ℬ_H∖Ψ(Θ, Θ^𝖼 ∪ N_H(Ψ)), Θ ⊆ [n], Γ ⊆ Θ^𝖼, Ψ ⊆ Θ^𝖼∖Γ }.
For simplicity, let us denote the family on the right-hand side by 𝒟. Consider any member set B_1∪ B_2 in 𝒟. It is straightforward to observe that B_1∪ B_2 forms a stable set in G. If this set is not maximal in G, then there must exist a vertex v ∈ V(G) such that {v}∪ B_1∪ B_2 is also a stable set. However, due to the maximality of B_1 and B_2 in the subgraphs C ∖Γ and H ∖Ψ, respectively, the vertex v cannot belong to either of these subgraphs. Since C ∖Γ∪ H ∖Ψ = G, there is no such vertex v in the graph G. Therefore, B_1∪ B_2 must be a maximal stable set in G, implying that B_1∪ B_2∈ℬ_ G and hence 𝒟⊆ℬ_ G.
Now, consider any B ∈ ℬ_G. Define the subsets Θ ⊆ [n] and Θ^𝖼 = [n] ∖ Θ by setting Θ = B ∩ [n]. Next, define Γ, Ψ ⊆ Θ^𝖼 as the sets of vertices in [n] such that N_C(Γ) ∩ B = ∅ and N_H(Ψ) ∩ B = ∅, respectively. It is evident that Γ ∩ Ψ = ∅, because if there were a vertex v ∈ Θ^𝖼 with N_G(v) ∩ B = (N_C(v) ∪ N_H(v)) ∩ B = ∅, this would contradict the maximality of B as a stable set (since neither v nor any of its neighbors would be included in B). Furthermore, B ∩ (C ∖Γ) contains all vertices in Θ but excludes vertices from both Θ^𝖼 and N_C(Γ). Notably, this set is stable since it is a subset of B.
Assume that B ∩ (C ∖Γ) is not maximal in C ∖Γ, so that there exists a vertex v ∈ V(C ∖Γ) such that v ∉ B and ({v}∪ B) ∩ (C ∖Γ) is a stable set. For each u ∈ Θ^𝖼∖Γ, by definition, N_C(u) ∩ B ≠ ∅, which implies that v cannot belong to Θ^𝖼∖Γ. Consequently, v ∉ [n]. Note that v also does not belong to N_C(Θ), as ({v}∪ B) ∩ (C ∖Γ) remains stable. From this, one can observe that ({v}∪ B) ∩ (H ∖Ψ) is also stable, and consequently, B ∪{v} = (({v}∪ B) ∩ (C ∖Γ)) ∪ (({v}∪ B) ∩ (H ∖Ψ)) forms a stable set in G, which contradicts the maximality of B. Therefore, B ∩ (C ∖Γ) ∈ ℬ_C∖Γ(Θ, Θ^𝖼 ∪ N_C(Γ)). Similarly, one can show that B ∩ (H ∖Ψ) ∈ ℬ_H∖Ψ(Θ, Θ^𝖼 ∪ N_H(Ψ)), and hence B ∈ 𝒟. This implies that ℬ_G ⊆ 𝒟.
Given that ℬ_ G = 𝒟, and with the application of Lemma <ref>, we derive Equation <ref>. The proof of Equation <ref> follows by identifying the member sets of both families that include b as an element. Similarly, Equation <ref> can be established using analogous reasoning.
We will determine the number of maximal stable sets using Lemma <ref>. First, let's apply Lemma <ref> to Equation <ref>.
w_G = ∑_Θ⊆[n] ∑_Γ⊆Θ^𝖼 { w_C∖Γ(Θ, Θ^𝖼 ∪ N_C(Γ)) ∑_Ψ⊆Θ^𝖼∖Γ w_H∖Ψ(Θ, Θ^𝖼 ∪ N_H(Ψ)) }

    = ∑_Θ⊆[n] ∑_Γ⊆Θ^𝖼 { w_C(Θ∪Γ, Θ^𝖼∖Γ) ∑_Ψ⊆Θ^𝖼∖Γ w_H(Θ∪Ψ, Θ^𝖼∖Ψ) }
Notice that

∑_Ψ⊆Θ^𝖼∖Γ w_H(Θ∪Ψ, Θ^𝖼∖Ψ) = ∑_Ψ⊆Θ^𝖼∖Γ w_H(Θ∪Ψ, Γ ∪ ((Θ^𝖼∖Γ)∖Ψ)) = w_H(Θ, Γ)
Observe that this identity holds because we are summing over all possible subsets Ψ⊆Θ^𝖼∖Γ. Next, we introduce a new variable Ω = Θ∪Γ, with Ω^𝖼 = [n] ∖Ω = Θ^𝖼∖Γ. This allows us to express w_ C(Θ∪Γ, Θ^𝖼∖Γ) as w_ C(Ω, Ω^𝖼). We can now apply this change of variables after substituting (<ref>) into equation (<ref>).
w_G = ∑_Θ⊆[n] ∑_Γ⊆Θ^𝖼 w_C(Θ∪Γ, Θ^𝖼∖Γ) w_H(Θ, Γ)   [Substituting Equation <ref>]

    = ∑_Ω⊆[n] ∑_Θ⊆Ω w_C(Ω, Ω^𝖼) w_H(Θ, Ω∖Θ)   [Change of variables]

    = ∑_Ω⊆[n] { w_C(Ω, Ω^𝖼) ∑_Θ⊆Ω w_H(Θ, Ω∖Θ) }

    = ∑_Ω⊆[n] w_C(Ω, Ω^𝖼) w_H

    = w_C w_H
In a similar manner, it can be deduced that w_ G(b) = w_ C(b)w_ H for any b ∈ C ∖ [n]. When a ∈ [n], for every Γ with a ∈Γ, we have w_ C ∖Γ({a}∪Θ, Θ^𝖼∪ N_ C(Γ)) = 0. Using this fact and after similar calculations, one can find that w_ G(a) ≤ w_ C(a)w_ H(a). Let r ∈ C be a rare vertex in C. If r ∈ C ∖ [n], we have w_ G-2w_ G(r)=w_ Cw_ H-2w_ C(r)w_ H=(w_ C-2w_ C(r))w_ H≥ 0. Now, if r ∈ [n], then w_ G-2w_ G(r)≥ w_ Cw_ H-2w_ C(r)w_ H(r)≥(w_ C-2w_ C(r))w_ H≥ 0. Therefore, every rare vertex in C is also rare in G.
The theorem's implications extend to solving the conjecture for previously unexplored graph classes, opening up new avenues of research. Now, let's discuss some of the notable consequences stemming from this result. All results concerning the set version of the union closed conjecture presented below are directly derived from the corresponding graph results by applying Proposition <ref> within the respective graph results.
Let ℱ_ 1 and ℱ_ 2 be two families of sets that do not share any common sets. Let A denote the set of elements that are common to both families, i.e., A = U(ℱ_ 1) ∩ U(ℱ_ 2). Assume that every S ∈ℱ_ 1∪ℱ_ 2 with S ∩ A ≠∅ contains an element of frequency 1. If ⟨ℱ_ 1⟩ or ⟨ℱ_ 2⟩ satisfies the union closed conjecture, then ⟨ℱ_ 1∪ℱ_ 2⟩ also satisfies the conjecture.
Let G be a bipartite graph and v ∈ V(G) be such that there exists at least one pendant vertex adjacent to v and all the non-pendant neighbors of v are 2-layered. Then G satisfies Frankl's conjecture. Furthermore, the vertex v and all of its neighbors are rare in G.
Define P = {u | u ∈ N_ G(v), d(u) = 1}, where d(u) denotes the degree of vertex u. Let C be the subgraph induced by the vertex subset {v} ∪ N_ G(v), while H is the subgraph induced by the vertex subset V(G) ∖ (P ∪{v}). Clearly, V(H) ∩ V(C) = N_ G(v) ∖ P, which corresponds to the non-pendant neighbors of v.
Observe that all vertices in N(v) ∖ P are 2-layered vertices, and {C, H} forms a decomposition of G. It is important to note that C is a star graph, implying that all of its vertices are rare within C. Consequently, by applying Theorem <ref>, we can conclude that both vertex v and its neighbors in G are rare in G, thereby establishing that G satisfies the conjecture.
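As a concrete illustration, consider the path p-v-u-w-q: here v has the pendant neighbor p, and its non-pendant neighbor u is 2-layered, since both of u's neighbors (v and w) have adjacent pendants. The following self-contained sketch (a toy example, not from the original text) confirms by brute force that v and all of its neighbors are rare, as the corollary predicts.

```python
from itertools import combinations

def maximal_stable_sets(vertices, edges):
    """Brute-force enumeration of maximal stable sets (small graphs only)."""
    vs = list(vertices)
    def stable(s):
        return not any(x in s and y in s for x, y in edges)
    cand = [frozenset(c) for r in range(len(vs) + 1)
            for c in combinations(vs, r) if stable(set(c))]
    return [s for s in cand if not any(s < t for t in cand)]

V = {'p', 'v', 'u', 'w', 'q'}
E = [('p', 'v'), ('v', 'u'), ('u', 'w'), ('w', 'q')]
B = maximal_stable_sets(V, E)         # {p,u,q}, {p,w}, {v,w}, {v,q}
for x in ['v', 'p', 'u']:             # v and its neighbors p, u
    w_x = sum(1 for s in B if x in s)
    print(x, len(B), w_x, "rare" if len(B) >= 2 * w_x else "abundant")
    assert len(B) >= 2 * w_x          # each is in at most half of the sets
```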
Let ℱ be a family of sets. Consider a member set A ∈ℱ that contains at least one element with frequency one. Suppose that for every set S ∈ℱ, where S ∩ A ≠∅, there exists an element within S that has frequency one. Under these conditions, the family ⟨ℱ⟩ satisfies the union closed conjecture, with all elements of A being abundant.
Let G be a bipartite graph with no isolated vertices, in which every vertex in one of its bipartite classes is adjacent to at least one pendant vertex. Then, G satisfies Frankl's conjecture. In fact, all the vertices of G are rare.
Let ℱ be a family of sets, each of whose member sets contains an element of frequency 1. Then ⟨ℱ⟩ satisfies the union closed conjecture.
Suppose we have a collection of bipartite graphs G_ 1, G_ 2, …, G_ k, where no two graphs share any common vertices. Additionally, let v_ 1, v_ 2, …, v_ k denote the 2-layered vertices within G_ 1, G_ 2, …, G_ k, respectively. By merging these k vertices, we obtain a new graph G. Under this construction, any vertex that is rare in one of the original graphs will remain rare in the composite graph G.
Let v be the vertex in G resulting from the merging of all k vertices. It is evident that v is a 2-layered vertex, allowing us to apply Theorem <ref>.
Let ℱ_ 1, ℱ_ 2, …, ℱ_ k be a collection of families of sets such that their universes are mutually exclusive. From each family ℱ_ i, choose an element a_ i such that every member set containing a_ i includes an element with frequency 1. Now, set a = a_ 1 = a_ 2 = … = a_ k and define ℱ = ⋃_i=1^kℱ_ i. Under this construction, all abundant elements in any of the ⟨ℱ_ i⟩ remain abundant in ⟨ℱ⟩.
§ ACKNOWLEDGMENTS
I would like to thank my adviser Rogers Mathew for guiding me along the right track.
arXiv:2409.02984v1 [quant-ph], 4 September 2024 (cross-listed: cond-mat.mes-hall, cond-mat.quant-gas, cond-mat.str-el, physics.atom-ph)

Quantum circuits based on topological pumping in optical lattices

Zijie Zhu, Yann Kiefer, Samuel Jele, Marius Gächter, Giacomo Bisson, Konrad Viebahn, and Tilman Esslinger

Email: [email protected]
Institute for Quantum Electronics & Quantum Center, ETH Zurich, 8093 Zurich, Switzerland
§ ABSTRACT
Gate operations composed in quantum circuits form the basis of digital quantum simulation <cit.> and quantum processing <cit.>.
While two-qubit gates generally operate between nearest neighbours, many circuits require non-local connectivity, necessitating some form of quantum information transport, such as the repeated application of swap gates <cit.> or qubit shuttling <cit.>.
Preserving motional coherence during such transport remains a key challenge to improve gate fidelity <cit.> and qubit connectivity, as well as to connect local fermionic modes <cit.>.
Here we combine tuneable gate operations between fermionic potassium-40 atoms – based on superexchange interaction – with their bidirectional transport via topological Thouless pumping in an optical lattice <cit.>.
We demonstrate shuttling of atomic singlet pairs with a single-shift fidelity of 99.57(4)% over 50 lattice sites.
We spatially and coherently split a large number of randomly distributed fermionic spin singlet pairs and show (swap)^α-gate operations between atoms encountering each other during transport.
As a signature of entanglement between fermions separated over large distances and interwoven with each other, we observe multi-frequency singlet-triplet oscillations.
Topological pumping is generally applicable to long-lived atomic and molecular states, and specifically overcomes lifetime limitations inherent to transport using state-dependent optical lattices <cit.>.
Our work opens up new avenues for transport of quantum information and offers unprecedented possibilities for engineering connectivity in quantum circuits, including approaches based on fermionic modes <cit.>, as well as for atom interferometry <cit.>.
Received May 17, 2024; accepted July 29, 2024
Optical lattices are a unique tool to prepare and position a large number of atoms in well-defined internal and motional states <cit.>, including the control over quantum tunnelling between lattice sites.
A crucial step towards employing optical lattices as a framework for quantum processing consists in transporting atoms between distant lattice sites <cit.>, leading to graph connectivity <cit.>.
However, lattice-based atom shuttling, either via state-dependent light shifts <cit.>, or via additional scanning tweezers <cit.>, has remained experimentally challenging, due to motional heating or atom loss.
Advances in the three areas of i) fast generation of degenerate quantum gases <cit.>, ii) dipolar and molecular interaction mechanisms <cit.>, and iii) novel concepts utilising fermionic modes for quantum computing <cit.>, have further increased the urgency to find robust methods to transport quantum information in optical lattices.
In this Letter we introduce bidirectional topological Thouless pumping <cit.> in combination with controlled superexchange interactions <cit.> as a coherent toolbox to transport, separate, interweave, and recombine atomic quantum states in an optical lattice.
The dynamic lattice potential creates an array of independent one-dimensional tubes along x, each realising a Thouless pump along its axis, providing robust and state-independent adiabatic transport <cit.>, applicable to any type of polarisable quantum particles.
The lattice is characterised by tunnelling parameters t_x and t_x' on alternating bonds, and alternating lattice sites exhibit an offset energy ±Δ.
Thouless pumping is achieved via a periodic modulation of the lattice potential, cycling through the staggered (Δ=Δ_0, t_x=t_x') and the dimerised (Δ=0, t_x≠ t_x') configurations, which adiabatically shuttles the atoms within the lattice (Methods).
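This cyclic modulation is the standard Rice-Mele pump, and its quantised transport can be checked numerically. The sketch below (a minimal illustration with assumed parameters t0, d0, Delta0 and an assumed parametrisation of the cycle, not the experimental calibration) computes the Chern number of the lowest band on a discretised (k, τ) torus using the Fukui-Hatsugai-Suzuki plaquette method; the result ±1 (the sign depends on the cycle's orientation) corresponds to a displacement of one unit cell per pump period.

```python
import numpy as np

t0, d0, Delta0 = 1.0, 0.5, 1.0     # mean hopping, dimerisation, offset (assumed units)

def h(k, tau):
    """Rice-Mele Bloch Hamiltonian along one assumed pump cycle tau in [0, 1)."""
    delta = d0 * np.sin(2 * np.pi * tau)       # t_x = t0 + delta, t_x' = t0 - delta
    Delta = Delta0 * np.cos(2 * np.pi * tau)   # staggered offset +/- Delta
    off = (t0 + delta) + (t0 - delta) * np.exp(-1j * k)
    return np.array([[Delta, off], [np.conj(off), -Delta]])

def lowest_band_chern(N=60):
    """Fukui-Hatsugai-Suzuki Chern number on an N x N (k, tau) grid."""
    ks = np.linspace(0, 2 * np.pi, N, endpoint=False)
    ts = np.linspace(0, 1, N, endpoint=False)
    u = np.empty((N, N, 2), dtype=complex)
    for i, k in enumerate(ks):
        for j, tau in enumerate(ts):
            _, vecs = np.linalg.eigh(h(k, tau))
            u[i, j] = vecs[:, 0]               # lowest-band eigenvector
    F = 0.0
    for i in range(N):                          # sum plaquette Berry phases
        for j in range(N):
            u1, u2 = u[i, j], u[(i + 1) % N, j]
            u3, u4 = u[(i + 1) % N, (j + 1) % N], u[i, (j + 1) % N]
            prod = (np.vdot(u1, u2) * np.vdot(u2, u3)
                    * np.vdot(u3, u4) * np.vdot(u4, u1))
            F += np.angle(prod)
    return round(F / (2 * np.pi))

print("lowest-band Chern number:", lowest_band_chern())   # expect +/-1
```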
As a source of entanglement, we prepare atomic spin singlet pairs from a quantum degenerate two-component cloud of fermionic potassium-40 atoms inside the dynamical optical lattice.
With the singlet pairs in the lowest band of the topological pump, we unidirectionally transport the pairs across 50 lattice sites with a measured single-shift fidelity of 99.57(4)%.
Conversely, when ramping to strongly repulsive interactions between the atomic states, we adiabatically transfer each singlet pair into a correlated Bell state (|↓,↑⟩-|↑,↓⟩)/√(2) (`Heisenberg singlet'), where |↑,↓⟩ denotes a state where the spin-↑ atom occupies the left site and the spin-↓ atom occupies the right site of a double-well.
When considering the Bloch band basis instead of the Fock basis, exactly one atom occupies the lower and the other occupies the upper band of the topological pump, each experiencing opposite Chern numbers (C = ± 1, Fig. <ref>a).
Topological pumping therefore transports the two correlated components in opposite directions, separating them further with each pumping cycle.
We analyse the entangled two-particle state and its spatial separation in a measurement sequence.
To this end, the pump is stopped after several cycles and a magnetic field gradient is applied in the transport direction for a variable time.
The two spins that form the entangled state, m_F=-9/2 (↓) and -5/2 (↑) [F=9/2], have different magnetic moments.
Therefore, the energy splitting caused by the magnetic gradient increases with spatial separation of the two magnetic moments, leading to oscillations between the singlet and the triplet states (singlet-triplet oscillation <cit.>, STO).
By subsequently reversing the topological pumping direction, the two components of the correlated state reconvene at their original position, and we determine the singlet fraction of the state using a double-occupancy measurement protocol (Methods).
The two counterpropagating modes in the topological pump connect atoms originating from spatially separated lattice sites and encounters act as two-particle beam splitters with the coupling being adjustable via superexchange interaction between atoms of different spin.
In the language of quantum information, these beam splitters represent partial swap gates (swap)^α, acting on two qubits in the {↑,↓} basis, where α is fully tuneable.
For all values of α∉ℕ, (swap)^α gates are entangling and, when combined with single-qubit rotations <cit.>, would allow the construction of a universal gate set for quantum computation <cit.>.
For α = 1, the atoms pass effectively through each other and the entanglement between pairs can be rearranged, giving rise to spatially interwoven correlation patterns (Fig. <ref>b).
The reversal protocol described above demonstrates that the correlations remain intact through the circuit.
For α = 2, atoms reflect off each other, giving rise to quantum states that exhibit multi-frequency STOs.
The combination of parallel layers of gate operations constitutes the assembly of quantum circuits using neutral atoms in optical lattices (Fig. <ref>b).
The indistinguishable nature of the fermionic constituents enables two complementary (but equivalent) interpretations of the action of two-particle gates.
In the first interpretation, operations act on the spin-{↑,↓} degrees of freedom and a swap gate can be understood as two atoms swapping their spin states.
Strongly repulsive interactions prevent atoms from passing through each other, allowing individual atoms to be labelled by their order, which could be used to construct a quantum circuit representation based on effectively distinguishable qubits.
In the second, or `motional', interpretation a swap operation exchanges the atoms' positions and atoms can become delocalised in the lattice through actions such as √( swap), which is relevant for fermionic quantum computing <cit.>.
In a first experiment, we benchmark shuttle and gate operations. We assess the fidelity of shuttling by transporting paired atoms with opposite spins (Fig. <ref>).
Strong attractive interactions during the loading stage ensures that between 60% and 75% of the total number of 5.3(2)×10^4 atoms start off in doubly occupied unit cells.
In the weakly interacting regime (Hubbard U<2Δ_0) paired atoms can be shuttled together by one lattice site every half pump period, which we define as one operation cycle.
This is confirmed by in-situ measurements of the cloud displacement (Fig. <ref>b).
During the shuttling process, double occupancy—i.e., two atoms occupying the same site and orbital—varies in time since the atoms become alternatingly distributed over two sites and localised on one site (Fig. <ref>a).
In particular, the double occupancy reaches its maximum in the staggered lattice configuration (Δ=±Δ_0) and its minimum in the dimerised configuration (Δ=0), as shown in Fig. <ref>c.
Since the spectroscopic measurement of double occupancy is orbital-selective (Methods), this provides a microscopic observable to evaluate the fidelity of the shuttling operation.
A decrease in double occupancy indicates that an atom from a pair has been excited to higher bands, left behind, or has tunnelled in transverse directions.
The experimental sequence for Fig. <ref>c involves pumping forward and then returning to the original position by reversing the pump direction, ensuring consistent detection efficiency in the centre of the trap.
Exponential fits of the double occupancy in the staggered and dimerised lattice configurations yield a fidelity of F=0.9957(4) for a single shuttle operation by one lattice site (Methods).
Interparticle interactions allow the programmable application of gate operations when two atoms moving in opposite directions enter the same double well (Fig. <ref>d).
Specifically, the regime of strong repulsive Hubbard interactions between spins (Hubbard U ≫ t_x, t_x', Δ) gives rise to a `superexchange' process, characterised by J_ex = 4t_x^2/[U(1-(2Δ/U)^2)] <cit.> (Methods).
In our implementation, the superexchange coupling J_ex(τ) is negligible in the staggered configuration (-T/4 and T/4) due to small tunnelling t_x and finite Δ.
It increases to its maximum in the dimerised configuration (0T) and then decreases again (Fig. <ref>), resulting in discrete gate operations, which last half a pump period, equivalent to one operation cycle.
By changing the control parameters between cycles, we can build gate sequences over time.
The superexchange gate operations can be represented on a two-particle Bloch sphere, with the `Heisenberg' singlet state |s⟩=(|↓,↑⟩-|↑,↓⟩)/√(2) and the triplet state |t⟩=(|↓,↑⟩+|↑,↓⟩)/√(2) at the poles.
The equatorial states are the two product states (|↓,↑⟩, |↑,↓⟩), as well as |i_-⟩=(|↓,↑⟩-i|↑,↓⟩)/√(2) and |i_+⟩=(|↓,↑⟩+i|↑,↓⟩)/√(2) (Fig. <ref>e).
The fermionic superexchange Hamiltonian Ĥ_ex=J_exσ̂_z/2 induces a rotation around the z-axis by an angle φ=1/ħ∫_-T/4^T/4J_ex(τ) dτ, where σ̂_z is the third Pauli matrix.
Precise control of φ enables the realisation of the entire family of partial swap gates, such as √( swap) (φ=π/2), swap (φ=π), √( swap)^† (φ=3π/2), and ( swap)^2 (φ=2π) gates (see also refs. <cit.>).
Here we have simplified the gate action by considering only two opposite spins.
However, since all three triplet states are degenerate, the realisation of the partial swap gates remains valid in the full two-particle Hilbert space, including |↓,↓⟩ and |↑,↑⟩ (Methods).
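Because the superexchange only dephases the singlet relative to the degenerate triplet sector, the resulting gate can be written with sector projectors. The following numpy sketch (a minimal check; the values of t_x, U and Δ in the J_ex example are assumptions for illustration, with ħ = 1) verifies that, up to a global phase, a rotation by φ implements (swap)^(φ/π): φ = π gives swap, φ = π/2 squares to swap, and φ = 2π acts as the identity, i.e. as (swap)².

```python
import numpy as np

def J_ex(tx, U, Delta):
    """Superexchange coupling J_ex = 4 t_x^2 / [U (1 - (2 Delta / U)^2)]."""
    return 4 * tx**2 / (U * (1 - (2 * Delta / U) ** 2))

# Two-qubit basis ordering: |dd>, |du>, |ud>, |uu>.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
P_s = 0.5 * np.array([[0, 0, 0, 0],          # projector onto the singlet |s>
                      [0, 1, -1, 0],
                      [0, -1, 1, 0],
                      [0, 0, 0, 0]], dtype=complex)
P_t = np.eye(4) - P_s                        # projector onto the triplet sector

def partial_swap(phi):
    """Superexchange evolution for rotation angle phi, global phase dropped."""
    return P_t + np.exp(1j * phi) * P_s

assert np.allclose(partial_swap(np.pi), SWAP)                                # swap
assert np.allclose(partial_swap(np.pi / 2) @ partial_swap(np.pi / 2), SWAP)  # sqrt(swap)
assert np.allclose(partial_swap(2 * np.pi), np.eye(4))                       # (swap)^2
print("example J_ex:", J_ex(tx=0.1, U=2.0, Delta=0.5))   # illustrative numbers only
```

Writing the gate as P_t + e^{iφ}P_s makes explicit why the construction carries over to the full two-particle Hilbert space: |↓,↓⟩ and |↑,↑⟩ simply live in the triplet sector and acquire no relative phase.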
In order to calibrate the gate operations and determine the rotation angle φ, we first prepare atom pairs in the product state |↑,↓⟩. Next, we pump for one half period, realising one gate operation as shown in Fig. <ref>d (-T/4 to T/4), during which the state evolves on the equator of the two-particle Bloch sphere. Finally, we determine φ by measuring the projection of the resulting state along the y-axis of the Bloch sphere (Methods).
We present two different methods to engineer the gate operations. In Fig. <ref>f, we vary the pump period T to control the interaction duration.
In Fig. <ref>g, we realise different gates by adjusting the lattice depth V_X and thus the superexchange coupling J_ex.
The insets indicate the parameters used for implementing the respective gates, determined from the fitted curves shown in solid red lines.
With full control of shuttle and gate operations, we are able to implement quantum circuits. The first circuit we realise consists solely of swap gates. It separates the initially prepared singlet pairs and swaps the spin state whenever two atoms meet.
This swap process can also be envisioned as two atoms exchanging positions, effectively passing through each other due to the indistinguishability of the fermions employed.
This is particularly useful for rearrangement purposes, especially given the strongly repulsive interactions between the fermions used here.
As a result, the output state consists of interwoven singlet pairs, each separated by s=2N_cyc+1 lattice sites, where N_cyc is the number of operation cycles. In Fig. <ref>a, we show an exemplary circuit for N_cyc=3, outputting singlet pairs separated by 7 sites.
To measure the separation between two entangled atoms from an initial singlet pair, we apply a magnetic gradient Δ B for a specified time τ_STO.
This produces an energy offset Δ_↑↓∝Δ B × s between |↓,↑⟩ and |↑,↓⟩ states, corresponding to the Hamiltonian Ĥ_STO=Δ _↑↓σ̂_x/2 on the two-particle Bloch sphere (Fig. <ref>e), where σ̂_x is the first Pauli matrix.
This leads to an oscillation between the singlet and triplet states at a frequency of Δ_↑↓/h, which is proportional to the separation s (Fig. <ref>b).
After the STO, we reverse the pump to bring the entangled pairs back together for detection.
This sequence can be viewed as a many-particle interferometer.
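The interferometric signal itself follows from a two-level calculation on this Bloch sphere. In the sketch below (ħ = 1; the evolution time is illustrative, and only the base frequency quoted further below is reused), the state is initialised in |s⟩ and evolved under Ĥ_STO; the singlet fraction oscillates as cos²(π s f_1 τ), i.e. at a frequency proportional to the separation s.

```python
import numpy as np

f1 = 216.5                                   # base STO frequency in Hz (from the text)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def singlet_fraction(tau, s):
    """Singlet fraction after time tau for a pair separated by s sites (hbar = 1)."""
    delta = 2 * np.pi * s * f1               # splitting Delta_ud grows with separation s
    vals, vecs = np.linalg.eigh(delta / 2 * sx)
    U = vecs @ np.diag(np.exp(-1j * vals * tau)) @ vecs.conj().T
    psi = U @ np.array([0, 1], dtype=complex)    # start in |s> on the {|t>, |s>} sphere
    return abs(psi[1]) ** 2                      # analytically cos^2(pi * s * f1 * tau)

tau = 1 / (2 * f1)                           # half a base period
print([round(singlet_fraction(tau, s), 3) for s in (1, 2, 3)])   # [0.0, 1.0, 0.0]
```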
In Fig. <ref>c, we present singlet-triplet oscillations for increasing numbers of operation cycles ranging from N_cyc=0 to N_cyc=5, corresponding to increasing atom separations s. We determine the oscillation frequency by fitting a sinusoidal curve to each time trace.
The results are shown in Fig. <ref>d, along with additional data up to s=19 sites, whose corresponding time traces are plotted in Fig. <ref>.
All measurements in Fig. <ref>c and Fig. <ref> are performed under the same gradient Δ B, whereas a reduction in contrast as a function of s is attributed to fluctuating lattice depths and resulting deviations from perfect swap operations.
A proportional fit of the data yields a slope of f_1=216.5(6)Hz, which gives the STO base frequency corresponding to entangled pairs on adjacent lattice sites. Pairs separated by s sites thus exhibit STOs at a frequency of s× f_1. We then map the time trace to the distribution of separations s by calculating the Fourier spectrum of the time trace.
Given that s must be an integer in lattice systems, we apply a multi-frequency sinusoidal fit F_singlet(τ)=Σ_sA_ssin(2π s f_1τ+θ_s), incorporating a global damping and offset.
The amplitude A_s is then proportional to the fraction of singlet pairs separated by a certain distance s.
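In practice such a fit can be set up as follows. The sketch below runs on synthetic data (the amplitudes, phases, damping time, and the candidate separation list are made-up stand-ins; only the base frequency is the fitted value quoted above) and uses scipy's curve_fit to recover the weights A_s; as for any multi-frequency fit, the result depends on reasonable initial guesses.

```python
import numpy as np
from scipy.optimize import curve_fit

f1 = 216.5          # STO base frequency in Hz (value quoted in the text)
seps = [1, 7]       # candidate integer separations s included in the model

def model(tau, offset, tau_d, *params):
    """F_singlet(tau) = offset + exp(-tau/tau_d) * sum_s A_s sin(2 pi s f1 tau + theta_s)."""
    amps, phases = params[:len(seps)], params[len(seps):]
    osc = sum(A * np.sin(2 * np.pi * s * f1 * tau + th)
              for A, s, th in zip(amps, seps, phases))
    return offset + np.exp(-tau / tau_d) * osc

rng = np.random.default_rng(0)
tau = np.linspace(0, 0.02, 200)                        # a 20 ms synthetic trace
truth = model(tau, 0.5, 0.05, 0.05, 0.30, 0.1, 0.2)    # mostly pairs at s = 7
data = truth + 0.01 * rng.normal(size=tau.size)

p0 = [0.5, 0.05, 0.1, 0.1, 0.0, 0.0]                   # initial guesses
popt, _ = curve_fit(model, tau, data, p0=p0)
A = dict(zip(seps, popt[2:2 + len(seps)]))
print({s: round(abs(a), 3) for s, a in A.items()})     # recovers A_7 >> A_1
```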
In Fig. <ref>e, we show the amplitude A_s as a function of s from 1 to 12 lattice sites for N_cyc=0, 2, 3, 5. The dominant peaks at s=1, 5, 7, and 11 sites demonstrate the ability to coherently split fermionic singlet pairs by programmable distances.
We now turn to more complex quantum circuits that involve combinations of swap and ( swap)^2 gates. A ( swap)^2 gate does not alter the spin states when two atoms meet and can thus be considered a reflection. Incorporating ( swap)^2 processes into the circuit results in more intricate STO waveforms, indicating a non-trivial distribution of singlet pairs.
In Fig. <ref>, we present the measured STOs of three different circuits with five gate operation cycles each (N_cyc=5).
In the first example, we insert a single parallel ( swap)^2 operation at the second position of an otherwise purely swap sequence, leading to the emergence of multiple frequency components in the STO signal (Fig. <ref>a). We analyse the time trace analogously to Fig. <ref>e, using an independently calibrated base frequency of f_1'=218(1)Hz.
The resulting amplitudes A_s, representing the proportion of singlet pairs separated by s lattice sites, are shown in Fig. <ref>d.
We also apply a fast Fourier transform to the time trace (Fig. <ref>), which agrees with the fit.
We observe a significant contribution at s=4, in addition to s=2×5+1=11 lattice sites.
This can be explained by considering that during the initial operation cycle, two atoms originating from a singlet pair are separated by 2×1+1=3 lattice sites.
In the second operation cycle, one atom can be reflected while the other continues moving, resulting in a separation of 3+1=4 sites.
From then on, the two atoms shift in the same direction and their separation remains constant.
If both atoms originating from a singlet pair are reflected in the second operation cycle, they will be shuttled in opposite directions during the next three cycles, resulting in a separation of | 3-2×3 |=3 lattice sites.
This scenario is considerably less probable, in agreement with the low measured value at s=3.
Both scenarios can be identified with trajectories in the circuit diagram shown in Fig. <ref>g.
The occurrence count of specific distances in the schematic (output) does not directly reflect the amplitudes A_s since the input state of the circuit is randomly initialised for every experimental realisation.
While STOs have also been observed on neighbouring lattice sites with SU(N) fermions <cit.>, in our experiment the occurrence of multiple frequencies is a direct consequence of the spatial distribution of singlet pairs.
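The bookkeeping behind these scenarios can be reproduced with a toy trajectory model, shown below. It is an illustration of the pass/reflect rules described above rather than the experiment's microscopic model: pairs are placed on random unit cells, a swap layer lets converging atoms pass through each other, and a (swap)² layer reflects them with no net shift during that cycle. Histogramming the final pair separations reproduces the s = 3, 4, and 11 components discussed above, with relative weights set by the random filling.

```python
import random
from collections import Counter

def run_circuit(gates, n_cells=40, fill=0.7, seed=1):
    """Toy model: pairs split into counter-propagating partners; 'swap' layers
    let converging atoms pass through, 'swap2' layers reflect them in place."""
    random.seed(seed)
    atoms = []                                   # each atom: [position, direction, pair_id]
    for cell in range(n_cells):
        if random.random() < fill:
            atoms.append([2 * cell, -1, cell])   # left partner pumps leftwards
            atoms.append([2 * cell + 1, +1, cell])
    for gate in gates:                           # one parallel gate layer per cycle
        frozen = set()
        if gate == "swap2":
            pos = {a[0]: a for a in atoms}
            meets = [(a, pos[a[0] + 1]) for a in atoms
                     if a[1] == +1 and a[0] + 1 in pos and pos[a[0] + 1][1] == -1]
            for a, b in meets:                   # reflect: flip direction, no net shift
                a[1], b[1] = -1, +1
                frozen |= {id(a), id(b)}
        for a in atoms:
            if id(a) not in frozen:
                a[0] += a[1]
    pairs = {}
    for p, _, pid in atoms:
        pairs.setdefault(pid, []).append(p)
    return Counter(abs(x - y) for x, y in pairs.values())

print(run_circuit(["swap"] * 5))                               # all pairs at s = 11
print(run_circuit(["swap", "swap2", "swap", "swap", "swap"]))  # weight at s = 3, 4 and 11
print(run_circuit(["swap2"] * 5))                              # mostly small separations
```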
Changing the gate composition of quantum circuits leads to additional contributions to the multi-frequency STOs.
For instance, Fig. <ref>b shows the time trace for a situation with two parallel ( swap)^2 gates, at the third and fourth position in the sequence.
The process involving a single reflection at N_cyc=3 or 4 results in two major contributions to the STO signal, corresponding to s=6 and s=8 lattice sites.
The process involving multiple reflections gives rise to signals at s=1, 3, 4, etc.
In addition, we implement a quantum circuit consisting solely of ( swap)^2 gates.
In this configuration, neighbouring singlets are constraining each other, generally resulting in small separations.
The time trace (Fig. <ref>c) and the fitted amplitudes A_s (Fig. <ref>f) confirm that the multi-frequency STO is dominated by low-frequency components, corresponding to small values of s.
The remaining fraction of singlets separated by s=11 lattice sites can be attributed to low-density regions near the edges of the system and imperfections in the gate operations.
The striking revival feature in the all-( swap)^2 time trace (Fig. <ref>c) shows the harmonicity of the frequencies and their phase coherence.
Considering that the signal is contributed by more than ten thousand individual entangled pairs, such phase coherence highlights the potential of topological pumping for scaling up quantum circuits.
In conclusion, we have experimentally achieved high-fidelity shuttle operations of indistinguishable fermionic atoms using topological pumping in an optical lattice.
Both motional ground-state coherence and the entanglement in the spin sector remain preserved during pumping, enabling us to transport entangled atoms together or separate them by tens of lattice sites. Utilising superexchange interactions, we further realise two-qubit (swap)^α gates, facilitating the construction of programmable quantum circuits.
Compared to other platforms targeting universal quantum computation, neutral atoms in lattices have a natural advantage in terms of atomic qubit density, scalability, and the ability to perform parallel operations <cit.>.
By improving the connectivity in optical lattices using topological pumping, our work opens up new possibilities for quantum information processing based on atoms and molecules.
This includes applications in fermionic quantum computing <cit.>, symmetry-protected operations <cit.>, exchange-only qubits <cit.>, and computing based on quantum walks <cit.>.
§ ACKNOWLEDGEMENTS
We would like to thank Alex Baumgärtner and Peter Zoller for comments on a previous version of the manuscript.
We thank Alexander Frank for assistance with electronics equipment.
We acknowledge funding by the Swiss National Science Foundation (Grant No. 200020_212168, Advanced grant TMAG-2_209376, as well as Holograph UeM019-5.1), as well as Quantera dynamite PCI2022 132919.
§ REFERENCES

[1] A. J. Daley, I. Bloch, C. Kokail, S. Flannigan, N. Pearson, M. Troyer, and P. Zoller, Practical quantum advantage in quantum simulation, Nature 607, 667 (2022). doi:10.1038/s41586-022-04940-6
[2] K. Bharti, A. Cervera-Lierta, T. H. Kyaw, T. Haug, S. Alperin-Lea, A. Anand, M. Degroote, H. Heimonen, J. S. Kottmann, T. Menke, W.-K. Mok, S. Sim, L.-C. Kwek, and A. Aspuru-Guzik, Noisy intermediate-scale quantum algorithms, Reviews of Modern Physics 94, 015004 (2022). doi:10.1103/RevModPhys.94.015004
[3] D. Loss and D. P. DiVincenzo, Quantum computation with quantum dots, Physical Review A 57, 120 (1998). doi:10.1103/PhysRevA.57.120
[4] Y. P. Kandel, H. Qiao, S. Fallahi, G. C. Gardner, M. J. Manfra, and J. M. Nichol, Coherent spin-state transfer via Heisenberg exchange, Nature 573, 553 (2019). doi:10.1038/s41586-019-1566-8
[5] J. Beugnon, C. Tuchendler, H. Marion, A. Gaëtan, Y. Miroshnychenko, Y. R. P. Sortais, A. M. Lance, M. P. A. Jones, G. Messin, A. Browaeys, and P. Grangier, Two-dimensional transport and transfer of a single atomic qubit in optical tweezers, Nature Physics 3, 696 (2007). doi:10.1038/nphys698
[6] S. Moses et al., A Race-Track Trapped-Ion Quantum Processor, Physical Review X 13, 041052 (2023). doi:10.1103/PhysRevX.13.041052
[7] D. Bluvstein et al., Logical quantum processor based on reconfigurable atom arrays, Nature 626, 58 (2023). doi:10.1038/s41586-023-06927-3
[8] A. Jenkins, J. W. Lis, A. Senoo, W. F. McGrew, and A. M. Kaufman, Ytterbium Nuclear-Spin Qubits in an Optical Tweezer Array, Physical Review X 12, 021027 (2022). doi:10.1103/PhysRevX.12.021027
[9] S. B. Bravyi and A. Y. Kitaev, Fermionic Quantum Computation, Annals of Physics 298, 210 (2002). doi:10.1006/aphy.2002.6254
[10] T. Oka and S. Kitamura, Floquet Engineering of Quantum Materials, Annual Review of Condensed Matter Physics 10, 387 (2019). doi:10.1146/annurev-conmatphys-031218-013423
[11] R. Citro and M. Aidelsburger, Thouless pumping and topology, Nature Reviews Physics 5, 87 (2023). doi:10.1038/s42254-022-00545-0
[12] S. Nakajima, T. Tomita, S. Taie, T. Ichinose, H. Ozawa, L. Wang, M. Troyer, and Y. Takahashi, Topological Thouless pumping of ultracold fermions, Nature Physics 12, 296 (2016). doi:10.1038/nphys3622
[13] M. Lohse, C. Schweizer, O. Zilberberg, M. Aidelsburger, and I. Bloch, A Thouless quantum pump with ultracold bosonic atoms in an optical superlattice, Nature Physics 12, 350 (2016). doi:10.1038/nphys3584
[14] J. Koepsell, S. Hirthe, D. Bourgund, P. Sompet, J. Vijayan, G. Salomon, C. Gross, and I. Bloch, Robust Bilayer Charge Pumping for Spin- and Density-Resolved Quantum Gas Microscopy, Physical Review Letters 125, 010403 (2020). doi:10.1103/PhysRevLett.125.010403
[15] J. Minguzzi, Z. Zhu, K. Sandholzer, A.-S. Walter, K. Viebahn, and T. Esslinger, Topological Pumping in a Floquet-Bloch Band, Physical Review Letters 129, 053201 (2022). doi:10.1103/PhysRevLett.129.053201
[16] O. Mandel, M. Greiner, A. Widera, T. Rom, T. W. Hänsch, and I. Bloch, Controlled collisions for multi-particle entanglement of optically trapped atoms, Nature 425, 937 (2003). doi:10.1038/nature02008
[17] A. Steffen, A. Alberti, W. Alt, N. Belmechri, S. Hild, M. Karski, A. Widera, and D. Meschede, Digital atom interferometer with single particle control on a discretized space-time geometry, Proceedings of the National Academy of Sciences 109, 9770 (2012). doi:10.1073/pnas.1204285109
[18] C. Robens, J. Zopes, W. Alt, S. Brakhane, D. Meschede, and A. Alberti, Low-Entropy States of Neutral Atoms in Polarization-Synthesized Optical Lattices, Physical Review Letters 118, 065302 (2017). doi:10.1103/PhysRevLett.118.065302
[19] A. Kumar, T.-Y. Wu, F. Giraldo, and D. S. Weiss, Sorting ultracold atoms in a three-dimensional optical lattice in a realization of Maxwell's demon, Nature 561, 83 (2018). doi:10.1038/s41586-018-0458-7
[20] D. González-Cuadra, D. Bluvstein, M. Kalinowski, R. Kaubruegger, N. Maskara, P. Naldesi, T. V. Zache, A. M. Kaufman, M. D. Lukin, H. Pichler, B. Vermersch, J. Ye, and P. Zoller, Fermionic quantum processing with programmable neutral atom arrays, Proceedings of the National Academy of Sciences 120, e2304294120 (2023). doi:10.1073/pnas.2304294120
[21] C. Luo, H. Zhang, V. P. W. Koh, J. D. Wilson, A. Chu, M. J. Holland, A. M. Rey, and J. K. Thompson, Momentum-exchange interactions in a Bragg atom interferometer suppress Doppler dephasing, Science 384, 551 (2024). doi:10.1126/science.adi1393
[22] C. D. Panda, M. Tao, J. Egelhoff, M. Ceja, V. Xu, and H. Müller, Coherence limits in lattice atom interferometry at the one-minute scale, Nature Physics 20, 1234 (2024). doi:10.1038/s41567-024-02518-9
[23] A. D. Ludlow, M. M. Boyd, J. Ye, E. Peik, and P. Schmidt, Optical atomic clocks, Reviews of Modern Physics 87, 637 (2015). doi:10.1103/RevModPhys.87.637
[24] C. Gross and I. Bloch, Quantum simulations with ultracold atoms in optical lattices, Science 357, 995 (2017). doi:10.1126/science.aal3837
[25] T. Hartke, B. Oreg, N. Jia, and M. Zwierlein, Quantum register of fermion pairs, Nature 601, 537 (2022). doi:10.1038/s41586-021-04205-8
[26] D. Jaksch, H.-J. Briegel, J. I. Cirac, C. W. Gardiner, and P. Zoller, Entanglement of Atoms via Cold Controlled Collisions, Physical Review Letters 82, 1975 (1999). doi:10.1103/PhysRevLett.82.1975
[27] A. J. Daley, M. M. Boyd, J. Ye, and P. Zoller, Quantum Computing with Alkaline-Earth-Metal Atoms, Physical Review Letters 101, 170504 (2008). doi:10.1103/PhysRevLett.101.170504
[28] A. W. Young, S. Geller, W. J. Eckner, N. Schine, S. Glancy, E. Knill, and A. M. Kaufman, An atomic boson sampler, Nature 629, 311 (2024). doi:10.1038/s41586-024-07304-4
[29] P. Scholl, A. L. Shaw, R. Finkelstein, R. B.-S. Tsai, J. Choi, and M. Endres, Erasure-cooling, control, and hyper-entanglement of motion in optical tweezers, arXiv:2311.15580 (2023).
[30] M. A. Norcia et al., Iterative Assembly of 171Yb Atom Arrays with Cavity-Enhanced Optical Lattices, PRX Quantum 5, 030316 (2024). doi:10.1103/PRXQuantum.5.030316
[31] F. Gyger, M. Ammenwerth, R. Tao, H. Timme, S. Snigirev, I. Bloch, and J. Zeiher, Continuous operation of large-scale atom arrays in optical lattices, Physical Review Research 6, 033104 (2024). doi:10.1103/PhysRevResearch.6.033104
[32] A. Urvoy, Z. Vendeiro, J. Ramette, A. Adiyatullin, and V. Vuletić, Direct Laser Cooling to Bose-Einstein Condensation in a Dipole Trap, Physical Review Letters 122, 203202 (2019). doi:10.1103/PhysRevLett.122.203202
[33] G. A. Phelps, A. Hébert, A. Krahn, S. Dickerson, F. Öztürk, S. Ebadi, L. Su, and M. Greiner, Sub-second production of a quantum degenerate gas, arXiv:2007.10807 (2020).
[34] C. Pür, M. Hetzel, M. Quensen, A. Hüper, J. Geng, J. Kruse, W. Ertmer, and C. Klempt, Rapid generation and number-resolved detection of spinor rubidium Bose-Einstein condensates, Physical Review A 107, 033303 (2023). doi:10.1103/PhysRevA.107.033303
[35] L. Chomaz, I. Ferrier-Barbut, F. Ferlaino, B. Laburthe-Tolra, B. L. Lev, and T. Pfau, Dipolar physics: a review of experiments with magnetic quantum gases, Reports on Progress in Physics 86, 026401 (2023). doi:10.1088/1361-6633/aca814
[36] S. L. Cornish, M. R. Tarbutt, and K. R. A. Hazzard, Quantum computation and quantum simulation with ultracold molecules, Nature Physics 20, 730 (2024). doi:10.1038/s41567-024-02453-9
[37] S. Trotzky, P. Cheinet, S. Fölling, M. Feld, U. Schnorrberger, A. M. Rey, A. Polkovnikov, E. A. Demler, M. D. Lukin, and I. Bloch, Time-resolved observation and control of superexchange interactions with ultracold atoms in optical lattices, Science 319, 295 (2008).
[38] D. Greif, T. Uehlinger, G. Jotzu, L. Tarruell, and T. Esslinger, Short-Range Quantum Magnetism of Ultracold Fermions in an Optical Lattice, Science 340, 1307 (2013). doi:10.1126/science.1236362
[39] H.-N. Dai, B. Yang, A. Reingruber, X.-F. Xu, X. Jiang, Y.-A. Chen, Z.-S. Yuan, and J.-W. Pan, Generation and detection of atomic spin entanglement in optical lattices, Nature Physics 12, 783 (2016). doi:10.1038/nphys3705
[40] P. Barmettler, A. M. Rey, E. Demler, M. D. Lukin, I. Bloch, and V. Gritsev, Quantum many-body dynamics of coupled double-well superlattices, Physical Review A 78, 012330 (2008). doi:10.1103/PhysRevA.78.012330
[41] B. Vaucher, A. Nunnenkamp, and D. Jaksch, Creation of resilient entangled states and a resource for measurement-based quantum computation with optical superlattices, New Journal of Physics 10, 023005 (2008). doi:10.1088/1367-2630/10/2/023005
[42] W.-Y. Zhang et al., Scalable Multipartite Entanglement Created by Spin Exchange in an Optical Lattice, Physical Review Letters 131, 073401 (2023). doi:10.1103/PhysRevLett.131.073401
[43] A.-S. Walter, Z. Zhu, M. Gächter, J. Minguzzi, S. Roschinski, K. Sandholzer, K. Viebahn, and T. Esslinger, Quantization and its breakdown in a Hubbard–Thouless pump, Nature Physics 19, 1471 (2023). doi:10.1038/s41567-023-02145-w
[44] Z. Zhu, M. Gächter, A.-S. Walter, K. Viebahn, and T. Esslinger, Reversal of quantized Hall drifts at noninteracting and interacting topological boundaries, Science 384, 317 (2024). doi:10.1126/science.adg3848
[45] K. Viebahn, A.-S. Walter, E. Bertok, Z. Zhu, M. Gächter, A. A. Aligia, F. Heidrich-Meisner, and T. Esslinger, Interactions Enable Thouless Pumping in a Nonsliding Lattice, Physical Review X 14, 021049 (2024). doi:10.1103/PhysRevX.14.021049
[46] S. Trotzky, Y.-A. Chen, U. Schnorrberger, P. Cheinet, and I. Bloch, Controlling and Detecting Spin Correlations of Ultracold Atoms in Optical Lattices, Physical Review Letters 105, 265303 (2010). doi:10.1103/PhysRevLett.105.265303
[47] S. Taie, E. Ibarra-García-Padilla, N. Nishizawa, Y. Takasu, Y. Kuno, H.-T. Wei, R. T. Scalettar, K. R. A. Hazzard, and Y. Takahashi, Observation of antiferromagnetic correlations in an ultracold SU(N) Hubbard model, Nature Physics 18, 1356 (2022). doi:10.1038/s41567-022-01725-6
[48] C. Weitenberg, M. Endres, J. F. Sherson, M. Cheneau, …
P. Schauß, author T. Fukuhara, author I. Bloch, and author S. Kuhr, title title
Single-spin addressing in an atomic Mott insulator, https://doi.org/10.1038/nature09827 journal journal Nature volume 471, pages
319 (year 2011)NoStop
[Duan et al.(2003)Duan,
Demler, and Lukin]duan_controlling_2003
author author L.-M. Duan, author E. Demler, and author M. D. Lukin, title title Controlling Spin Exchange Interactions of
Ultracold Atoms in Optical Lattices, https://doi.org/10.1103/PhysRevLett.91.090402 journal
journal Physical Review Letters volume
91, pages 090402 (year 2003)NoStop
[Anderlini et al.(2007)Anderlini, Lee, Brown, Sebby-Strabley, Phillips, and Porto]anderlini_controlled_2007
author author M. Anderlini, author P. J. Lee,
author B. L. Brown, author J. Sebby-Strabley, author W. D. Phillips, and author J. V. Porto, title
title Controlled exchange interaction between pairs of neutral
atoms in an optical lattice, https://doi.org/10.1038/nature06011
journal journal Nature volume 448, pages 452 (year
2007)NoStop
[Impertro et al.(2024)Impertro, Karch, Wienand, Huh, Schweizer, Bloch, and Aidelsburger]impertro_local_2024
author author A. Impertro, author S. Karch,
author J. F. Wienand, author S. Huh, author
C. Schweizer, author
I. Bloch, and author
M. Aidelsburger, title
title Local Readout and Control of Current and Kinetic
Energy Operators in Optical Lattices, https://doi.org/10.1103/PhysRevLett.133.063401 journal
journal Physical Review Letters volume
133, pages 063401 (year 2024)NoStop
[Freedman et al.(2021)Freedman, Hastings, and Zini]freedman_symmetry_2021
author author M. H. Freedman, author M. B. Hastings, and author M. S. Zini, title title Symmetry Protected
Quantum Computation, https://doi.org/10.22331/q-2021-09-28-554
journal journal Quantum volume 5, pages 554 (year 2021), note arXiv:2105.04649 [quant-ph]NoStop
[Rudolph and Virmani(2023)]rudolph_two-qubit_2023
author author T. Rudolph and author S. S. Virmani, title title The two-qubit
singlet/triplet measurement is universal for quantum computing given only
maximally-mixed initial states, https://doi.org/10.1038/s41467-023-43481-y journal journal Nature Communications volume 14, pages 7800 (year 2023)NoStop
[DiVincenzo et al.(2000)DiVincenzo, Bacon, Kempe, Burkard, and Whaley]divincenzo_universal_2000
author author D. P. DiVincenzo, author D. Bacon,
author J. Kempe, author G. Burkard, and author
K. B. Whaley, title
title Universal quantum computation with the exchange
interaction, https://doi.org/10.1038/35042541 journal journal Nature volume 408, pages 339 (year 2000)NoStop
[Childs et al.(2013)Childs,
Gosset, and Webb]childs_universal_2013
author author A. M. Childs, author D. Gosset, and author Z. Webb, title title Universal Computation by Multiparticle
Quantum Walk, https://doi.org/10.1126/science.1229957
journal journal Science volume 339, pages 791 (year
2013)NoStop
[Busch et al.(1998)Busch,
Englert, Rzażewski, and Wilkens]busch_two_1998
author author T. Busch, author B.-G. Englert,
author K. Rzażewski, and author M. Wilkens, title title Two Cold Atoms in a Harmonic Trap, https://doi.org/10.1023/A:1018705520999 journal
journal Foundations of Physics volume
28, pages 549 (year 1998)NoStop
§ METHODS
§.§ Superlattice potential
The time-dependent optical lattice potential is given by
V( x,y,z,τ) =
-V_Xcos^2(kx+θ/2)
-V_Xintcos^2(kx)
-V_Ycos^2(ky)
-V_Zcos^2(kz)
-√(V_XintV_Z)cos(kz)cos(kx+φ_SL(τ))
-I_XZ√(V_XintV_Z)cos(kz)cos(kx-φ_SL(τ)),
where k=2π/λ, λ = 1064 nm, and the imbalance factor is given by I_XZ = 0.777(3). The lattice depths V_X, V_Xint, V_Y and V_Z are listed in Table <ref> in units of the recoil energy E_rec = h^2/(2mλ^2), where m is the atomic mass.
In our experiments, the lattice depth along the y- and z-directions is sufficiently large to effectively freeze the dynamics in these two directions.
The resulting potential creates a superlattice structure along the x-direction, characterised by staggered tunnellings t_x(τ) and t_x'(τ), as well as a staggered site offset Δ(τ) (Fig. <ref>a).
A simple linear ramp of the superlattice phase φ_SL, imprinted via an acousto-optic modulator, leads to a periodic modulation of the model parameters with a period T (Fig. <ref>b).
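For intuition, the x-dependence of this potential is straightforward to evaluate numerically. Below is a minimal Python sketch of the potential at y = z = 0; the depths, θ, and the two φ_SL values are illustrative placeholders rather than the calibrated values of Table <ref>.

import numpy as np

lam = 1.064e-6                      # lattice wavelength (m), so k = 2*pi/lam
k = 2 * np.pi / lam
VX, VXint, VZ = 10.0, 30.0, 20.0    # illustrative depths (units of E_rec)
I_XZ, theta = 0.777, 0.0            # imbalance factor and illustrative phase

def superlattice_V(x, phi_sl, z=0.0):
    # x-z part of the potential above; the constant -V_Y offset at y = 0 is omitted
    return (-VX * np.cos(k * x + theta / 2) ** 2
            - VXint * np.cos(k * x) ** 2
            - VZ * np.cos(k * z) ** 2
            - np.sqrt(VXint * VZ) * np.cos(k * z) * np.cos(k * x + phi_sl)
            - I_XZ * np.sqrt(VXint * VZ) * np.cos(k * z) * np.cos(k * x - phi_sl))

x = np.linspace(0.0, lam, 400)      # two sites of the lambda/2-period lattice
V_a = superlattice_V(x, phi_sl=0.0)          # one superlattice phase
V_b = superlattice_V(x, phi_sl=np.pi / 2)    # another phase along the ramp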
§.§ Topological pumping
The periodic modulation of the lattice potential can be considered as a static `short' lattice with a moving `long' lattice, describing a Thouless pump <cit.> (Fig. <ref>a).
Compared to its classical counterpart, the quantum nature of the Thouless pump manifests itself in the dependence of the transport direction on the motional state.
In a band-structure picture, the pump in a bipartite one-dimensional lattice potential features two topologically distinct bands.
The topological properties of these bands are characterised by Chern numbers. These numbers are derived by mapping the space- and time-periodic Hamiltonian onto a time-independent 2D Harper-Hofstadter-Hatsugai (HHH) model, which incorporates both a real and a synthetic dimension.
The two lowest bands of the HHH model have Chern numbers C = ± 1 giving rise to quantised transport of two lattice sites per period in opposite directions <cit.>.
In contrast to bichromatic setups of topological pumps <cit.>, the depth of the `moving' lattice in our case is also periodically modulated. This modulation occurs automatically due to the time dependence of the two terms proportional to √(V_XintV_Z) in Eq. <ref> and requires no change in laser intensities.
The modulation ensures a smoother time evolution of the topological bandgap and it increases the duration of the superexchange interaction (width of the feature in Fig. <ref>c).
§.§ Realisation of (swap)^α gates
We model two particles meeting in a unit cell by considering a double-well with sites labelled by L and R. With Hubbard interactions U, the corresponding Hamiltonian is given by
Ĥ_DW = ∑_σ=↑,↓[ -t_x (ĉ^†_Lσĉ_Rσ+h.c.) + Δ(n̂_Lσ -n̂_Rσ) ]
+ ∑_α=L,R Un̂_α↑n̂_α↓,
where ĉ^(†)_α,σ is the fermionic annihilation (creation) on site α=L,R with spin σ=↑,↓ and n_ασ=c^†_ασc_ασ.
We write |σ,σ'⟩ = ĉ^†_Lσĉ^†_Rσ'|0⟩.
Since |↑,↑⟩ and |↓,↓⟩ both have energy zero and do not couple to any other states, we can write the Hamiltonian in the reduced Hilbert space spanned by {|↑↓,0⟩, |↑,↓⟩, |↓,↑⟩,|0,↑↓⟩} as
Ĥ_DW = [ U + 2Δ -t_x t_x 0; -t_x 0 0 -t_x; t_x 0 0 t_x; 0 -t_x t_x U - 2Δ ]
In the limit U≫Δ,t_x, an effective low-energy Hamiltonian can be derived with a Schrieffer-Wolff transformation <cit.>. After projecting out the high-energy double occupancies (|↑↓,0⟩ and |0,↑↓⟩), the Hamiltonian reads
Ĥ_eff = 1/2[ -J_ex J_ex; J_ex -J_ex ],
where the superexchange energy is given by
J_ex = 4t_x^2/U(1-(2Δ/U)^2).
This Hamiltonian can also be written in `Heisenberg' form Ĥ_eff=J_ex(Ŝ_L·Ŝ_R - 1/4), where Ŝ_α = 1/2∑_i,jĉ^†_α iσ_ijĉ_α j is the spin operator on site α and σ=(σ_x,σ_y,σ_z)^T is the vector of Pauli operators.
The time evolution operator in the Hilbert space without double occupancies but with the states |↑,↑⟩ and |↓,↓⟩ takes the form
Û_α = [ 1 0 0 0; 0 (1+e^iπα)/2 (1-e^iπα)/2 0; 0 (1-e^iπα)/2 (1+e^iπα)/2 0; 0 0 0 1 ],
realising a (swap)^α gate, where α = 1/(πħ)∫_τ_start^τ_endJ_ex(τ) dτ <cit.>. Note that in our realisation, t_x(τ) and Δ(τ) are periodically modulated (Fig. <ref>b), as is J_ex(τ).
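As a consistency check, this gate family composes as Û_α Û_β = Û_{α+β}, so two (swap)^{1/2} operations yield a full swap. A small numpy sketch, with basis order {|↑,↑⟩, |↑,↓⟩, |↓,↑⟩, |↓,↓⟩}:

import numpy as np

def swap_alpha(alpha):
    # time-evolution operator above in the {|uu>, |ud>, |du>, |dd>} subspace
    p = (1 + np.exp(1j * np.pi * alpha)) / 2
    m = (1 - np.exp(1j * np.pi * alpha)) / 2
    return np.array([[1, 0, 0, 0],
                     [0, p, m, 0],
                     [0, m, p, 0],
                     [0, 0, 0, 1]])

sqrt_swap = swap_alpha(0.5)
swap = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
assert np.allclose(sqrt_swap @ sqrt_swap, swap_alpha(1.0))  # composition rule
assert np.allclose(swap_alpha(1.0), swap)                   # alpha = 1 is swap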
In the experiment we are not fully in the limit U≫Δ,t_x. We therefore use the energy difference of the two lowest eigenstates of the full double-well Hamiltonian (Eq. <ref>) for the calculation of the superexchange energy, which is used for the upper x-axis in Fig. <ref>g. Fig. <ref>c shows the resulting J_ex. During the staggered lattice configuration (± 0.25T), J_ex is negligible and increases (decreases) by several orders of magnitude as the balanced double-well configuration is entered (exited). Finally, Fig. <ref>d shows the Hubbard U, the maximum of Δ and t_x, and the resulting maximum of J_ex during a pump cycle as a function of lattice depth V_X.
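This comparison between the perturbative formula and the exact splitting of the two lowest eigenstates is easy to reproduce; the following numpy sketch uses illustrative parameter values, not the experimental ones.

import numpy as np

def j_ex_exact(t, U, D):
    # full double-well Hamiltonian in the basis {|ud,0>, |u,d>, |d,u>, |0,ud>}
    H = np.array([[U + 2 * D,  -t,   t,  0.0],
                  [-t,         0.0,  0.0, -t],
                  [ t,         0.0,  0.0,  t],
                  [0.0,        -t,   t,  U - 2 * D]])
    E = np.linalg.eigvalsh(H)
    return E[1] - E[0]          # splitting of the two lowest eigenstates

def j_ex_pert(t, U, D):
    # Schrieffer-Wolff result quoted above
    return 4 * t**2 / (U * (1 - (2 * D / U) ** 2))

t, U, D = 1.0, 12.0, 2.0        # illustrative values in the U >> t, Delta regime
print(j_ex_pert(t, U, D), j_ex_exact(t, U, D))  # the formula slightly overestimates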
§.§ Singlet preparation
We load an evaporatively cooled, balanced spin mixture of fermionic potassium-40 atoms in the magnetic states F=9/2, m_F={-9/2, -7/2} into a crossed dipole trap and further evaporatively cool it, yielding 5.3(2) × 10^4 atoms at a temperature of 0.102(6) times the Fermi temperature.
The experimental sequence to achieve a high fraction of doubly occupied unit cells with two atoms in the spin-singlet configuration is shown in Fig. <ref>. First, we use the Feshbach resonance at 201.1 G to tune the s-wave interactions between atoms in the -9/2 and -7/2 sublevels to be strongly attractive. The atoms are then loaded into a shallow chequerboard lattice over 200 ms and subsequently into a deep chequerboard lattice over 10 ms, resulting in a high fraction of paired atoms in the -9/2 and -7/2 sublevels <cit.>. To achieve strongly repulsive interactions, we transfer the -7/2 population to the -5/2 sublevel using a Landau-Zener sweep and adjust the magnetic field to reach the target scattering length. Finally, by ramping down V_X and ramping up V_Xint, the chequerboard lattice is split, resulting in between 60% and 75% of doubly occupied unit cells in the spin-singlet configuration.
§.§ Product state preparation
The loading sequence for the product state |↓,↑⟩ is shown in Fig. <ref>. We start by loading singlets into a deep double-well configuration with the same sequence as in the singlet state preparation, but ramping to different lattice potentials during the split. In this lattice, both tunnellings t_x and t_x' are negligible. Then we apply a magnetic field gradient Δ B, which causes a rotation around the x-axis on the two-particle Bloch sphere with singlet state |s⟩=(|↓,↑⟩-|↑,↓⟩)/√(2) on the north pole and triplet state |t⟩=(|↓,↑⟩+|↑,↓⟩)/√(2) on the south pole (Fig. <ref>d). To see this, we write the Hamiltonian in Eq. <ref> in the {|s⟩,|t⟩} basis,
in which it takes the form
Ĥ_eff = 1/2[ J_ex 0; 0 -J_ex ].
The magnetic field gradient couples the |s⟩ and the |t⟩ states (singlet-triplet oscillation, STO),
Ĥ_STO = 1/2[ J_ex Δ_↑↓; Δ_↑↓ -J_ex ] ,
where Δ_↑↓ is the energy offset between |↑,↓⟩ and |↓,↑⟩ induced by the gradient.
This can be rewritten as Ĥ_STO = (J_exσ̂_z + Δ_↑↓σ̂_x)/2, where σ̂_x,z are Pauli matrices. Therefore, a time evolution under this Hamiltonian in the frozen lattice configuration, where J_ex=0, for a time τ=πħ/(2Δ_↑↓) rotates the state vector to the equatorial state |i_-⟩=(|↓,↑⟩-i|↑,↓⟩)/√(2). We calibrate this duration in a separate measurement. After this rotation, we turn off the magnetic-field gradient, unfreeze the lattice by ramping down V_X and pump for a quarter pump-cycle, such that the integrated J_ex corresponds to a π/2 rotation around the z-axis of the Bloch sphere, arriving at the target state |↑,↓⟩ in the staggered lattice configuration.
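The quoted rotation time can be checked by exponentiating Ĥ_STO in the frozen configuration; a short sketch with ħ = 1 and an arbitrary Δ_↑↓, noting that |i_-⟩ equals (|s⟩ - i|t⟩)/√2 up to a global phase:

import numpy as np
from scipy.linalg import expm

delta = 1.0                                  # gradient-induced offset (hbar = 1)
H_sto = 0.5 * np.array([[0.0, delta],        # Hamiltonian above with J_ex = 0,
                        [delta, 0.0]])       # written in the {|s>, |t>} basis
tau = np.pi / (2 * delta)                    # the quoted rotation time
psi = expm(-1j * H_sto * tau) @ np.array([1.0, 0.0])   # start in |s>
target = np.array([1.0, -1j]) / np.sqrt(2)   # (|s> - i|t>)/sqrt(2)
assert np.isclose(abs(np.vdot(target, psi)), 1.0)      # equal up to global phase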
§.§ Experimental sequence
In Fig. <ref> we show the experimental sequence used for the STO measurements shown in Fig. <ref>, Fig. <ref> and Fig. <ref>. After state preparation, we start the pump by linearly ramping φ_SL with a positive slope 2π/T. We tune the superexchange interaction J_ex by varying the lattice depth V_X between operation cycles, thereby implementing different gates.
The pump is then halted in the staggered lattice configuration to perform the STO.
The lattice dynamics is frozen by ramping up V_X and by applying a magnetic-field gradient Δ B for a certain time τ_STO. After turning off the magnetic-field gradient and unfreezing the lattice, we reverse the pump by ramping φ_SL with a slope -2π/T for the same number of pump cycles to bring the atoms back to their initial position. We then measure the nearest-neighbour singlet fraction in the final state as a function of τ_STO. The last two steps (reversed pump and singlet detection) should be considered together as a detection scheme for long-distance correlation.
The entire sequence can be understood as a many-body atom interferometer.
§.§ Detection of singlet fraction and double occupancy
To measure the fraction of spin singlets in doubly occupied unit cells, we first freeze the dynamics by quenching into a deep simple cubic lattice with half the periodicity of the double wells within 100 μs. We clean the remaining double occupancies, which are suppressed by the strongly repulsive Hubbard interactions, by transferring all atoms in the -5/2 to the -3/2 state. When another atom in the -9/2 state is present, they will collide and leave the trap. After that, we transfer the remaining -3/2 population back to -7/2. Then, we merge adjacent sites and ramp the Hubbard U to the attractive regime. Singlets form double occupancies in the lowest band, while the Pauli exclusion principle forces triplets to convert to one atom in the lowest band and one in the first excited band.
This enables us to measure the fraction of singlets by detecting the double occupancies in the merged lattice. We sweep the magnetic field over the -7/2 and -9/2 Feshbach resonance and then apply a Landau-Zener RF sweep. The interaction shift then causes only the -7/2 population on doubly-occupied sites to be transferred to the -5/2 state. The Zeeman sublevels are then separated by applying a magnetic-field gradient and 8 ms time of flight <cit.>.
To measure ⟨σ̂_y⟩, we apply an additional STO corresponding to a π/2 and a 3π/2 rotation about the x-axis of the two-particle Bloch sphere before merging. This converts the projection on the y-axis to the projection on the z-axis. We then determine ⟨σ̂_y⟩ by calculating the difference in singlet fraction for the π/2 and 3π/2 rotations, normalised by the initial fraction of doubly occupied unit cells.
In Fig. <ref>c we directly measure the double occupancies in the simple cubic lattice without cleaning and merging adjacent sites. The double occupancy measurement relies on an energy shift and a spectroscopic transfer into a third hyperfine state, conditioned on the presence of a second atom in the same orbital <cit.>.
The double-occupancy detection is thus orbital-selective. To determine the shuttle operation fidelity, we calculate the average decay constant, β, from two exponential fits and convert it to a fidelity using the formula F=e^-1/β.
§.§ Analysis of STO with multi-frequency fit and FFT
Since the energy difference under a homogeneous magnetic gradient between |↑,↓⟩ and |↓,↑⟩ states is linear in the separation of the atom pair, we expect the time-trace of the STO to be a superposition of sine-waves with frequencies sf_1, where s∈ℕ^+ is the atom pair separation in number of lattice sites and f_1 is the base STO frequency at a distance of one lattice site. We use the fit function
F_singlet(τ) = e^-Γτ∑_s=1^12 A_s sin(2π s f_1τ+θ_s) + F_0.
The fit parameters are Γ,A_s,θ_s and F_0. We also calculate the Fast Fourier transform (FFT) of the time traces in Fig. <ref> to validate our fit function. The FFT for different gate sequences can be seen in Fig. <ref>. The dominant peaks are all at integer multiples of the base frequency f_1, which justifies the choice of fit function.
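For reference, a minimal implementation of this multi-frequency fit on synthetic data could look as follows; the decay, base frequency, amplitudes, and noise level are made-up placeholders.

import numpy as np
from scipy.optimize import curve_fit

N = 12  # maximum pair separation, in lattice sites, included in the model

def sto_model(tau, gamma, f1, F0, *p):
    # p holds the 12 amplitudes A_s followed by the 12 phases theta_s
    A, th = np.array(p[:N]), np.array(p[N:])
    s = np.arange(1, N + 1)
    osc = (A[None, :] * np.sin(2 * np.pi * np.outer(tau, s) * f1 + th)).sum(axis=1)
    return np.exp(-gamma * tau) * osc + F0

rng = np.random.default_rng(1)
tau = np.linspace(0.0, 4.0, 600)                    # time, in units of 1/f1_true
A_true = np.zeros(N); A_true[[0, 2]] = 0.20, 0.10   # pairs at 1 and 3 sites
y = sto_model(tau, 0.2, 1.0, 0.3, *A_true, *np.zeros(N))
y += 0.01 * rng.normal(size=tau.size)               # measurement noise
p0 = [0.1, 0.95, 0.3, *(0.05 * np.ones(N)), *np.zeros(N)]
popt, _ = curve_fit(sto_model, tau, y, p0=p0, maxfev=50000)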
|
http://arxiv.org/abs/2409.02101v1 | 20240903175651 | Towards Real-World Adverse Weather Image Restoration: Enhancing Clearness and Semantics with Vision-Language Models | ["Jiaqi Xu", "Mengyang Wu", "Xiaowei Hu", "Chi-Wing Fu", "Qi Dou", "Pheng-Ann Heng"] | cs.CV | ["cs.CV", "cs.MM"] |
Towards Real-World Adverse Weather Image Restoration: Enhancing Clearness and Semantics with Vision-Language Models

Jiaqi Xu^1, Mengyang Wu^1, Xiaowei Hu^2 (corresponding author, [email protected]), Chi-Wing Fu^1, Qi Dou^1, Pheng-Ann Heng^1

^1 The Chinese University of Hong Kong  ^2 Shanghai Artificial Intelligence Laboratory

September 9, 2024
§ ABSTRACT
This paper addresses the limitations of adverse weather image restoration approaches trained on synthetic data when applied to real-world scenarios.
We formulate a semi-supervised learning framework employing vision-language models to enhance restoration performance across diverse adverse weather conditions in real-world settings.
Our approach involves assessing image clearness and providing semantics using vision-language models on real data, serving as supervision signals for training restoration models.
For clearness enhancement, we use real-world data, utilizing a dual-step strategy with pseudo-labels assessed by vision-language models and weather prompt learning.
For semantic enhancement, we integrate real-world data by adjusting weather conditions in vision-language model descriptions while preserving semantic meaning.
Additionally, we introduce an effective training strategy to bootstrap restoration performance.
Our approach achieves superior results in real-world adverse weather image restoration, demonstrated through qualitative and quantitative comparisons with state-of-the-art works.
§ INTRODUCTION
Images captured under challenging weather conditions, such as rain, haze, and snow, are plagued by a variety of artifacts that significantly affect the image quality.
These imperfections severely impair the efficacy of outdoor vision systems.
Previous research efforts <cit.> have primarily focused on developing specialized techniques for mitigating the effects of individual weather phenomena, tailoring their models to the unique characteristics of rain, haze, or snow.
More recently, all-in-one adverse weather removal works <cit.> design single-model methods that restore images captured under multiple adverse weather conditions.
Despite the encouraging outcomes demonstrated on synthetic datasets by these approaches, their applicability to real-world scenarios remains notably constrained.
The limited generalization capability in real-world adverse weather images can be attributed to two main factors.
Firstly, adverse weather removal methods are predominantly trained on synthetic datasets <cit.>, resulting in a domain gap when applied to real-world situations. Secondly, these methods primarily focus on restoring the visual clarity of images, often neglecting the semantic context of the scenes they depict.
Consequently, current weather removal approaches struggle with real-world data and offer marginal enhancements to downstream high-level vision tasks under adverse weather conditions.
In response to these challenges, this work introduces a novel semi-supervised learning framework, WResVLM, that exploits vision-language models (VLMs) <cit.> to enhance image restoration in real-world scenarios across diverse adverse weather conditions.
Real-world images with weather-related artifacts are used as unlabeled (unpaired) data to train the image restoration models, while the supervision signals are provided by large vision-language models.
As depicted in <ref>, large VLMs play a crucial role in assessing the clearness levels and providing semantics information of images under adverse weather conditions.
This capability proves instrumental in training image restoration models effectively, enabling them to handle the complexities of real-world data.
To enhance the clearness of the restored images produced by the restoration model, we utilize real-world data for model training, evaluating image clarity with the assistance of large vision-language models. These models, exposed to a diverse array of weather conditions during training, demonstrate proficiency in recognizing and distinguishing various weather-related scenes.
The approach involves two key steps: initially, the vision-language model is employed to assess images and select pseudo-labels for training the restoration model.
Subsequently, weather prompt learning is introduced to tailor the VLM, ultimately utilizing it to modulate the image restoration process.
This dual-step strategy enhances the restoration model's ability to address the real-world weather complexities and improve the overall clearness of the restored images.
To enhance the semantics of the restored images, we further integrate real-world data into the model training.
This involves utilizing descriptions generated by vision-language models associated with each image, providing rich semantic information about the scene and adverse weather conditions.
A unique aspect of our method involves adjusting the weather clues in the descriptions while maintaining the semantic meaning unchanged.
This enables the training of the image restoration model to specifically target the removal of weather-related artifacts without altering the image's underlying semantics.
In contrast to methods that might overlook semantic cues, our framework incorporates vision-language models to encompass both visual clarity and semantic context, thus presenting a more comprehensive strategy for adverse weather image restoration.
Lastly, we develop a training strategy aimed at achieving effective pseudo-label initialization and iterative updates, with the primary goal of improving restoration outcomes.
We conduct experiments using real-world images captured under diverse adverse weather conditions.
The results demonstrate that our method significantly surpasses both state-of-the-art adverse weather image restoration approaches and general image restoration methods.
Code and data are available at https://github.com/jiaqixuac/WResVLMGitHub.
§ RELATED WORK
§.§ Image Restoration in Adverse Weather Conditions
Previous works focus on restoring images captured under specific weather conditions, including deraining <cit.>, dehazing <cit.>, and desnowing <cit.>.
Recent works <cit.> focus on all-in-one adverse weather removal, which restores images captured under various weather conditions using a single model.
The pioneering All-in-One <cit.> achieves this by using joint training and a unified set of model weights.
TransWeather <cit.> introduces a transformer-based architecture while Chen <cit.> leverage knowledge distillation and contrastive learning.
WeatherDiff <cit.> adapts the diffusion model for adverse weather artifact removal.
Zhu <cit.> learn weather-general and weather-specific features through multiple sets of model weights.
AWRCP <cit.> enhances image restoration by exploring high-quality codebook priors.
Domain adaptation technique is also utilized to handle mixed weather conditions <cit.>.
More recent works explore prompting <cit.>, textual information <cit.>, and customizing pre-trained diffusion models <cit.>.
PromptIR <cit.> enhances the all-in-one restoration by predicting degradation-conditioned prompts.
DA-CLIP <cit.> learns the degradation information through image-text contrastive learning.
The prior approaches typically rely on paired synthetic data <cit.> for training and evaluation, demonstrating promising results in synthetic benchmarks.
However, the trained models exhibit limited generalization capabilities toward complex real-world scenarios due to the domain gap.
Additionally, WeatherStream <cit.> attempts to compile a dataset of real degraded images with corresponding ground truth,
yet it suffers from low image quality issues, e.g., compression artifacts.
§.§ Vision-Language Models
Vision-language models merge computer vision and natural language processing.
CLIP <cit.> pioneered text and image alignment through large-scale pre-training.
CLIP's versatility is also demonstrated in image manipulation fields, including backlit image enhancement <cit.> and novel concept generation <cit.>.
Recent advances, including GPT-4 <cit.> and Llama <cit.>, demonstrate impressive conversational abilities.
Large VLMs like LLaVA <cit.> excel in high-level multimodal visual question answering.
Recent works reveal that vision-language models are also applicable to low-level applications.
CLIP-IQA <cit.> and LIQE <cit.> expand upon the CLIP-like architecture for image quality assessment, showcasing VLMs' adaptability to technical evaluations of image quality.
Q-Bench <cit.> highlights VLMs' inherent low-level perceptual capabilities.
These works, however, focus primarily on general technical image quality assessment and show limited abilities to help image restoration under adverse weather conditions.
§ METHODOLOGY
In this work, we introduce a novel semi-supervised learning framework for all-in-one adverse weather image restoration, leveraging both labeled synthetic images and unlabeled real images.
Our motivation emphasizes the necessity to improve image restoration in the real world.
Current approaches, mostly trained on synthetic images, struggle with generalization when handling real-world adverse weather images.
They frequently overlook the image context related to weather-related artifacts in real data, resulting in their limited effectiveness.
In this paper, the image restoration model is trained on two main datasets: 𝒟^l = {(x_i^l,y_i^l)}_i=1^N, comprising labeled data, where degraded synthetic images x_i^l are paired with ground-truth images y_i^l; and 𝒟^u = {x_i^u}_i=1^M, consisting of unlabeled real-world images.
For 𝒟^l, we adopt the common appearance loss to train the network.
For 𝒟^u, we select clear pseudo-labels for the unlabeled images and impose regularization toward weather-artifact-free restorations.
To achieve this, we exploit the knowledge of several vision-language models (VLMs) to enhance the clearness and semantics of the restored images.
Below, we first describe the overall framework architecture (<ref>), then introduce how VLMs are adopted to enhance the clearness (<ref>) and semantics (<ref>) of the restored images, and lastly elaborate the training strategies (<ref>).
<Ref> shows the overall pipeline of our proposed semi-supervised learning framework for all-in-one adverse weather image restoration in real-world situations. This framework adopts several VLMs to improve the images' Clearness and Semantics during the removal of weather-related artifacts.
§.§ Enhancing Image Clearness through Vision-Language Models
Restoring images in adverse weather conditions involves eliminating weather-related artifacts to generate “clean” images.
Attaining clearness is a primary goal in adverse weather image restoration, especially in the real world.
In the absence of ground truth (clean) images for real-world data, the main challenge lies in determining the quality of restored images.
Moreover, few existing learning objectives are designed to enhance image clearness under weather-related degradations.
Large vision-language models, trained on diverse data and vast weather imagery, exhibit strong representation abilities for image quality assessment.
Additionally, with the help of prompt learning, the VLMs can better distinguish well-restored images from those degraded by rain, haze, or snow.
To achieve this, we suggest two steps.
First, we employ the large vision-language models to assess the images and provide pseudo-labels for training the restoration model.
Then, we introduce weather prompt learning to empower the VLM's ability to identify clearness, ultimately utilizing it for modulating image restoration.
§.§.§ Image Assessment and Pseudo-Labeling.
Our goal is to improve the restoration of real adverse weather images using unlabeled data.
This involves training restoration models with pseudo-labels generated from the unlabeled images.
To ensure high-quality pseudo-labels for the subsequent model training, we establish a pseudo-label database, utilizing the zero-shot capability of large vision-language models to assess adverse weather image restoration.
Image assessment.
Given the real adverse weather images and the corresponding predictions from deweathering methods, a critical issue is to measure the image quality of the restored images.
Existing methods <cit.> for low-level image quality assessment focus mainly on technical distortions, including noise, blur, and compression artifacts.
However, an image that suffers from adverse weather may be of “good” technical image quality, with little common noise, while its visibility is largely degraded due to rain, haze, and snow.
Hence, it is imperative to find an effective way to automatically evaluate the image quality in the context of adverse weather artifact removal.
Inspired by recent works <cit.> that vision-language models perform the zero-shot image quality assessment with appropriate prompting, we present to uncover the potential of VLMs for assessing the adverse weather image restoration.
Technically, we prompt the VLMs with weather-related image quality questions and convert the VLMs' responses into numerical scores.
In detail, we first design the conversion templates for enquiring about the VLM responses to assess the image as illustrated in <ref> (a).
Then, we adopt the commonly used five-scale ratings in the mean opinion score (MOS) studies, i.e., excellent, good, fair, poor, and bad, which correspond to the scores between one and five.
After that, we calculate the VLM-based rating r^vlm by converting the VLM's predicted probabilities over these five word tokens into numerical scores using softmax:
r^vlm = ∑_i=1^5 i × p_i, p_i = σ(l)_i = e^l_i/∑_j=1^5 e^l_j ,
where p_i denotes the probability for rating i ∈{1,2,3,4,5}, l_i denotes the logit extracted from the language model for rating token i, and σ is the softmax operation.
Thus, we obtain the visibility assessment for each restored image.
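As an illustration, the conversion from the five rating-token logits to r^vlm takes only a few lines; the logits below are made-up stand-ins for the language head's outputs at the answer position.

import numpy as np

def vlm_visibility_rating(logits):
    # softmax over the five rating tokens, ordered bad(1) ... excellent(5)
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return (np.arange(1, 6) * p).sum()        # r_vlm, a score in [1, 5]

logits = np.array([0.2, 0.5, 1.0, 2.5, 3.0])  # hypothetical token logits
print(vlm_visibility_rating(logits))          # ~4.3, i.e., close to "excellent"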
Pseudo-labeling.
For the unlabeled real adverse weather image set 𝒟^u, we assign and update the pseudo-labels 𝒟^ps={(x_i^u,y_i^ps) | x_i^u ∈𝒟^u}_i=1^M with desirable artifact-free pseudo-label images y_i^ps based on the VLM-based image assessment.
Through investigation, we observe that r^vlm is able to acquire better pseudo-labels with fewer weather-related artifacts, as shown in <ref> (b).
Initially, a pseudo-label database is constructed to store the current optimal pseudo-labels for the unlabeled images.
Subsequently, throughout the model training process, we evaluate the VLM-based image visibility rating score for both the model's prediction and the recorded pseudo-labels. If the model achieves a superior restoration, we update the pseudo-label database accordingly <cit.>.
In practice, we use predictions from the teacher model <cit.> for comparison, which is an exponential moving average of the student model.
Lastly, we use the updated pseudo-labels to compute the pseudo-label loss for the online model:
ℒ_ps = ℒ_app(ŷ_i, y_i^ps) ,
where ŷ_i and y_i^ps are the prediction and the corresponding pseudo-label, respectively, and ℒ_app is any kind of appearance loss, e.g., ℒ_1 as adopted here.
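The bookkeeping behind this update is simple; a schematic Python sketch, where the teacher and rate helpers and the dictionary layout are hypothetical stand-ins for the components described above:

def update_pseudo_label_db(db, x_key, x_u, teacher, rate):
    """Keep, per unlabeled image, the restoration with the best VLM rating."""
    y_teacher = teacher(x_u)                 # EMA-teacher prediction
    y_best, r_best = db.get(x_key, (None, float("-inf")))
    r_teacher = rate(y_teacher)              # VLM-based visibility score r_vlm
    if r_teacher > r_best:                   # a clearer restoration was found
        db[x_key] = (y_teacher, r_teacher)
    return db[x_key][0]                      # current pseudo-label y_ps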
§.§.§ Weather Prompt Learning.
We delve into the extensive knowledge embedded in the pre-trained vision-language model, capable of understanding the concept of images in both normal and adverse weather conditions.
Specifically, we expect the CLIP <cit.> model to serve as an indicator of image weather conditions, such as clear, rainy, hazy, or snowy.
Subsequently, we leverage the learned concept of “clearness” to modulate the model's learning toward achieving clear restoration.
To enhance CLIP's ability to accurately differentiate weather in diverse scenarios, we employ the prompt learning approach to acquire prompt embeddings tailored to the image characteristics of each weather situation.
The Weather Prompt Learning process consists of two stages: the prompt embedding learning stage and the restoration model optimization stage; see <ref>.
Prompt embedding learning.
CLIP aligns images and text within a shared feature space.
Rather than relying on fragile prompt engineering involving hand-crafted text prompts such as “rainy” or “a rainy photo”, we adopt the prompt learning approach <cit.>.
Specifically, keeping the pre-trained CLIP model parameters fixed, we employ a set of four weather prompts t_c, t_r, t_h, t_s representing clear, rain, haze, and snow conditions as learnable vectors.
The weather prompts ℰ_T(t) are initialized in the embedding space of the CLIP's text encoder.
Meanwhile, the real images x in such clear, rainy, hazy, and snowy situations are collected, which are used to extract the reference image embeddings ℰ_I(x) through the CLIP's image encoder.
During the prompt embedding learning stage, the training objective is to minimize the classification loss, , the cross-entropy loss, by categorizing the weather prompts into their respective weather categories c:
p(c=i|x) = σ ( z_i ), z_i = cos(ℰ_I(x), ℰ_T(t_i)), where σ denotes softmax, and cos(·,·) denotes cosine similarity.
Note that the learnable weather prompt embeddings are the only parameters to be optimized during this stage; see <ref> (a).
Restoration model optimization.
With the acquired knowledge from the learned weather prompts, we direct the training of the restoration model to generate images with enhanced clearness.
Formally, during the restoration model optimization, the weather prompt learning loss ℒ_wpl maximizes the similarity between the image embedding ℰ_I(ŷ) of the model's restored image ŷ and the text embedding of the clear weather prompt ℰ_T(t_c):
ℒ_wpl = e^cos(ℰ_I(ŷ), ℰ_T(t_c))/∑_t ∈{t_c,t_r,t_h,t_s}e^cos(ℰ_I(ŷ), ℰ_T(t)) .
In initial investigations, employing only ℒ_wpl optimizes the model's prediction to reduce weather-related artifacts, yet the resulting image exhibits noticeable noise.
We hypothesize that the space of solutions minimizing the weather prompt learning loss is large and includes such noisy, implausible images.
To address this issue and regularize the model learning, a feature similarity loss is employed to align the model's prediction with both the pseudo-label y^ps and the input x^u:
ℒ_feat = 1/HW∑_i=1^HW (1 - cos(ĝ_i,g_i^*) ) ,
where ĝ,g^* are image features of ŷ as well as y^ps, x^u extracted from a pre-trained model, and H,W denote the spatial dimension of the feature space.
In practice, we adopt the visual encoder of Depth Anything <cit.> for feature extraction because of its robustness against various scenarios.
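In code, both objectives reduce to cosine similarities on frozen-encoder embeddings. A PyTorch-style sketch follows; implementing ℒ_wpl as a negative log-probability of the clear prompt, so that minimizing the loss maximizes that probability, is our reading of the equation above.

import torch
import torch.nn.functional as F

def weather_prompt_loss(img_emb, prompt_embs, clear_idx=0):
    # img_emb: (B, D) CLIP embeddings of the restored images
    # prompt_embs: (4, D) learned {clear, rain, haze, snow} text embeddings
    sims = F.cosine_similarity(img_emb[:, None, :], prompt_embs[None, :, :], dim=-1)
    return -torch.log_softmax(sims, dim=-1)[:, clear_idx].mean()

def feature_similarity_loss(feat_pred, feat_ref):
    # feat_*: (B, HW, C) patch features from a frozen pre-trained encoder
    return (1.0 - F.cosine_similarity(feat_pred, feat_ref, dim=-1)).mean()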
§.§ Enhancing Image Semantics through Vision-Language Models
Restoring images in adverse weather conditions entails not only improving image clarity but also restoring the semantics distorted by weather-related artifacts.
This contributes to the effectiveness of downstream vision tasks.
The potential to recover image semantics is frequently disregarded by existing works trained on synthetic data.
In this work, we introduce a method that leverages the image-text understanding capability of large vision-language models to enhance image semantics in the context of adverse weather image restoration.
§.§.§ Description-assisted semantic enhancement.
Restoring degraded images is inherently challenging; describing their weather-affected appearance with natural language is straightforward.
Our approach uses vision-language models to generate semantic descriptions of adverse weather images, capturing both scene context and weather conditions, including degradation levels.
The comprehensive workflow is illustrated in <ref>.
Given the input image, we employ a VLM, e.g., LLaVA <cit.>, to generate the (negative) caption with the weather description.
For instance, “A person is walking along the street in the heavy rain ...” describes a scene with the object person in the weather of rain.
The description also provides additional environment context, like street.
Next, we transform negative scene description d_neg associated with the degraded image in adverse weather conditions into a pseudo-clear representation.
This transformation is achieved by prompting large language models, e.g., Llama <cit.>, to generate positive description d_pos corresponding to the restored image.
Given the above negative description of the adverse weather, we can imagine its positive, clearly restored image, , “The weather looks good. A person is walking ...”
Intuitively, d_pos and d_neg should have similar descriptions of the image content, like object and environment, but dissimilar descriptions of the weather and visibility, , good versus bad weather.
Unlike the weather prompt in <ref>, d_pos,d_neg are tailored to a specific image; see <ref>.
§.§.§ Loss function.
The model training incorporates semantic-aware regularization, promoting predictions that align with positive descriptions indicative of good weather conditions.
Given the positive and negative descriptions, we formulate a description-assisted semantics regularization loss ℒ_sem:
ℒ_sem = e^cos(ℰ_I(ŷ), ℰ_T(d_pos))/∑_d ∈{d_pos,d_neg}e^cos(ℰ_I(ŷ), ℰ_T(d)) .
In initial trials, we observed that LLMs occasionally struggle with generating weather-varying descriptions that are content-invariant. To address this issue, we manually label certain negative-to-positive description conversions and introduce these examples in the in-context learning approach <cit.>.
Finally, the overall loss is a weighted combination of the supervised appearance loss ℒ_sup, semi-supervised pseudo-label loss ℒ_ps, weather prompt loss ℒ_wpl, description-assisted semantic loss ℒ_sem, and feature similarity loss ℒ_feat:
ℒ = ℒ_sup + w_1 ×ℒ_ps + w_2 ×ℒ_wpl + w_3 ×ℒ_sem + w_4 ×ℒ_feat ,
where w_1,w_2,w_3,w_4 are weights to balance the loss values.
§.§ Training Strategies
Training the model on the unlabeled set, particularly in the early stages, is challenging due to the domain gap between the real and synthetic data. We introduce a strategy to expedite model training by leveraging existing image restoration methods and our proposed VLM-based image assessment.
Additionally, we enhance model performance through iterative updates of pseudo-labels, weather prompts, and descriptions in rounds.
§.§.§ Pseudo-label initialization.
In the pseudo-label initialization stage, we gather the initial pseudo-labels by collecting the noisy restoration outcomes from both existing weather-specific and all-in-one image restoration methods.
Subsequently, we employ the VLM-based image assessment technique to filter out the noisy samples and select the best-restored images as the pseudo-labels to initialize the pseudo-label database.
To mitigate potential biases from a single vision-language model towards a specific image appearance, we utilize a diverse set of VLMs with varying architectures and parameters as experts for image assessment.
§.§.§ Iterative update.
Leveraging expertise from multiple VLMs for image assessment to select pseudo-labels enhances the model learning process.
However, due to computational constraints, it is impractical to consult every VLM during online learning.
Instead, we divide the overall training into multiple rounds.
In each round, only one VLM is employed for online image assessment.
After a round of training, the overall assessment using the set of VLMs for pseudo-labels is conducted, incorporating new predictions from the updated model.
Additionally, we update the weather prompts and augment the descriptions progressively.
Note that the round number is empirically set as four during the training.
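The overall schedule can be summarized as schematic pseudocode; every argument name below (pseudo_db, vlm_experts, ema_update, and so on) is a hypothetical stand-in for the components described in this section.

def train_rounds(model, teacher, labeled, unlabeled, pseudo_db, vlm_experts,
                 losses, weights, optimizer, ema_update, rounds=4, steps=40_000):
    """Schematic round-based semi-supervised training loop."""
    l_sup, l_ps, l_wpl, l_sem, l_feat = losses
    w1, w2, w3, w4 = weights
    for rnd in range(rounds):
        vlm = vlm_experts[rnd % len(vlm_experts)]   # one VLM online per round
        for _ in range(steps):
            (x_l, y_l), x_u = next(labeled), next(unlabeled)
            out_l, out_u = model(x_l), model(x_u)
            y_ps = pseudo_db.lookup(x_u)
            loss = (l_sup(out_l, y_l) + w1 * l_ps(out_u, y_ps)
                    + w2 * l_wpl(out_u) + w3 * l_sem(out_u, x_u)
                    + w4 * l_feat(out_u, y_ps, x_u))
            loss.backward(); optimizer.step(); optimizer.zero_grad()
            ema_update(teacher, model)              # teacher = EMA of student
            pseudo_db.maybe_replace(x_u, teacher(x_u), vlm)  # keep clearer labels
        pseudo_db.reassess(model, vlm_experts)      # end-of-round multi-VLM refresh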
§ EXPERIMENTAL RESULTS
§.§ Experimental Settings
§.§.§ Training & testing sets.
We adopt several (pseudo-)synthetic datasets for deraining, dehazing, and desnowing, including Outdoor-Rain <cit.>, RainDrop <cit.>, SPA <cit.>, OTS <cit.>, and Snow100K <cit.>.
Meanwhile, we leverage the unlabeled real-world adverse weather images for unsupervised learning.
To achieve this, we utilize the real hazy images in URHI <cit.> (2,318 images) and manually collect real-world rainy and snowy images from the Internet (2,433 and 2,018 images, respectively).
Besides, to train CLIP weather prompts, we employ high-quality DF2K <cit.> images for the clear category.
We adopt real adverse weather image datasets for qualitative and quantitative evaluation, including RTTS <cit.> with 4,322 haze images, DDN-SIRR <cit.> and Real3000 <cit.> with 2,320 rain images, and Snow100K Realistic <cit.> with 1,329 snow images.
Note that we remove the non-realistic images in Real3000, e.g., comic and movie scenes.
§.§.§ Implementation details.
Our semi-supervised learning framework is easily compatible with various image restoration networks.
We opt for MSBDN <cit.> as our backbone due to its balanced performance and rapid inference speed in our main study.
Each batch comprises eight labeled and unlabeled images, with a training process spanning 40,000 iterations per round. Image assessment utilizes recent VLMs <cit.>.
Pseudo-labels are initialized through existing weather-specific and all-in-one adverse weather restoration methods.
Empirical values for w_1, w_2, w_3, w_4 are set as 0.5, 0.2, 0.05, 0.2, respectively.
Implementation is based on BasicSR <cit.> and training is performed on two NVIDIA A40 GPUs.
§.§ Comparisons with the State-of-the-Art Methods
We benchmark our method against several state-of-the-art general and all-in-one adverse weather image restoration approaches. Our comparisons encompass recent works including Restormer <cit.>, TransWeather <cit.>, TKL <cit.>, WeatherDiff <cit.>, WGWS-Net <cit.>, MWDT <cit.>, PromptIR <cit.>, and DA-CLIP <cit.>.
For fairness, we compare with the best-performing models, either retrained using our paired data or taken from officially released checkpoints.
§.§.§ Quantitative comparison.
Note that there is no ground-truth clear image for the real adverse weather images.
Therefore, we adopt several no-reference metrics for the quantitative assessment.
Specifically, we use recent blind image quality evaluation metrics, including NIMA <cit.>, MUSIQ <cit.>, CLIP-IQA <cit.>, LIQE <cit.>, and Q-Align <cit.>.
We also utilize the proposed VLM-based image visibility assessment method and report the normalized scores VLM-Vis.
In detail, VLM-Vis is computed over VLM experts, standardized by the minimum and maximum statistics across the dataset for each respective VLM.
The quantitative comparisons are reported in <Ref>.
Our proposed method is ranked first for all image quality assessment metrics on average and in almost all weather conditions.
These values indicate the superior restoration quality of the images.
Moreover, our method achieves the best VLM-Vis across different weather conditions.
These results demonstrate the advantages of our method on real data against existing advanced adverse weather image restoration methods, which focus mainly on synthetic data evaluation.
§.§.§ Qualitative comparison.
Our qualitative assessment is conducted on real-world evaluation datasets <cit.> and the visual outcomes are presented in <ref>.
We can observe that the compared methods are less effective in dealing with real-world adverse weather images and are limited in removing rain, haze, and snow artifacts.
It is noted that MWDT <cit.> mitigates the haze effect but introduces severe color distortion.
In comparison, our method exhibits superior visually perceptual quality, enhancing clarity and contrast, while minimizing rain, haze, and snow artifacts. Notably, our approach effectively eliminates haze in rain and snow scenarios, significantly improving image visibility.
§.§.§ User study.
We conducted a user study to evaluate the visual quality.
For each weather scenario, ten real-world images are chosen, and 32 participants were invited for the evaluation.
Two factors are considered, i.e., image visibility and quality, regarding the extent to which the weather-related artifacts are removed and the restored image remains realistic.
As observed in <ref>, MWDT obtains high image visibility scores, which aligns with our VLM-Vis metric.
Overall, our method exhibits a clear advantage in visibility and quality across weather conditions.
§.§ Ablation Studies
§.§.§ Effectiveness of semi-supervised learning framework.
We start with the baseline model, trained exclusively through supervised learning (ℒ_sup) on labeled synthetic data. Subsequently, we employ the naive mean-teacher <cit.>, a semi-supervised learning method, to explore unlabeled real data and utilize predictions from the teacher network as pseudo-labels (ℒ_ps).
We investigate the effectiveness of the proposed VLM-based components and training strategies, including:
(1) Incorporating VLM-based image assessment r^vlm for updating pseudo-labels,
(2) Pseudo-label initialization (init),
(3) Weather prompt learning (ℒ_wpl),
(4) Semantics regularization (ℒ_sem), and
(5) Iterative update (iter).
Quantitative outcomes with overall performance across different weather conditions are presented in <Ref>, while visual comparisons are depicted in <ref>.
It is evident that the baseline, trained solely on synthetic data using a straightforward semi-supervised learning approach, struggles to effectively address real rain, haze, and snow artifacts.
In contrast, our proposed VLM-based image assessment progressively refines the selection of superior pseudo-labels, emphasizing higher clearness and resulting in predictions with improved visibility. This effect is further amplified with the incorporation of the pseudo-label initialization strategy.
Moreover, the proposed weather prompt learning and description-assisted semantic enhancement largely improve the restoration performance.
This is evidenced by the boosted image quality, the visibility metric scores, and the visual quality with reduced weather-related artifacts (<ref>).
Lastly, our iterative training strategy further enhances the overall quantitative and qualitative outcomes.
§.§.§ Impact of VLM-based image assessment.
We conduct experiments to investigate the VLM-based image assessment for selecting pseudo-labels.
We compare our proposed method with existing image quality assessment metrics, including NIMA <cit.>, MUSIQ <cit.>, CLIP-IQA <cit.>, and LIQE <cit.>, by replacing the pseudo-label update criteria.
As discussed in <ref>, our VLM-based rating approach can select pseudo-labels with fewer weather-related artifacts.
Consequently, the trained models show superior restoration ability, as illustrated in <ref>.
§.§.§ Analysis of semantics regularization.
We study the impact on the semantics regularization based on <ref>.
The VLM <cit.> can detect nuanced differences in whether the restored image appears foggy or overcast.
By further monitoring the training process, we observe that the semantics-enhanced approach benefits learning by leading to better pseudo-labels being stored and thus to improved subsequent training.
Hence, the model trained with the description-assisted semantics regularization ℒ_sem addresses the subtle weather context misalignment, improving the visual quality.
§ CONCLUSION
This paper advances real-world adverse weather image restoration using vision-language models, overcoming the limitations of methods trained on synthetic data.
By evaluating clearness and semantics in natural images, our semi-supervised approach trains models on real, unlabeled images.
Our dual-step strategy, combining image assessment and weather prompt learning, enhances clearness with real data.
Further, semantics enhancement adjusts weather conditions in vision-language model descriptions, addressing context semantics in adverse weather.
Experimental results show that our method outperforms state of the arts. Yet, the computational burden of using large VLMs remains a limitation.
§ ACKNOWLEDGEMENTS
The work was supported by the National Key R&D Program of China (Grant No. 2022ZD0160100), the Research Grants Council of the Hong Kong Special Administrative Region, China (Grant No. 14201620), and the Hong Kong Innovation and Technology Fund (Grant No. MHP/092/22).
|
http://arxiv.org/abs/2409.03638v1 | 20240905155402 | Quantum Natural Gradient with Geodesic Corrections for Small Shallow Quantum Circuits | ["Mourad Halla"] | quant-ph | ["quant-ph"] |
Quantum Natural Gradient with Geodesic Corrections for Small Shallow Quantum Circuits

Mourad Halla

CQTA, Deutsches Elektronen-Synchrotron DESY, Platanenallee 6, 15738 Zeuthen, Germany

September 9, 2024
§ ABSTRACT
The Quantum Natural Gradient (QNG) method enhances optimization in variational quantum algorithms (VQAs) by incorporating geometric insights from the quantum state space through the Fubini-Study metric. In this work, we extend QNG by introducing higher-order integrators and geodesic corrections using the Riemannian Euler update rule and geodesic equations, deriving an updated rule for the Quantum Natural Gradient with Geodesic Correction (QNGGC). QNGGC is specifically designed for small, shallow quantum circuits. We also develop an efficient method for computing the Christoffel symbols necessary for these corrections, leveraging the parameter-shift rule to enable direct measurement from quantum circuits. Through theoretical analysis and practical examples, we demonstrate that QNGGC significantly improves convergence rates over standard QNG, highlighting the benefits of integrating geodesic corrections into quantum optimization processes. Our approach paves the way for more efficient quantum algorithms, leveraging the advantages of geometric methods.
§ INTRODUCTION
Quantum computing represents a significant leap in computational science, enabling the resolution of problems that are fundamentally intractable for classical algorithms by leveraging quantum mechanical principles. Among the leading strategies in near-term quantum computing are Variational Quantum Algorithms (VQAs) <cit.>, which are tailored to harness the power of current noisy intermediate-scale quantum (NISQ) devices. A prominent example of VQAs is the Variational Quantum Eigensolver (VQE) <cit.>, a hybrid quantum-classical algorithm designed to find the ground state energy of quantum systems, a task of paramount importance in areas such as quantum chemistry, materials science, and condensed matter physics. VQE operates by iteratively optimizing a parameterized quantum state, known as an ansatz, using a combination of quantum and classical computations. Quantum processors are employed to prepare and measure the quantum state, while classical optimization algorithms adjust the parameters of the ansatz to minimize the cost function. This synergy between quantum evaluations and classical optimization enables VQE to efficiently explore the solution space, making it a powerful tool for solving complex problems such as those found in High-Energy Physics applications <cit.>.
Optimization techniques are crucial to the performance of VQAs, as they directly influence convergence rates and the quality of the solutions obtained. While traditional methods like vanilla gradient descent (GD) are commonly used due to their straightforward implementation, the unique challenges of quantum optimization landscapes characterized by non-convexity, noise, and barren plateaus demand more sophisticated techniques to enhance convergence and performance.
The Quantum Natural Gradient (QNG) algorithm <cit.>, a generalization of the natural gradient <cit.>, is an advanced optimization technique that enhances the optimization process by incorporating the geometry of the parameter space into the update rules. Unlike standard gradient descent, which assumes a flat parameter space, QNG leverages a Riemannian metric defined by the Fubini-Study metric or more generally the quantum Fisher information <cit.>, capturing the infinitesimal changes in distances between quantum states and aligning parameter updates with the natural curvature of the quantum state manifold. A key factor in QNG’s superior performance, as shown in <cit.>, is its ability to identify regions of high negative curvature early in the optimization process, significantly accelerating convergence. These regions play a crucial role in guiding the optimization along paths that lead to faster descent.
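In its simplest form, a QNG step preconditions the gradient with the (regularized) metric. A minimal numpy sketch, where fubini_study and grad stand for user-supplied routines and eps is the Tikhonov parameter:

import numpy as np

def qng_step(theta, grad, fubini_study, eta=0.1, eps=1e-8):
    # g is the Fubini-Study metric at theta; eps stabilizes the inversion
    g = fubini_study(theta) + eps * np.eye(theta.size)
    return theta - eta * np.linalg.solve(g, grad)   # theta - eta * g^{-1} grad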
Following the foundational work by <cit.>, several extensions of the QNG have been developed to broaden its applicability and robustness. For example, the QNG was extended to handle noisy and nonunitary circuits, making it more suitable for realistic quantum devices that are subject to imperfections and decoherence <cit.>. Other works propose using simultaneous perturbation stochastic approximation techniques to approximate the QFIM in QNG <cit.>. Recently, <cit.> introduced two methods that significantly reduce the resources needed for state preparations required for QNG: the Random Natural Gradient and the Stochastic-Coordinate Quantum Natural Gradient.
The design of the ansatz, which serves as the parameterized quantum circuits in VQAs, plays a pivotal role in the algorithm’s success. The choice of ansatz directly influences the expressiveness and trainability of the quantum circuit, as well as its efficiency in hardware simulations, thus significantly impacting the overall performance of the VQA <cit.>. A well-designed ansatz must strike a balance between complexity and the capacity to accurately represent the target state, making the selection of the ansatz a crucial factor in the effective implementation of VQAs.
In QNG, Riemannian geometry <cit.> provides the framework to understand the geometric structure of the parameter space. The Fubini-Study metric (after Tikhonov regularization, since it is generally ill-defined in VQAs) is a Riemannian metric that defines the intrinsic geometry of the quantum state space, guiding the optimization process by aligning parameter updates with the manifold's curvature. Christoffel symbols describe how directions change as vectors are transported along this curved space, while geodesics represent the shortest and most efficient paths. Accurately computing these geodesics is essential for incorporating higher-order curvature corrections in QNG, enhancing optimization precision and overall performance.
The concept of using geodesic corrections to enhance optimization on manifolds was first introduced in classical optimization <cit.> and further developed in subsequent work <cit.>, primarily applied to nonlinear least squares problems under specific curvature assumptions. These methods highlighted the critical role of manifold geometry in boosting optimization performance. More recently, geodesic corrections were incorporated into natural gradient optimization within classical machine learning, demonstrating improved convergence by preserving higher-order invariance properties <cit.>. Building on these foundational ideas, our work integrates geodesic corrections into QNG, specifically tailored for variational quantum algorithms. This approach leverages the unique geometric characteristics of the quantum state space, enabling optimization that respects and exploits the manifold's inherent curvature for enhanced performance.
This manuscript presents a comprehensive approach to incorporating geodesic corrections into the QNG for applications in variational quantum algorithms. In Section <ref>, we provide an overview of differential geometry and geodesic equations, establishing the foundational mathematical context. Section <ref> delves into optimizing idealized variational quantum circuits using Quantum Natural Gradient Descent, emphasizing its effectiveness in quantum optimization. Section <ref> introduces higher-order integrators and derives the update rule for the Quantum Natural Gradient with Geodesic Correction (QNGGC). As the update rule relies on the Christoffel symbols of the second kind, Section <ref> is dedicated to efficiently computing these symbols using the parameter-shift rule, enabling direct measurements from quantum circuits. In Section <ref>, we apply the derived update rule to various examples: Examples 1 and 2 involve analytical calculations for numerical simulations, while Example 3 utilizes quantum software, specifically Qiskit, for practical simulations. Finally, in Section <ref>, we provide an outlook on potential extensions and future research directions, highlighting the integration of geodesic corrections into quantum optimization frameworks.
§ DIFFERENTIAL GEOMETRY AND GEODESIC EQUATIONS
To establish our notation, we will briefly review essential concepts in differential geometry relevant to our work. For readers interested in further details, we recommend <cit.> and <cit.> for introductions tailored to physicists, and <cit.> for more comprehensive technical discussions.
A manifold is a topological space that locally resembles Euclidean space and supports a consistent coordinate system. More formally, an n-dimensional manifold ℳ is a set equipped with a collection of coordinate charts {(U_i, φ_i)}, where each U_i ⊂ℳ is an open subset, and φ_i: U_i →ℝ^n is a homeomorphism, meaning that U_i is locally similar to ℝ^n. For ℳ to be a differentiable manifold, the transition maps between overlapping charts, φ_j ∘φ_i^-1: φ_i(U_i ∩ U_j) →φ_j(U_i ∩ U_j), must be smooth (infinitely differentiable). This smooth structure allows us to perform calculus on the manifold.
A Riemannian manifold is a differentiable manifold ℳ equipped with a metric tensor g_ij, which is a symmetric, positive-definite tensor field assigning an inner product to each tangent space T_pℳ at a point p ∈ℳ. In local coordinates {x^i}, the metric tensor defines the line element:
ds^2 = g_ij dx^i dx^j,
where g_ij = g_ji. The metric allows for measuring lengths and angles on the manifold. The length of a smooth curve γ: [a, b] →ℳ is given by:
L(γ) = ∫_a^b √(g_ij dγ^i/dτdγ^j/dτ) dτ.
A geodesic on a manifold is a curve whose velocity vector remains parallel to itself along the curve, representing the straightest possible path given the manifold’s geometry. This property is formally expressed by stating that the geodesic has zero covariant acceleration, which accounts for the curvature of the manifold rather than the usual notion of acceleration in Euclidean space. Mathematically, a curve γ(τ) is a geodesic if it satisfies the geodesic equation:
d^2 x^i/dτ^2 + Γ^i_jkdx^j/dτdx^k/dτ = 0,
where τ is an affine parameter along the curve, and Γ^i_jk are the Christoffel symbols of the second kind, defined by:
Γ^i_jk = 1/2 g^il( ∂_j g_lk + ∂_k g_lj - ∂_l g_jk).
The Christoffel symbols define the covariant derivative, which maps tensor fields to other tensor fields, adapting to the curvature of the manifold.
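For concreteness, the definition above can be evaluated numerically for any metric function by central finite differences. The following sketch is ours and purely illustrative; on hardware, the derivatives would instead be obtained via parameter shifts as discussed later:

```python
# Christoffel symbols Gamma^i_{jk} of a metric function by central
# finite differences; dg[l, j, k] approximates d g_{jk} / d theta^l.
import numpy as np

def christoffel(metric_fn, theta, eps=1e-5):
    n = len(theta)
    dg = np.zeros((n, n, n))
    for l in range(n):
        e = np.zeros(n)
        e[l] = eps
        dg[l] = (metric_fn(theta + e) - metric_fn(theta - e)) / (2 * eps)
    g_inv = np.linalg.inv(metric_fn(theta))
    # Gamma^i_{jk} = 1/2 g^{il} (d_j g_{lk} + d_k g_{lj} - d_l g_{jk})
    gamma = 0.5 * (np.einsum('il,jlk->ijk', g_inv, dg)
                   + np.einsum('il,klj->ijk', g_inv, dg)
                   - np.einsum('il,ljk->ijk', g_inv, dg))
    return gamma
```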
The exponential map is a tool in Riemannian geometry that relates the tangent space at a point to the manifold. Given a tangent vector v ∈ T_pℳ, the exponential map Exp_p maps v to a point on the manifold reached by traveling along the geodesic starting at p with initial velocity v. Formally,
Exp_p(v) = γ(1),
where γ is the geodesic satisfying γ(0) = p and γ̇(0) = v. By rescaling the parameter v by a factor of h, the following relation holds:
Exp_p(h v) = γ(h).
To approximate Exp_p(hv), we expand γ(h) around h = 0 using a Taylor series:
γ(h) ≈γ(0) + hγ'(0) + h^2/2γ”(0) + 𝒪(h^3).
This expansion provides insight into the behavior of the geodesic near the starting point p. The first-order term, γ(0) + hγ'(0), corresponds to the initial position and velocity, giving a linear approximation of the geodesic. The second-order term, h^2/2γ”(0), accounts for the curvature effects, refining the approximation by including the acceleration of the geodesic. Since γ”(0) satisfies the geodesic equation, it inherently reflects the manifold's curvature. Thus, for small h, the exponential map can be locally approximated as:
Exp_p(hv) ≈γ(0) + h γ'(0) + h^2/2γ”(0),
capturing both the initial direction and the curvature of the geodesic path.
In the context of quantum information, particularly for the Quantum Natural Gradient, the manifold of interest is the parameter space of quantum states, where the metric tensor relevant for us is defined by the Fubini-Study metric. This metric provides a Riemannian structure that captures the infinitesimal distance between quantum states, facilitating more efficient optimization in variational quantum algorithms. In the next sections, we review how to apply the Fubini-Study metric to the Quantum Natural Gradient and how to distinguish geodesic corrections to QNG from the perspective of the exponential map.
§ OPTIMIZING IDEALIZED VARIATIONAL QUANTUM CIRCUITS WITH QUANTUM NATURAL GRADIENT
This section provides an overview of VQAs and their optimization, focusing on idealized variational quantum circuits. For a more comprehensive understanding, refer to <cit.> for reviews of VQAs and <cit.> for details on QNG.
Variational quantum circuits are constructed using a family of parameterized unitary transformations. For an n-qubit system, the state space is represented by a Hilbert space of dimension N = 2^n, which can be decomposed as a tensor product of two-dimensional spaces: ℂ^N = (ℂ^2)^⊗ n. The parameterized circuits are typically composed of sequences of unitary transformations:
U(θ) = U_L(θ_L) U_L-1(θ_L-1) ⋯ U_1(θ_1),
where θ_l represents the parameters for the l-th layer. Each unitary transformation U_l(θ_l) can be decomposed as:
U_l(θ_l) = e^-iθ_l K_l W_l,
where K_l are Hermitian operators, and W_l are fixed entangling unitary operators acting across the qubits.
The goal of VQE is to minimize a cost function, typically defined as the expectation value of an observable Ô with respect to the quantum state |ψ(θ)⟩ = U(θ) |ψ_0⟩:
ℒ(θ) = ⟨ψ(θ)|Ô|ψ(θ)⟩,
where |ψ_0⟩ is the initial state. The objective is to find the optimal parameters θ^* that minimize ℒ(θ).
The optimization is typically performed using gradient descent, updating the parameters iteratively:
θ_t+1 = θ_t - η ∂_j ℒ(θ_t),
where ∂_j:= ∂/∂θ_j. Equation (<ref>) can be interpreted as an approximation of the solution to an ordinary differential equation (ODE) using the Euler method:
θ̇ = - λ ∂_j ℒ(θ),
where η = h λ is the learning rate, with λ being a time scale constant that affects the speed but not the trajectory of the system, and h is the step size.
However, this ODE is not invariant under reparameterizations of the parameters θ. For instance, if we rescale the parameters θ→ 2θ, the gradient ∂_j ℒ would scale as 1/2∂_j ℒ, leading to inconsistencies in the optimization process.
The core of this issue lies in the differential geometric nature of the gradient. The parameter update θ̇ transforms as a vector in the tangent space T_θℳ, while the gradient ∂_j ℒ is a covector (or 1-form) in the cotangent space T^*_θℳ. Since the ODE in Eq. (<ref>) attempts to relate objects in different spaces with distinct transformation rules, it is not an invariant relation.
QNG alleviates this issue by approximately solving an invariant ODE. The key idea is to raise the index of the gradient using a metric tensor g_ij. By raising the index of ∂_j ℒ on the right-hand side of the gradient descent ODE, the new ODE becomes:
θ̇^i = -λ g^ij∂_j ℒ(θ),
which is now a vector in T_θℳ, thereby resolving the type mismatch in Eq. (<ref>).
This new ODE is invariant under reparameterizations, ensuring that the forward Euler approximation:
θ_t+1 = θ_t - η g^ij∂_j ℒ(θ_t),
remains consistent across different parameter spaces. The metric tensor g_ij is recognized as the Fubini-Study metric, derived from the real part of the Quantum Geometric Tensor:
g_i j = Re[ ⟨∂_i ψ | ∂_j ψ⟩ - ⟨∂_i ψ | ψ⟩⟨ψ | ∂_j ψ⟩ ].
It is important to note that the metric tensor in VQAs does not always define a legitimate Riemannian metric, as it is often degenerate, meaning it may not be invertible. To address this issue, regularization techniques such as Tikhonov regularization are applied by adding a small constant multiplied by the identity matrix (λ I) to the metric, ensuring it is well-defined and invertible. Additionally, the computation of this metric tensor becomes increasingly intensive as the number of parameters in the circuit grows. Therefore, approximations, such as block-diagonal or diagonal forms, are crucial for practical applications <cit.>.
Thus, QNG utilizes a regularized and approximated (often diagonal or block-diagonal) inverse metric tensor to perform parameter updates that respect the intrinsic geometry of the quantum state space, leading to improved convergence speed and accuracy in optimization within VQAs. In the next section, we explore how this approach can be further refined through higher-order integrators and the inclusion of geodesic corrections.
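As a minimal sketch (our illustration; the `grad_fn` and `metric_fn` callables are assumed to be supplied by the surrounding VQA code), a single regularized QNG step can be written as:

```python
# One QNG step with Tikhonov regularization g -> g + lam * I;
# we solve the linear system rather than explicitly inverting the metric.
import numpy as np

def qng_step(theta, grad_fn, metric_fn, eta=0.1, lam=1e-3):
    g = metric_fn(theta)                  # Fubini-Study metric, possibly degenerate
    g_reg = g + lam * np.eye(len(theta))  # Tikhonov regularization
    nat_grad = np.linalg.solve(g_reg, grad_fn(theta))  # g^{-1} grad(L)
    return theta - eta * nat_grad
```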
§ HIGHER-ORDER INTEGRATORS AND GEODESIC CORRECTION
The forward Euler method, commonly used in the QNG, provides only a first-order approximation to the exact solution of the natural gradient ordinary differential equation (ODE). For higher accuracy, higher-order integrators should be employed.
To further refine this approach, we can use the Riemannian Euler method, which leverages the exponential map to update the parameters, ensuring that the updates align with the geometry of the manifold. Using (<ref>), the Riemannian Euler update rule is given by:
θ_t+1 = Exp_θ_t(-h λ g^ij(θ_t) ∂_j ℒ(θ_t)),
where the underlying geodesic γ_t satisfies
γ^i_t(0) = θ^i_t,
γ̇^i_t(0) = -λ g^i j∂_j ℒ(θ_t).
Here, the exponential map Exp translates the current parameter θ_t along the geodesic defined by the natural gradient. This approach is effective because it preserves invariance properties under reparameterization, owing to the characteristics of the exponential map. However, directly computing the exponential map is challenging, as it requires solving the geodesic equation exactly. To approximate this computation, we explore using the first and second-order derivatives.
The first derivatives approximate the geodesic as:
γ^i_t(h) ≈θ_t + h γ̇^i_t(0)
⇒ θ_t+1 = θ_t - η g^ij∂_j ℒ,
where the learning rate η = h λ. This approximation corresponds to the naive Quantum Natural Gradient update rule, utilizing only the first-order information. For a more precise approximation, we can incorporate second-order information from the geodesic equation:
γ^i_t(h) ≈θ_t + h γ̇^i_t(0) + 1/2 h^2 γ̈^i_t(0),
where, according to the geodesic equation (<ref>):
γ̈^i_t(0) = - Γ^i_lmγ̇^l_t(0) γ̇^m_t(0).
The resulting update rule with the geodesic correction is:
θ_t+1 = θ_t + h γ̇^i_t(0) - 1/2 h^2 Γ^i_l mγ̇^l_t(0) γ̇^m_t(0).
We now combine all the relevant equations and heuristically allow the correction term in equation (<ref>) to depend on a tunable parameter b rather than being fixed to η^2, since η^2 is typically too small to effectively capture the geodesic correction effect. This choice is justified because we use an approximate Fubini-Study metric, such as a diagonal metric, and fully capturing the curvature may require going beyond the second-order approximation in (<ref>). Such higher-order corrections would be computationally intensive. To avoid this complexity, we adjust the correction term flexibly with the parameter b, allowing us to capture the geodesic correction effects without the need for exact higher-order terms.
This leads to the following update rule, incorporating the geodesic correction into the QNG:
θ_t+1 = θ_t - η g^ij(θ_t) ∂_j ℒ(θ_t) - b/2Γ^i_lm(θ_t) ( g^lj(θ_t) ∂_j ℒ(θ_t) ) ( g^mk(θ_t) ∂_k ℒ(θ_t) ).
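A minimal sketch of this update in code (our illustration; the gradient, regularized metric, and Christoffel callables are assumed given):

```python
# QNGGC update: a natural-gradient step plus a geodesic correction
# weighted by the tunable parameter b.
import numpy as np

def qnggc_step(theta, grad_fn, metric_fn, christoffel_fn,
               eta=0.1, b=0.4, lam=1e-3):
    g_reg = metric_fn(theta) + lam * np.eye(len(theta))
    v = np.linalg.solve(g_reg, grad_fn(theta))    # v^i = g^{ij} dL/dtheta^j
    gamma = christoffel_fn(theta)                 # gamma[i, l, m] = Gamma^i_{lm}
    correction = np.einsum('ilm,l,m->i', gamma, v, v)
    return theta - eta * v - 0.5 * b * correction
```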
The update rule in equation (<ref>) shows that the correction term depends on the Christoffel symbols of the second kind. In the next section, we will discuss how to compute these symbols efficiently.
§ COMPUTING THE CHRISTOFFEL SYMBOLS WITH THE PARAMETER-SHIFT RULE
The Christoffel symbols, Γ^i_jk, are essential components in the update rule (<ref>), as they capture the curvature of the parameter space defined by the Fubini-Study metric. Traditionally, these symbols are computed by differentiating the Fubini-Study metric (<ref>), which involves first calculating the metric tensor and then deriving the Christoffel symbols through classical differentiation. This approach relies heavily on classical post-processing and does not leverage the quantum device directly.
To enable direct estimation of the Christoffel symbols on a quantum device, we reformulate their computation using the parameter-shift rule specifically adapted to the quantum circuit of interest. This method allows the Christoffel symbols to be directly accessed through quantum measurements, bypassing the need for classical differentiation and making the process more efficient and directly linked to the quantum state preparation. For a detailed overview of the parameter-shift rule and its application in deriving the Fubini-Study metric, please refer to the appendix and the related work in <cit.>.
The Fubini-Study metric for a pure variational quantum state |ψ(θ) ⟩ can be represented as a second-order tensor:
g_j_1 j_2 (θ) = -1/2∂^2/∂θ_j_1∂θ_j_2 | ⟨ψ (θ') | ψ (θ) ⟩|^2 |_θ' = θ.
Using the parameter-shift rule, this metric can be reformulated as:
g_j_1 j_2 (θ) = -1/8 [ | ⟨ψ (θ) | ψ (θ + (𝐞_j_1 + 𝐞_j_2) π / 2) ⟩|^2 - | ⟨ψ (θ) | ψ (θ + (𝐞_j_1 - 𝐞_j_2) π / 2) ⟩|^2 - | ⟨ψ (θ) | ψ (θ + (-𝐞_j_1 + 𝐞_j_2) π / 2) ⟩|^2 + | ⟨ψ (θ) | ψ (θ - (𝐞_j_1 + 𝐞_j_2) π / 2) ⟩|^2 ],
where 𝐞_j_1 and 𝐞_j_2 are unit vectors in the parameter space, and the shift parameter is s = π/2.
If the metric tensor simplifies to diagonal elements, we have:
g_j j(θ) = 1/4[ 1 - | ⟨ψ(θ) | ψ(θ + π e_j) ⟩|^2 ].
In this scenario, the Christoffel symbols can be directly computed from the derivatives of the metric tensor using the higher-order parameter-shift rule, enabling their extraction directly from the quantum circuit. This approach bypasses traditional classical computations, allowing for efficient and accurate estimation directly on the quantum hardware. For the diagonal case, the results can be encapsulated in the following proposition:
The Christoffel symbols of the second kind for the diagonal metric (<ref>) are given by:
Γ^i_jk = 1/(2 g_ii(θ)) [ 1/8 δ_ij( - | ⟨ψ(θ) | ψ(θ + π𝐞_i + (π/2) 𝐞_k) ⟩|^2 + | ⟨ψ(θ) | ψ(θ + π𝐞_i - (π/2) 𝐞_k) ⟩|^2 ) + 1/8 δ_ik( - | ⟨ψ(θ) | ψ(θ + π𝐞_i + (π/2) 𝐞_j) ⟩|^2 + | ⟨ψ(θ) | ψ(θ + π𝐞_i - (π/2) 𝐞_j) ⟩|^2 ) - 1/8 δ_jk( - | ⟨ψ(θ) | ψ(θ + π𝐞_j + (π/2) 𝐞_i) ⟩|^2 + | ⟨ψ(θ) | ψ(θ + π𝐞_j - (π/2) 𝐞_i) ⟩|^2 ) ].
See Appendix <ref>.
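As an illustration (ours, with an assumed fidelity oracle fid(a, b) = |⟨ψ(a)|ψ(b)⟩|^2 that would be measured on hardware, e.g. with a compute-uncompute circuit), the diagonal metric and the proposition's Christoffel symbols can be estimated as follows:

```python
# Estimating the diagonal metric g_jj and the Christoffel symbols of the
# proposition from state-overlap measurements; fid(a, b) is an assumed oracle.
import numpy as np

def diag_metric(fid, theta, j):
    e = np.zeros_like(theta)
    e[j] = np.pi
    return 0.25 * (1.0 - fid(theta, theta + e))

def christoffel_diag(fid, theta, i, j, k):
    def bracket(a, b):  # the overlap-difference terms of the proposition
        ea = np.zeros_like(theta); ea[a] = np.pi
        eb = np.zeros_like(theta); eb[b] = np.pi / 2
        return 0.125 * (-fid(theta, theta + ea + eb)
                        + fid(theta, theta + ea - eb))
    val = 0.0
    if i == j: val += bracket(i, k)
    if i == k: val += bracket(i, j)
    if j == k: val -= bracket(j, i)
    return val / (2.0 * diag_metric(fid, theta, i))
```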
§ APPLICATION EXAMPLES
In this section, we present examples to illustrate the practical application of the previously developed theories.
§.§ Example 1 : Single Qubit
In this example, we examine a single qubit scenario. For the VQE, we take the ansatz state defined as:
|ϕ(θ)⟩ = cosθ_0|0⟩ + e^2iθ_1sinθ_0|1⟩ = [ cosθ_0; e^2iθ_1sinθ_0 ].
The Hamiltonian for which we want to find the ground state is H = σ_x. Thus, the cost function for the system is:
f(θ) = ⟨ϕ(θ)|H|ϕ(θ)⟩ = sin(2θ_0)cos(2θ_1),
and its gradient vector is:
∂ f(θ)/∂θ = [ 2cos(2θ_0)cos(2θ_1); -2sin(2θ_0)sin(2θ_1) ].
The Fubini-Study metric for this single qubit example, using equation (<ref>), is given by:
F = [ 1 0; 0 sin^2(2θ_0) ].
The non-zero Christoffel symbols of this metric, calculated using equation (<ref>), are:
Γ^0_ 1 1 = - 2 sin(2 θ_0) cos(2 θ_0),
Γ^1_ 0 1 =Γ^1_ 1 0 = 2 cos(2 θ_0)/sin(2 θ_0).
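As a quick numerical sanity check (our code, reusing the QNGGC step structure from above), a single geodesic-corrected step already lowers the cost towards the target E^* = -1:

```python
# One QNGGC step on the single-qubit example; the cost decreases
# from ~0.520 towards E* = -1.
import numpy as np

def cost(t):
    return np.sin(2 * t[0]) * np.cos(2 * t[1])

def grad(t):
    return np.array([2 * np.cos(2 * t[0]) * np.cos(2 * t[1]),
                     -2 * np.sin(2 * t[0]) * np.sin(2 * t[1])])

def metric(t):
    return np.diag([1.0, np.sin(2 * t[0]) ** 2])

def christoffel(t):
    gamma = np.zeros((2, 2, 2))
    gamma[0, 1, 1] = -2 * np.sin(2 * t[0]) * np.cos(2 * t[0])
    gamma[1, 0, 1] = gamma[1, 1, 0] = 2 * np.cos(2 * t[0]) / np.sin(2 * t[0])
    return gamma

eta, b, lam = 0.1, 0.1, 1e-3
theta = np.array([0.3, 0.2])
print(cost(theta))                                   # ~0.520
v = np.linalg.solve(metric(theta) + lam * np.eye(2), grad(theta))
corr = np.einsum('ilm,l,m->i', christoffel(theta), v, v)
theta = theta - eta * v - 0.5 * b * corr
print(cost(theta))                                   # lower, ~-0.15
```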
Figure <ref> demonstrates the impact of geodesic corrections on optimization performance. The inclusion of geodesic corrections in QNGGC significantly enhances convergence speed (in terms of steps) compared to GD and QNG, as depicted in subplots (a) and (b), where QNGGC rapidly approaches the target energy E^* = -1. Subplot (c) shows the norm of the geodesic correction term, b/2Γ^i_lm(θ_t) ( g^lj(θ_t) ∂_j ℒ(θ_t) ) ( g^mk(θ_t) ∂_k ℒ(θ_t) ), which plays a crucial role in guiding the optimization process, particularly in the initial iterations. The subsequent reduction of this norm towards zero indicates that the convergence of the gradient is being effectively managed, leading to more stable updates. Subplot (d) illustrates the optimization paths in the parameter space, where QNGGC provides a more direct and efficient trajectory compared to GD and QNG. This behavior underscores the advantage of leveraging curvature corrections, resulting in more precise and adaptive parameter updates.
§.§ Example 2: Two-Qubit Simulation of the Hydrogen Molecule (H_2)
In this example, we focus on finding the ground state of the hydrogen molecule (H_2) using a two-qubit variational quantum eigensolver (VQE) approach. The Hamiltonian for the system is given by <cit.>:
H = α (σ_z ⊗ I + I ⊗σ_z) + βσ_x ⊗σ_x,
where α = 0.4 and β = 0.2. In this Hamiltonian, σ_z and σ_x are Pauli matrices that represent spin operators acting on individual qubits. This Hamiltonian has four eigenvalues:
h_1 = √(4α^2 + β^2), h_2 = β, h_3 = -β, h_4 = -√(4α^2 + β^2),
with the ground state corresponding to the minimum eigenvalue h_4. The corresponding eigenvector is the ground state, |ψ_min⟩, which is expressed as:
|ψ_min⟩∝ -β|00⟩ + (2α + √(4α^2 + β^2)) |11⟩.
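A quick numerical check of this spectrum (our code) with numpy:

```python
# Check of the four eigenvalues and the ground energy h4.
import numpy as np

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)
alpha, beta = 0.4, 0.2
H = alpha * (np.kron(Z, I2) + np.kron(I2, Z)) + beta * np.kron(X, X)
print(np.linalg.eigvalsh(H))                 # [-0.8246, -0.2, 0.2, 0.8246]
print(-np.sqrt(4 * alpha ** 2 + beta ** 2))  # h4 = -sqrt(4a^2 + b^2)
```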
The ansatz for the VQE is designed to approximate this ground state. It is given as follows (refer to Figure <ref> for a visual representation):
|ψ⟩ = (CRY(2θ_2)_q_0, q_1) (CRX_q_0, q_1) ( R_y(2θ_0) ⊗ R_y(2θ_1) ) |0⟩⊗|0⟩
where R_y(θ) denotes the single-qubit rotation operator defined as R_y(θ) = e^-iθσ_y/2, and the entangling gates act between two qubits, q_0 and q_1, as CRX_q_0, q_1 = I ⊗ |0⟩⟨ 0| + X ⊗ |1⟩⟨ 1| (a parameter-free CNOT) and CRY(θ)_q_0, q_1 = I ⊗ |0⟩⟨ 0| + R_y(θ) ⊗ |1⟩⟨ 1|. Please refer to Appendix <ref> for details on the cost function and its gradient.
In this example, the Fubini-Study metric F is calculated as:
F = [ 1 0 cos(θ_1) sin(θ_1); 0 1 -cos(θ_0) sin(θ_0); cos(θ_1) sin(θ_1) -cos(θ_0) sin(θ_0) 1/2(1 - cos(2θ_0) cos(2θ_1)) ]
To avoid singularities in F when updating the parameters in the VQE, we apply Tikhonov regularization by adding a small constant λ as a multiple of the identity matrix I. The inverse Fubini-Study metric F^-1 and the Christoffel symbols for this example are provided in Appendix <ref>.
Figure <ref> demonstrates that QNGGC significantly outperforms GD and QNG, rapidly reducing the energy cost and achieving faster convergence to the target energy. The geodesic correction helps QNGGC navigate the parameter space more effectively, avoiding inefficient trajectories and reaching optimal fidelity faster than the other methods.
Figure <ref> uses the same conditions as Figure <ref>, but with a different value of b = 0.4 and initial parameters set to a more challenging configuration, [π/2, π/2, 0]. This demonstrates how the QNGGC optimizer efficiently identifies a path to escape from plateaus, leading to faster convergence to the ground state compared to GD and QNG.
Figure <ref> presents averaged results over 50 runs with random initializations, highlighting QNGGC’s robust performance across varied starting conditions.
§.§ Example 3: Transverse Field Ising Model
To enhance our previous results, we extend our approach to simulations involving larger qubit systems, including 4 and 7 qubits, using the quantum software Qiskit <cit.>. We employ the parameter-shift rule to compute both the gradient and Christoffel symbols. The ansatz used in VQE is the EfficientSU2 ansatz <cit.>, with one repetition as the circuit depth and linear entanglement for simplicity, as illustrated in Figure <ref>. Our objective is to determine the ground state energy of the Transverse Field Ising Model <cit.> under open boundary conditions:
H = -∑_i Z_i Z_i+1 - h ∑_i X_i,
where Z_i and X_i are Pauli operators acting on the i-th qubit, with the transverse field strength fixed at h = 10 for our simulations. This model captures the interplay between nearest-neighbor spin interactions and a transverse magnetic field, driving the system between distinct quantum phases. For smaller values of the transverse field h, the spins tend to align along the z-direction, corresponding to a ferromagnetic ordered phase. As h increases, the system becomes increasingly disordered and transitions into a paramagnetic phase. The critical point at h = 1 marks the quantum phase transition between these phases, where spontaneous symmetry breaking and critical quantum fluctuations occur.
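For reference, a minimal sketch of the Hamiltonian and ansatz construction, assuming current Qiskit interfaces (the VQE loop itself is omitted):

```python
# Open-boundary TFIM Hamiltonian and EfficientSU2 ansatz.
# Note: Qiskit Pauli strings are little-endian; here this only relabels sites.
from qiskit.circuit.library import EfficientSU2
from qiskit.quantum_info import SparsePauliOp

def tfim_hamiltonian(n, h=10.0):
    terms = []
    for i in range(n - 1):                 # -sum_i Z_i Z_{i+1}
        zz = ["I"] * n
        zz[i] = zz[i + 1] = "Z"
        terms.append(("".join(zz), -1.0))
    for i in range(n):                     # -h sum_i X_i
        x = ["I"] * n
        x[i] = "X"
        terms.append(("".join(x), -h))
    return SparsePauliOp.from_list(terms)

H = tfim_hamiltonian(4, h=10.0)
ansatz = EfficientSU2(4, reps=1, entanglement="linear")
print(H.num_qubits, ansatz.num_parameters)   # 4 qubits, 16 parameters
```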
To benchmark the effectiveness of our optimizers, we present in Fig. <ref>, similar to previous examples, log_10(|⟨ H ⟩ - E_0|) to evaluate how quickly and accurately each optimization method converges to the ground state energy. This provides insights into their efficiency and robustness. Parameter tuning was conducted using a grid search to identify the optimal settings for each optimizer.
§ CONCLUSIONS AND OUTLOOK
In this manuscript, we extended the QNG method by incorporating higher-order integration techniques inspired by the Riemannian Euler update rule and a geometric perspective, leading to the development of an update rule based on geodesic equations. We introduced a tunable parameter b to heuristically adjust the effect of the geodesic correction, allowing for flexible integration of higher-order information. Through various examples, we demonstrated that QNGGC offers a more effective optimization strategy compared to traditional QNG by more closely following the direct optimization path. To compute the Christoffel symbols directly from quantum circuits, we proposed a method using the parameter-shift rule, enabling direct evaluation within quantum circuits.
The QNGGC update rule reduces the number of steps needed to reach the target energy but remains time-intensive compared to native QNG due to the computational cost of Christoffel symbol calculations. Currently, QNGGC is mainly applicable to small, shallow quantum circuits.
Future work could explore extending our results to noisy and nonunitary circuits, following approaches such as those presented in <cit.>. Developing more efficient methods for computing Christoffel symbols, such as stochastic parameter-shift rules and simultaneous perturbation stochastic approximation techniques for the Quantum Fisher Information <cit.>, would also be valuable. Additionally, randomness-based methods for Christoffel symbol computation <cit.> represent a promising avenue for further research. These extensions would enhance the broader applicability and efficiency of geodesic corrections in quantum optimization frameworks. Further research will also focus on moving beyond second-order approximations. Our approach could also be applied to time-dependent optimizers, particularly those inspired by the metric tensor and differential geometry.
§ ACKNOWLEDGEMENTS
The author thanks Karl Jansen, Stefan Kühn, Yahui Chai, Yibin Guo, Tim Schwägerl, and Cenk Tüysüz from CQTA, DESY, as well as Tobias Hartung (Northeastern University, London) and Naoki Yamamoto (Keio University, Japan) for their helpful discussions and insightful feedback.
§ PARAMETER-SHIFT RULE AND METRIC TENSOR CALCULATION
This section provides a brief review of the parameter-shift rule and its application in computing both the gradient and the metric tensor for variational quantum algorithms. The parameter-shift rule allows for the direct estimation of gradients and higher-order derivatives on quantum hardware. For further details, we refer the reader to <cit.>.
§.§ Parameter-Shift Rule for Gradient Estimation
The parameter-shift rule provides a method to calculate the derivative of an expectation value with respect to a parameter θ_i in a quantum circuit. Consider an observable Ô and a parameterized quantum state |ψ(θ)⟩; the expectation value of the observable is given by:
⟨Ô⟩(θ) = ⟨ψ(θ) | Ô | ψ(θ) ⟩.
The derivative of this expectation value with respect to the parameter θ_i can be computed using the parameter-shift rule as:
∂_i⟨Ô⟩(θ) = 1/2[⟨Ô⟩(θ + s 𝐞_i) - ⟨Ô⟩(θ - s 𝐞_i)],
where s is the shift in the parameter, typically s = π/2, and 𝐞_i is the unit vector in the direction of θ_i.
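In code, the rule amounts to two shifted evaluations per parameter; a minimal sketch (ours) for a generic expectation function expval(theta):

```python
# Parameter-shift gradient: two evaluations of expval per parameter.
import numpy as np

def parameter_shift_grad(expval, theta, s=np.pi / 2):
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta)
        e[i] = s
        grad[i] = 0.5 * (expval(theta + e) - expval(theta - e))
    return grad
```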
§.§ Fubini-Study Metric
To derive the Fubini-Study metric tensor g_ij(θ), we begin by examining the infinitesimal distance between nearby quantum states in the parameter space. The metric tensor captures the local geometric structure of this space, reflecting how the quantum state |ψ(θ)⟩ evolves under small displacements <cit.> <cit.> <cit.>.
We start by considering the normalization condition of the quantum state:
⟨ψ(θ) | ψ(θ) ⟩ = 1.
Differentiating this condition twice with respect to the parameters θ^i and θ^j, we obtain:
⟨∂_i ∂_j ψ(θ) | ψ(θ) ⟩ + ⟨∂_i ψ(θ) | ∂_j ψ(θ) ⟩ + ⟨∂_j ψ(θ) | ∂_i ψ(θ) ⟩ + ⟨ψ(θ) | ∂_i ∂_j ψ(θ) ⟩ = 0.
Next, consider the quantum state |ψ(θ + δθ)⟩ near |ψ(θ)⟩, where δθ is the displacement vector. Using a Taylor expansion, we express the shifted state as:
|ψ(θ + δθ)⟩ = |ψ(θ)⟩ + ∂_i |ψ(θ)⟩δθ^i + 1/2∂_i ∂_j |ψ(θ)⟩δθ^i δθ^j + 𝒪(δθ^3).
Taking the inner product of |ψ(θ)⟩ with this expanded state, we get:
⟨ψ(θ) | ψ(θ + δθ) ⟩ = 1 + ⟨ψ(θ) | ∂_i ψ(θ) ⟩δθ^i + 1/2⟨ψ(θ) | ∂_i ∂_j ψ(θ) ⟩δθ^i δθ^j + 𝒪(δθ^3).
The fidelity between the states |ψ(θ)⟩ and |ψ(θ + δθ)⟩ is given by:
|⟨ψ(θ) | ψ(θ + δθ) ⟩|^2 = 1 + 2 Re⟨ψ(θ) | ∂_i ψ(θ) ⟩δθ^i + ( Re⟨ψ(θ) | ∂_i ∂_j ψ(θ) ⟩ + ⟨∂_i ψ(θ) | ψ(θ) ⟩⟨ψ(θ) | ∂_j ψ(θ) ⟩) δθ^i δθ^j + 𝒪(δθ^3).
Using the normalization and differentiation results, we find the infinitesimal squared distance between the quantum states as:
d^2(P_ψ, P_ψ + δθ) = Re[⟨∂_i ψ | ∂_j ψ⟩ - ⟨∂_i ψ | ψ⟩⟨ψ | ∂_j ψ⟩] δθ^i δθ^j.
This form directly defines the Fubini-Study metric tensor:
g_ij(θ) = Re[⟨∂_i ψ(θ) | ∂_j ψ(θ) ⟩ - ⟨∂_i ψ(θ) | ψ(θ) ⟩⟨ψ(θ) | ∂_j ψ(θ) ⟩].
Thus, the Fubini-Study metric tensor g_ij(θ) quantifies the infinitesimal squared distance between nearby quantum states in the parameter space, providing a geometric interpretation of the state evolution. To express this metric tensor in terms of measurable quantities, we employ the parameter-shift rule, which enables direct computation on quantum hardware. The resulting expressions for the metric tensor components are given in (<ref>) for the full form and (<ref>) for the diagonal approximation.
§.§ Proof of the Proposition
Using the higher-order derivatives strategy with the parameter-shift rule, the differentiation of the metric (<ref>) with respect to θ_j is given by:
∂_k g_j j(θ) = 1/8[ - | ⟨ψ(θ) | ψ(θ + π𝐞_j + (π/2) 𝐞_k) ⟩|^2 + | ⟨ψ(θ) | ψ(θ + π𝐞_j - (π/2) 𝐞_k) ⟩|^2 ].
Using the definition of the Christoffel symbols (<ref>), for a diagonal metric tensor, the Christoffel symbols are determined by the following specific cases:
1. For i = j = k:
Γ^i_ii = 1/(2 g_ii) ∂ g_ii/∂θ^i,
2. For i = j ≠ k:
Γ^i_ik = 1/(2 g_ii) ∂ g_ii/∂θ^k,
3. For i = k ≠ j:
Γ^i_ij = 1/(2 g_ii) ∂ g_ii/∂θ^j,
4. For j = k ≠ i:
Γ^i_jj = -1/(2 g_ii) ∂ g_jj/∂θ^i,
5. For i ≠ j ≠ k:
Γ^i_jk = 0.
The case i = j = k is excluded in practice, since shifting the same parameter twice does not yield the distinct mixed derivatives required by the shift formula above.
Using the Kronecker delta notation, the expression can be summarized as:
Γ^i_jk = 1/(2 g_ii) ( δ_ij∂ g_ii/∂θ^k + δ_ik∂ g_ii/∂θ^j - δ_jk∂ g_jj/∂θ^i),
where δ_ij is the Kronecker delta, defined as:
δ_ij =
1 if i = j,
0 if i ≠ j.
By substituting equation (<ref>) into equation (<ref>), we obtain the final expression for the Christoffel symbols for the diagonal metric (<ref>).
□
§ DETAIL CALCULATION FOR EXAMPLE 2
In this appendix, we provide the detailed expressions for the inverse Fubini-Study metric and the Christoffel symbols for Example 2, which simulates the Hydrogen molecule (H_2).
§.§ Inverse Fubini-Study Metric
The inverse Fubini-Study metric F^-1 is obtained by inverting the Tikhonov-regularized metric F + λ I; the resulting closed-form expression is lengthy and is therefore not reproduced here.
§.§ Christoffel Symbols
The Christoffel symbols Γ^i_jk for this example follow from applying equation (<ref>) to the regularized metric above; the explicit expressions are likewise lengthy and are not reproduced here.
§.§ Ansatz, Cost Function, and Gradient
The ansatz given in equation (<ref>) can be expanded as follows:
|ψ⟩ = (cos(θ_0) cos(θ_1))|00⟩
+ (cos(θ_0) cos(θ_2) sin(θ_1) - cos(θ_1) sin(θ_0) sin(θ_2))|01⟩
+ (sin(θ_0) sin(θ_1))|10⟩
+ (cos(θ_1) cos(θ_2) sin(θ_0) + cos(θ_0) sin(θ_1) sin(θ_2))|11⟩.
Using this ansatz and the hydrogen (H_2) Hamiltonian, we derive the following cost function:
ℒ(θ_0, θ_1, θ_2, θ_3) = 2 cos^2(θ_0) (-αsin^2(θ_1) sin^2(θ_2) + αcos^2(θ_1) + βsin(θ_1) cos(θ_1) sin(θ_2))
- 2 sin^2(θ_0) cos(θ_1) (αcos(θ_1) cos^2(θ_2) + βsin(θ_1) sin(θ_2))
+ 2 sin(θ_0) cos(θ_0) cos(θ_2) (β - 2 αsin(θ_1) cos(θ_1) sin(θ_2))
The gradient of the cost function with respect to the parameters θ_0, θ_1, and θ_2 is expressed as:
∂ℒ/∂θ_0 = -sin(2θ_0) (α + 2αcos(2θ_1) + αcos(2θ_2) + 2βsin(2θ_1) sin(θ_2))
+ cos(2θ_0) (2βcos(θ_2) - αsin(2θ_1) sin(2θ_2)),
∂ℒ/∂θ_1 = cos(2θ_0) (-2αsin(2θ_1) + 2βcos(2θ_1) sin(θ_2))
- α(2 sin(2θ_1) sin^2(θ_2) + cos(2θ_1) sin(2θ_0) sin(2θ_2)),
∂ℒ/∂θ_2 = βcos(2θ_0) cos(θ_2) sin(2θ_1)
- sin(2θ_0) (αcos(2θ_2) sin(2θ_1) + βsin(θ_2)) + α(-cos(2θ_0) + cos(2θ_1)) sin(2θ_2).
99
Cerezo2021
M. Cerezo, A. Arrasmith, R. Babbush, S. C. Benjamin, S. Endo, K. Fujii, et al., “Variational Quantum Algorithms,” Nature Reviews Physics, 3(9), 625-644 (2021). <https://doi.org/10.1038/s42254-021-00348-9>.
McClean2016
J. R. McClean, J. Romero, R. Babbush and A. Aspuru-Guzik, “The theory of variational hybrid quantum-classical algorithms,” New Journal of Physics, 18(2), 023023 (2016). <https://doi.org/10.1088/1367-2630/18/2/023023>.
Bharti2022
K. Bharti, A. Cervera-Lierta, T. H. Kyaw, T. Haug, S. Alperin-Lea, A. Anand, et al., “Noisy intermediate-scale quantum algorithms,” Reviews of Modern Physics, 94(1), 015004(69) (2022). <https://doi.org/10.1103/RevModPhys.94.015004>.
Peruzzo2014
A. Peruzzo, J. McClean, P. Shadbolt, M.-H. Yung, X.-Q. Zhou, P. J. Love, A. Aspuru-Guzik, and J. L. O'Brien, “A variational eigenvalue solver on a photonic quantum processor,” Nature Communications 5, 4213 (2014). <https://www.nature.com/articles/ncomms5213>doi:10.1038/ncomms5213.
karl
A. Di Meglio, et al., “Quantum Computing for High-Energy Physics: State of the Art and Challenges,” PRX Quantum 5, 037001 (2024). <https://doi.org/10.1103/PRXQuantum.5.037001>doi:10.1103/PRXQuantum.5.037001.
Stokes2020
J. Stokes, J. Izaac, N. Killoran, and G. Carleo, “Quantum Natural Gradient,” Quantum 4, 269 (2020). <https://doi.org/10.22331/q-2020-05-25-269>.
Amari1998
S.-I. Amari, “Natural Gradient Works Efficiently in Learning,” Neural Computation 10 (2), 251-276 (1998). <https://doi.org/10.1162/089976698300017746>
Meyer
J. J. Meyer, “Fisher Information in Noisy Intermediate-Scale Quantum Applications,” Quantum 5, 539 (2021). <https://doi.org/10.22331/q-2021-09-09-539>.
Katabarwa2022
A. Katabarwa, S. Sim, D. E. Koh, and P.-L. Dallaire-Demers, “Connecting geometry and performance of two-qubit parameterized quantum circuits,” Quantum 6, 782 (2022). <https://doi.org/10.22331/q-2022-08-23-782>.
Koczor
B. Koczor and S. C. Benjamin, “Quantum natural gradient generalized to noisy and nonunitary circuits,” Phys. Rev. A 106, 062416 (2022). <https://doi.org/10.1103/PhysRevA.106.062416>.
Gacon
J. Gacon, C. Zoufal, G. Carleo and S. Woerner, “Simultaneous Perturbation Stochastic Approximation of the Quantum Fisher Information,” Quantum 5, 567 (2021). <https://doi.org/10.22331/q-2021-10-20-567>.
Kolotouros
I. Kolotouros and P. Wallden, “Random Natural Gradient,” arXiv:2311.04135 (2023). <https://doi.org/10.48550/arXiv.2311.04135>.
Kandala2017
A. Kandala, A. Mezzacapo, K. Temme, M. Takita, M. Brink, J. M. Chow, and J. M. Gambetta, “Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets,” Nature 549, 242-246 (2017). <https://doi.org/10.1038/nature23879>.
Sim2019Advanced
S. Sim, P. D. Johnson, and A. Aspuru-Guzik, “Expressibility and Entangling Capability of Parameterized Quantum Circuits for Hybrid Quantum-Classical Algorithms,” Advanced Quantum Technologies 2, 1900070 (2019). <https://doi.org/10.1002/qute.201900070>.
Tobias
L. Funcke, T. Hartung, K. Jansen, S. Kühn, and P. Stornati, “Dimensional Expressivity Analysis of Parametric Quantum Circuits,” Quantum 5, 422 (2021). <https://doi.org/10.22331/q-2021-03-29-422>.
Lee2013
J. M. Lee, Introduction to Smooth Manifolds, 2nd ed., Springer, 2012. <https://link.springer.com/book/10.1007/978-1-4419-9982-5>.
Wald1984
R. M. Wald, General Relativity, University of Chicago Press, 1984. <https://doi.org/10.7208/chicago/9780226870373.001.0001>
Frankel2011
T. Frankel, The Geometry of Physics: An Introduction, 3rd ed., Cambridge University Press, 2011. <https://doi.org/10.1017/CBO9781139061377>
Transtrum2011
M. K. Transtrum, B. B. Machta, and J. P. Sethna, “Geometry of nonlinear least squares with applications to sloppy models and optimization,” Physical Review E 83, 036701 (2011). <http://dx.doi.org/10.1103/PhysRevE.83.036701>
Transtrum2012
M. K. Transtrum and J. P. Sethna, “Geodesic acceleration and the small-curvature approximation for nonlinear least squares,” arXiv:1207.4999 (2012). <https://doi.org/10.48550/arXiv.1207.4999>
Song2018
Y. Song, J. Song, and S. Ermon, “Accelerating Natural Gradient with Higher-Order Invariance,” Proceedings of the 35th International Conference on Machine Learning (ICML), Stockholm, Sweden, PMLR 80 (2018). <https://doi.org/10.48550/arXiv.1803.01273>
Yamamoto
N. Yamamoto, “On the natural gradient for variational quantum eigensolver,” arXiv:1909.05074 (2019). <https://doi.org/10.48550/arXiv.1909.05074>
BravoPrieto2020
C. Bravo-Prieto, J. Lumbreras-Zarapico, L. Tagliacozzo, and J. I. Latorre, “Scaling of variational quantum circuit depth for condensed matter systems,” Quantum 4, 272 (2020). <https://doi.org/10.22331/q-2020-05-28-272>
Mitarai
K. Mitarai, M. Negoro, M. Kitagawa, and K. Fujii, “Quantum circuit learning,” Phys. Rev. A 98, 032309 (2018). <https://doi.org/10.1103/PhysRevA.98.032309>
Schuld
M. Schuld, V. Bergholm, C. Gogolin, J. Izaac, and N. Killoran, “Evaluating analytic gradients on quantum hardware,” Phys. Rev. A 99, 032331 (2019). <https://doi.org/10.1103/PhysRevA.99.032331>
Mari
A. Mari, T. R. Bromley, and N. Killoran, “Estimating the gradient and higher-order derivatives on quantum hardware,” Phys. Rev. A 103, 012405 (2021). <https://doi.org/10.1103/PhysRevA.103.012405>
Wierichs
D. Wierichs, J. Izaac, C. Wang, and C. Yen-Yu Lin, “General parameter-shift rules for quantum gradients,” Quantum 6, 677 (2022). <https://doi.org/10.22331/q-2022-03-30-677>.
sun
R. Wiersema, D. Lewis, D. Wierichs, J. Carrasquilla, and N. Killoran, “Here comes the SU(N): multivariate quantum gates and gradients,” Quantum 8, 1275 (2024). <https://doi.org/10.22331/q-2024-03-07-1275>.
Riemannianflow
R. Wiersema and N. Killoran, “Optimizing quantum circuits with Riemannian gradient flow,” Phys. Rev. A 107, 062421 (2023). <https://doi.org/10.1103/PhysRevA.107.062421>.
Qiskit
H. Abraham, A. Akhalwaya, G. Aleksandrowicz, et al., “Qiskit: An Open-source Framework for Quantum Computing,” 2024. <https://qiskit.org>.
Mathematica
Wolfram Research, Inc., “Mathematica, Version 14.1,” Champaign, IL, 2024. <https://www.wolfram.com/mathematica/>.
Provost1980
J. P. Provost and G. Vallee, “Riemannian Structure on Manifolds of Quantum States,” Commun. Math. Phys. 76, 289-301 (1980). <https://doi.org/10.1007/BF02193559>.
DowlingNielsen2006
M. R. Dowling and M. A. Nielsen, “The geometry of quantum computation,” arXiv preprint arXiv:quant-ph/0701004, 2006. <https://doi.org/10.48550/arXiv.quant-ph/0701004>.
Haug2021
T. Haug, K. Bharti, and M. S. Kim, “Capacity and Quantum Geometry of Parametrized Quantum Circuits,” PRX Quantum 2, 040309 (2021). <https://doi.org/10.1103/PRXQuantum.2.040309>.
Heydari2015
H. Heydari, “Geometric formulation of quantum mechanics,” arXiv preprint arXiv:1503.00238, 2015. <https://doi.org/10.48550/arXiv.1503.00238>.
|
http://arxiv.org/abs/2409.02632v1 | 20240904115126 | Evaluating Environments Using Exploratory Agents | [
"Bobby Khaleque",
"Mike Cook",
"Jeremy Gow"
] | cs.AI | [
"cs.AI",
"cs.HC"
] |
Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). Work in Progress.
Bobby Khaleque (Queen Mary University of London, [email protected]), Mike Cook (Kings College London, [email protected]), Jeremy Gow (Queen Mary University of London, [email protected])
§ ABSTRACT
Exploration is a key part of many video games. We investigate the use of an exploratory agent to provide feedback on the design of procedurally generated game levels: 5 engaging levels and 5 unengaging levels. We expand upon a framework introduced in previous research which models motivations for exploration, and introduce a fitness function for evaluating an environment's potential for exploration. Our study showed that our exploratory agent can clearly distinguish between engaging and unengaging levels. The findings suggest that our agent has the potential to serve as an effective tool for assessing procedurally generated levels in terms of exploration. This work contributes to the growing field of AI-driven game design by offering new insights into how game environments can be evaluated and optimised for player exploration.
Procedural Content Generation, Evaluation of Generated Content, AI Agents, Exploration
Evaluating Environments Using Exploratory Agents
September 9, 2024
================================================
§ INTRODUCTION
Exploration in video game environments is an area of study that seeks to understand what constitutes engaging and immersive experiences for players. The process of navigating through these virtual environments is often driven by the design of the levels themselves, which can either encourage or hinder exploration. As level designers strive to create more compelling and interactive worlds, understanding the factors that contribute to a good exploratory experience becomes increasingly important.
This study seeks to address the question of what makes certain game levels more conducive to exploration than others. Specifically, it investigates whether levels generated by two different procedural content generators (Generators A and B) differ in their ability to facilitate exploration. Generator A is designed to produce levels that are generally engaging, with a balanced navigable area and a variety of interactive elements, while Generator B generates levels that might be considered unengaging, characterised by their lack of objects and poor object placement throughout a level. Both generators use Wave Function Collapse (WFC) to generate levels. More information about WFC can be found in <ref>.
To evaluate the effectiveness of these generators, an exploratory agent, modelled after those used in our previous study <cit.>, was employed. The agent's behaviour was analysed based on several key metrics: environment coverage, inspection of unique objects, a custom novelty measure, entropy of the agent's path, and the average motivation experienced by the agent along its path. These metrics were chosen to quantify the quality of exploration in a way that balances the novelty and familiarity of the environment, the diversity of pathways, the unpredictability of the agent's movements, and whether the agent actually finds motivation to explore the environment.
We introduced the concept of an exploratory agent as "a type of agent which traverses a level and explores it in accordance to its features. It surveys an environment, to observe which features are available in the level, and moves in the direction towards the closest interesting target(s) or direction(s)."
Exploratory agents have previously been used to evaluate levels in generative systems. For example, in Stahkle et al.'s PathOS framework <cit.> for assisting designers in level and world design. Cook also investigated evaluating levels with agents that used a vision-based approach <cit.>, although their project was abandoned.
Our experiment focuses on the role that exploratory agents can play as part of a PCG framework for differentiating between levels generated through various algorithms, and assessing their suitability for exploration. The key question this study tries to answer is whether exploratory agents are an effective means of filtering procedurally generated levels for exploratory experiences. These agents, by being given possible motivations for exploration, make it possible for developers to get valuable information about the quality and the engagement a level may provide, hence offering a systematic way of optimising PCG processes.
In this paper, we will evaluate the quality of the generated levels with respect to the exploratory behaviours of agents by running a number of experiments; these experiments use coverage, entropy, and novelty metrics to see how well these levels support exploration. The long-term aim of the work is to demonstrate that exploratory agents can be both reliable and efficient in filtering and improving the quality of procedurally generated content, ensuring that the generated levels are varied, engaging, and supportive of player exploration.
§ BACKGROUND
§.§ Curiosity Based Exploration
Pathak et al <cit.> provide an investigation of curiosity-driven learning in artificial agents, with a particular focus on agents operating without any external rewards. The presented work belongs to the area of autonomous agents that learn to control their behaviour in various simulated environments, including games and physics simulations, purely driven by intrinsic motivation operationalised through curiosity. The authors investigated the use of different feature spaces in estimating the prediction error and established that, while random features are enough in some cases, learned features can offer better generalisation. The research pointed to the possible failure of prediction-based rewards, more precisely in the stochastic case, and suggested that additional research on the efficient handling of such environments should be conducted.
Pathak et al's techniques are different from ours because our agent is not meant to be general in the sense that it would explore many different environments using intrinsic motivations. Our agent is given motivations, like the agent in our previous work <cit.>, and it will explore in different ways in different environments.
§.§ Agents to Assist Game Design
Stahlke et al <cit.> introduce PathOS, which predicts player navigation in digital games.
Level and world design can be improved using the collected data on how the agents navigate the world. The system aims to reduce the burden of playtesting, offer accessibility to developers, be easy to use for designers, and generalise well. These goals are similar to those we propose with our exploratory agents.
A study with 10 participants was carried out using the system, by applying and assessing it; the participants were pre-interviewed, introduced to the system and assigned to create two levels by using the system.
The authors reported that, in the post-task interview, impressions of the system as a design tool were quite positive, although the participants mentioned that the behaviour of the agents was very different from how a player would behave.
Nova et al <cit.> present PathOS+ as an extension of the basic PathOS framework to complement expert assessments with AI-simulated player data. This strategy aims to address the subjectivity of expert assessments by using objective simulated player-behaviour data, helping to improve the reliability of expert assessments in games user research. The potential of PathOS+ is exemplified by applying it in a gameplay analysis of navigation and player behaviour.
Furthermore, in our previous work <cit.> we explored the use of exploratory agents as a method for evaluating hand-made game levels based on popular exploratory experiences, particularly in how well these levels support exploration. We proposed a framework where agents are given motivations in the form of metrics that model motivations for exploration, as we do in this research project. The study showed that different combinations of metrics resulted in distinct exploratory behaviours, which aligns with expectations based on the design of the levels being tested. We demonstrated that such agents could provide valuable feedback for level designers, potentially evaluating and guiding a generative process to create more engaging and exploration-rich environments.
§.§ Wave function Collapse
The WFC algorithm, initially inspired by quantum mechanics, has become a prominent tool in the field of procedural content generation (PCG). This algorithm, first used to generate tilemaps, introduced by Maxim Gumin in 2016 [https://github.com/mxgmn/WaveFunctionCollapse], operates on the principle of constraint solving and is primarily used to generate patterns or textures that resemble a given input, ensuring that the output adheres to the rules derived from the input data. The WFC algorithm functions by taking an initial grid where each cell can adopt multiple states, much like the quantum superposition, and systematically collapsing these possibilities based on local constraints until a consistent pattern is formed.
WFC, in particular, has gained immense popularity within PCG because of its unique ability to produce coherent and intricate structures from inputs that are relatively simple. This makes it highly relevant in many applications, particularly in game level generation and general scene generation both in 2D and 3D spaces. It excels at the creation of content that remains structurally logical and is thus very suitable when developers need creativity but also coherence in their PCG.
WFC has been applied to a number of Game Development cases in order to automate creation of complex environments with huge variability, hence drastically reducing the burden of manual level design. For instance, it has been used to generate tiling patterns for textures, layouts for dungeon-like environments, and even the infrastructures of virtual worlds. The flexibility provided by WFC can accommodate all these very different content generation scales, small detailed textures versus expansive game worlds, by adjusting the input parameters and the size of the grid used during the generation process.
WFC has been used to generate 3D levels. For example in Bad North [https://www.badnorth.com/] and by Kleineberg [https://marian42.itch.io/wfc]. BorisTheBrave also provides a Unity3D package of which allows 3D generation of levels using WFC called Tessera [https://assetstore.unity.com/packages/tools/level-design/tessera-procedural-tile-based-generator-155425], which we have decided to use in this research project.
WFC is a powerful and efficient way to implement content creation automation within a videogame. This helps in generating large amounts of content in little time. For this reason, we have chosen to have our generators use WFC.
§ AGENT METRICS
In our previous work <cit.> we introduced object-based and direction-based metrics as part of our framework. We use the same versions of these metrics here, with some modifications. In this section, we give a brief overview of each metric we used and our modifications.
§.§ Direction-Based Metrics
Elevation change: This metric checks if the given direction hits any point in the terrain. If the terrain hit point is higher than the agent's y position, a score of up to a maximum of 1 is given: 0.1 is added for every unit the terrain hit point is above the agent's y position, until the maximum of 1 is reached. This metric was not modified from our previous version.
Openness: Takes a direction and measures how "open" it is by raycasting to check if there are any objects within a certain distance. For boundless space, a raycast hitting nothing is now given a score of 0; we previously returned a value of 1 if a raycast hit nothing. However, from a perceptual and gameplay perspective, an environment with no perceivable boundaries might not actually encourage exploration, as there is no clear structure or point of interest to guide the agent's movement. This can create a sense of aimlessness rather than promoting exploratory behaviour.
We have modified this metric to take into account how far away an object is according to the raycast: if the object is as far away as the length of view, then 1 is returned; otherwise, a fraction representing how far the object is in proportion to the view distance (between 0 and 1) is returned.
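An illustrative Python sketch of the modified metric (the actual implementation is a Unity/C# raycast; the interface here is assumed):

```python
# Modified openness score for one direction: 0 when the ray hits nothing,
# otherwise the hit distance as a fraction of the view length (capped at 1).
def openness(hit_distance, view_length):
    if hit_distance is None:          # boundless space is no longer rewarded
        return 0.0
    return min(hit_distance / view_length, 1.0)
```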
We have decided to omit the light-and-shadow and anticipation-direction metrics used in our previous work. This is because measuring varying light intensities was not a goal when generating our levels, and anticipation direction is essentially the same metric as anticipation object detection; we decided to go with the object detection version (which we have renamed Anticipation Detection).
§.§ Object-Based Metrics
Anticipation Detection: Is given an object and checks the umbra and penumbra size of the object. It returns a maximum value of 1 and minimum value of 0. This metric was not modified from our previous version.
Large Object Detection: compares any object with the biggest one it had seen in the course of its run. It returns a value between 0 and 1, indicating how big, in percentage terms, an observed object is relative to the biggest one our agent had ever seen. In case the object is larger than the largest seen so far, then 1 is returned and the largest object seen so far is updated to the most recently seen object. This measure was not modified from our previous version.
Group Detection: Takes an object and checks if there are any other objects in a certain radius (in this case 40 units) of that object. Each object that is close adds a 0.1 to the score, to a maximum of 1. This metric was not modified from our previous version, apart from increasing the radius to check for other objects to account for the size of the assets in our experiment.
We have decided to omit the simple detection metric used in our previous work. We felt this was an overly simplistic metric which did not measure any object properties, as the other metrics do.
§ EXPLORATORY AGENT FRAMEWORK
Our agent framework is very similar to the framework used in our previous study <cit.>. It uses a system similar to context steering <cit.>, in which context maps are formed for each measured direction (36 in total). A context map is a projection of the decision space of the entity onto a 1D array.
Like our previous study's agents there are multiple adjustable parameters:
Length of View: The maximum distance the agent can observe in units
Field of View: The maximum angle of which the agent can observe, independent of the camera attached
Decision Time: The time step to recalculate the interest map and direction to move in for the agent.
In this framework, interest maps are formed from a list of objects which are in view of the agent. A camera is used to detect which objects are in view and only samples directions within view of its camera. There are 36 directions sampled in total. The highest scoring direction is chosen to be moved in.
There are three main stages to the pipeline, explained below.
Stage 1: Selecting a subset of objects A camera is used to survey the surrounding area of the agents. Every object within the camera frustrum is added to a list representing the objects of interest. The output from this stage is a list of objects of interest.
Stage 2: Making Interest Judgments The list from Stage 1 is taken and an interest map, consisting of a score associated with each direction, is formed. For each direction, a direction-based metric is applied to calculate the direction's interest score. Object-based metrics are also applied: each object has its direction taken and rounded to the closest direction in the direction interest map (and added to the direction interest map) before the direction's score is updated.
Stage 3: Making a Navigation Decision The direction map of Stage 2 is used to make a navigation decision. The direction of highest interest is chosen. If there is an object associated with the highest-scoring direction, it is chosen to be navigated towards using A* pathfinding [https://www.arongranberg.com/astar/]. Objects are only associated with directions when an object-based metric is being used. Otherwise, a point 50 steps along the chosen direction is moved towards. In our previous work <cit.> we used the Unity navmesh system and moved 10 steps in the direction of the highest-scoring direction; we made these changes because, by moving in larger steps, the agent commits more strongly to the direction identified as the most promising based on its internal metrics. This commitment ensures that the agent fully explores the opportunities presented by high-scoring directions, maximising the benefits of moving towards areas that offer the greatest potential for novelty, object interaction, or other desirable outcomes. If there are multiple directions that score the highest, a random one is chosen.
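The decision loop can be summarised with the following simplified Python sketch (the actual implementation is a Unity/C# agent; names and the tie-breaking are illustrative):

```python
# Simplified three-stage decision loop: score 36 directions with
# direction-based metrics, fold in object-based metrics, pick the best.
import numpy as np

N_DIRECTIONS = 36

def decide(agent, visible_objects, direction_metrics, object_metrics):
    angles = np.linspace(0, 2 * np.pi, N_DIRECTIONS, endpoint=False)
    interest = np.zeros(N_DIRECTIONS)              # the 1D context map
    for d, a in enumerate(angles):                 # direction-based metrics
        direction = np.array([np.cos(a), np.sin(a)])
        interest[d] += sum(m(agent, direction) for m in direction_metrics)
    targets = {}
    for obj in visible_objects:                    # object-based metrics
        dx, dy = obj.position - agent.position
        a = np.arctan2(dy, dx) % (2 * np.pi)       # round to nearest direction
        d = int(round(a / (2 * np.pi / N_DIRECTIONS))) % N_DIRECTIONS
        interest[d] += sum(m(agent, obj) for m in object_metrics)
        targets[d] = obj                           # last object wins (illustrative)
    best = int(np.argmax(interest))                # paper breaks ties randomly
    return targets.get(best), angles[best]         # object to path to, or direction
```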
§ GENERATOR DETAILS
As mentioned before, our generators use WFC to generate level. We made these generators in the Unity game engine [https://unity.com/], using Tessera [https://assetstore.unity.com/packages/tools/level-design/tessera-procedural-tile-based-generator-155425]. Details of each generator are given in the following subsection.
Each generator has 35 tiles, and each of these 35 tiles has a chance (a float between 0 and 1) of being spawned. Each generator can generate a level of 350x350 units; each tile is 50x50, so 49 tiles are generated to form a level. Each of these tiles can appear in four rotations (0, 90, 180 and 270 degrees). The theoretical number of possible levels is therefore (35×4)^49 for both of these generators.
However, because each tile has a probability associated with it, the effective expressive range is influenced by these probabilities. This means that the actual number of levels that can be meaningfully generated might be less.
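For illustration, this bound works out to roughly 10^105 configurations before adjacency constraints and tile probabilities shrink the effective range:

```python
# Upper bound on distinct levels: 49 slots, each one of 35 tiles x 4 rotations.
count = (35 * 4) ** 49
print(f"{float(count):.3e}")   # ~1.45e+105
```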
Generator A uses slightly higher probabilities for decorated tiles and tiles with elevation/slopes, increasing the chance for a large variety of objects in our engaging levels.
Generator B uses slightly lower probabilities for decorated tiles and tiles with elevation/slopes, decreasing the chance for a large variety of objects and leading to emptier levels, levels decorated with the same types of objects, and/or fewer elevated positions. Generators A and B both use the same tileset.
Exploratory agents are capable of assessing how the spatial arrangement and layout of the levels influence exploration. This includes the distribution of objects that might attract players to explore, and the complexity of paths. Differentiating levels through other means, e.g., simple object counting, does not account for these spatial relationships, which are crucial for understanding the navigability and engagement level of a game environment. So, even though Generator B on average produces levels with fewer objects, simple methods of determining whether a level is engaging would not necessarily be effective.
In addition, simpler methods of evaluating levels provide a raw measure of quantity but do not account for the spatial distribution, contextual significance, or interactive potential of these objects. Such evaluation criteria are limited in their ability to reflect the complexity of the player experience, where placement, accessibility, and interaction opportunities within the level are crucial determinants of exploration quality. Exploratory agents, on the other hand, offer a dynamic evaluation of levels by simulating motivations for exploration, allowing them to assess not just what is present in the environment, but how it might be experienced during exploration.
§ EVALUATING GENERATED LEVELS
In order to demonstrate how well an environment might support exploration, we use the evaluation criteria from our previous work (coverage, object inspection, and novelty) and expand on them with our own additions: entropy, modifications to the novelty measure, and measuring agent motivation over time. As in our previous work, we measured the agent trajectory for each metric and spawn point [https://github.com/BKhaleque/Evaluating-Environments-using-Exploratory-Agents]. Using these evaluation criteria, we created a fitness function that gives a score (between 0 and 1) for each level.
§.§ Coverage
Coverage serves as a measure of inspective exploration. As described in our previous work <cit.>, we derive coverage by counting how many of the 50x50 regions the agent visited under each respective metric.
We expect less coverage on average in our unengaging levels than in our engaging levels. Our unengaging levels lack a lot of the stimuli that drive an agent to explore the environment fully. Engaging levels often have more of these interactive elements, such as a wider variety of objects and more large objects, which motivate the agent to traverse the entire space. In contrast, unengaging levels are more repetitive, offering little to no reward for thorough exploration. As a result, the agent may not cover a lot of the area of the unengaging levels, leading to reduced coverage.
§.§ Inspection
Another measure of inspective exploration, inspection measures (as a percentage) how many objects were seen and visited by the agent. Using the same technique as our previous work, we measured the percentage of objects the agent came within 10 units of. A higher inspection score suggests that the agent had a "want" to learn about the objects in the environment, whereas a lower one suggests the opposite.
We expect our agent to have a lower inspection score in our engaging levels than in our unengaging levels. Our unengaging levels lack a lot of the diversity and complexity found in engaging levels, which can lead to a higher concentration of the few available objects in the agent’s field of view. Since there are fewer distinctive or appealing areas to explore, the agent might spend more time interacting with the objects it encounters, leading to a higher object inspection score. In contrast, in engaging levels, the agent might be more drawn to explore the environment as a whole rather than focusing on individual objects.
§.§ Entropy
This is a measure of Shannon entropy <cit.>. Shannon entropy, a foundational concept in information theory introduced by Claude Shannon in 1948, quantifies the uncertainty or unpredictability of a random variable. For a discrete random variable X with possible outcomes {x_1, x_2, …, x_n} occurring with probabilities {p_1, p_2, …, p_n}, the Shannon entropy H(X) is defined as:
H(X) = -∑_i=1^n p(x_i) log_2 p(x_i)
Here:
* p(x_i) is the probability of outcome x_i,
* log_2 p(x_i) is the logarithm of the probability in base 2, giving the entropy in bits.
Shannon entropy achieves its maximum value when all outcomes are equally likely, which corresponds to maximum uncertainty. Conversely, if one outcome is certain (i.e., p(x_i) = 1 for some i and 0 for others), the entropy becomes zero, indicating no uncertainty.
In our study, Shannon entropy is used to measure the diversity and unpredictability of the agent’s exploration path within the generated levels. The core reason for using Shannon entropy in this context is its ability to quantify the randomness of the agent's movements, which reflects the variety of choices the agent makes during exploration.
By calculating the entropy over the grid locations visited by the agent, we can determine whether the agent's path was too predictable (low entropy) or too random (high entropy). An ideal exploration path strikes a balance, showing neither excessive randomness nor predictability, which is crucial for maintaining engagement in exploratory experiences.
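A minimal sketch of how this entropy could be computed over the grid cells an agent visits is given below. The normalization to [0, 1] is our assumption, made so that values are comparable to the 0.9 threshold used in the fitness function later; the exact normalization is not specified above.

```python
import math
from collections import Counter

def path_entropy(visited_cells, normalize=True):
    """Shannon entropy (in bits) of the grid-cell distribution of an agent's path."""
    counts = Counter(visited_cells)           # cells as e.g. (row, col) tuples
    total = sum(counts.values())
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    if normalize and len(counts) > 1:         # assumed: scale to [0, 1]
        h /= math.log2(len(counts))
    return h

# e.g. path_entropy([(0, 0), (0, 1), (0, 1), (1, 1)])
```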
We expect our engaging levels to have lower entropy, on average, than our unengaging levels. This is because our engaging levels have more meaningful object placements, with a wider spread and a variety of objects that we expect will attract the agent's attention under any given metric, causing more focused exploration and reducing the randomness of the agent's path.
§.§ Novelty
We used novelty, measured at each time step, to help evaluate the generated environments. Novelty is a measure of the stimuli experienced by the agent along its path. We use a measure very similar to our previous work, though instead of measuring novelty at each 50x50 region, we measure the novelty experienced by the agent at each time step.
The novelty score can be explained as follows:
* N_t represents the novelty score at time t for a given type of object.
* S_t represents the total novelty score at time t.
* Δ t represents the time interval, where Δ t = 0.1 seconds.
* r represents the rate of novelty score recovery, where r = 0.03 per second.
* M represents the maximum novelty score an object type can recover to, where M = 0.1.
* P represents the penalty applied to the novelty score when an object type is seen, where P = 0.1.
* v_t represents the visibility flag at time t, where v_t = 1 if the object type is seen and v_t = 0 otherwise.
§.§.§ Initial Condition
When a type of object is encountered for the first time:
N_0 = M = 0.1
and it is marked as "seen".
§.§.§ Novelty Score Update
The novelty score at time t is updated as follows:
* If the object type is not seen (v_t = 0):
N_{t+Δ t} = min(N_t + r ·Δ t, M)
* If the object type is seen (v_t = 1) and it is "new":
N_{t+Δ t} = N_t - P
* If the object type is seen (v_t = 1) and it is not "new":
N_{t+Δ t} = N_t + r ·Δ t
When the object type is seen at time t, the total novelty score is updated:
S_t = S_t + N_t
* The novelty score N_t for an object type starts at 0.1 when the object is first seen.
* Each time step Δ t = 0.1 seconds, the score either recovers at rate r (if the object type is not seen) or is penalized by P (if it is seen for the first time).
* The total novelty score S_t accumulates the novelty score N_t of each object type seen at that time.
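The update rules above can be condensed into a short Python sketch. Whether S_t accumulates N_t before or after the first-sighting penalty is ambiguous in the rules, so the ordering below (accumulate, then penalize) is an assumption, as is capping rule 3's recovery at M.

```python
DT, R, M, P = 0.1, 0.03, 0.1, 0.1  # time step, recovery rate, cap, penalty

def novelty_step(novelty, total, seen_types):
    """One Delta-t update of per-type novelty N_t and the total score S_t."""
    for t in seen_types:
        is_new = t not in novelty
        if is_new:
            novelty[t] = M                            # initial condition: N_0 = M
        total += novelty[t]                           # S_t = S_t + N_t on a sighting
        if is_new:
            novelty[t] -= P                           # penalty on first sighting
        else:
            novelty[t] = min(novelty[t] + R * DT, M)  # seen, not new: recover
    for t in novelty:
        if t not in seen_types:                       # unseen types recover toward M
            novelty[t] = min(novelty[t] + R * DT, M)
    return novelty, total
```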
We expect novelty to be lower in our unengaging levels due to a lack of diverse elements and a more simplistic level design. When the environment lacks diversity, the agent quickly becomes familiar with the surroundings, leading to a decrease in perceived novelty.
§.§ Motivation
Motivation is a measure of the highest scoring direction, according to the agent's attached metrics, at a given time step (1 second). The motivation metric serves as an indirect measure of how well the environment supports exploration. If the agent consistently finds high motivation, it suggests that those areas are well designed to encourage exploration. Conversely, if the agent's motivation decreases, it may indicate that the environment lacks sufficient stimuli or that the design is too predictable, leading to disengagement.
§.§ Fitness Function for Evaluating Levels
To evaluate the exploratory potential of generated levels, we designed a fitness function that integrates all the evaluation criteria mentioned above, each reflecting different aspects of the agent's exploration. The fitness function is structured as follows:
F is the overall fitness score of a level, calculated as:
F = ∑_m w_m · f_m
where:
* w_m is the weight given to the respective metric
* f_m is the fitness score for that metric.
The fitness score for each metric is determined by the following criteria:
* Coverage: The fitness score f_m is set to 0 if the average coverage over all spawns falls outside the range of 20% to 80%. If the coverage is within this range, f_m is multiplied by the product of average motivation (M_avg) and average novelty (N_avg). Ensuring that coverage is between 20% and 80% prevents the extremes where too little exploration indicates a sparse or uninteresting level, and too much coverage might suggest that the level is too small or lacks sufficient depth to sustain interest.
* Entropy: The fitness score f_m is set to 0 if the average entropy over all spawns exceeds 0.9. Limiting entropy to 0.9 ensures that the agent's exploration is not too chaotic. High entropy might indicate that the level design is overly complex or lacks clear direction, which could detract from the player experience.
* Object Inspection: The fitness score f_m is set to 0 if the average object inspection over all spawns exceeds 80%. If the inspection rate is 80% or lower and greater than 10%, f_m is multiplied by M_avg · N_avg. Capping object inspection at 80% prevents the agent from being overly focused on objects, which might indicate that the level is too cluttered or lacks broader exploratory opportunities, and requiring at least 10% of objects to be inspected ensures there are at least some objects worth investigating. This balance ensures that the level is engaging at both the micro and macro levels.
* Motivation and Novelty: Both average motivation over all spawns and average novelty over all spawns are directly factored into the multiplication to emphasize the importance of these metrics in evaluating the exploratory potential of the level. Multiplying each metric's score by the average motivation and novelty ensures that the fitness function favors levels that actively encourage exploration and offer new experiences. High motivation suggests that the level engages the agent effectively, while high novelty indicates that the level provides new stimuli and avoids repetition.
The final fitness score F therefore balances the contributions of individual exploration metrics with the comprehensive assessment provided by the agent loaded with all metrics. The thresholds and multipliers ensure that levels only receive a high fitness score if they meet essential criteria for meaningful exploration, such as balanced coverage, manageable entropy and appropriate object inspection.
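To make the scoring concrete, a minimal Python sketch is given below. It assumes the per-level averages have been precomputed into a dictionary (the key names are ours), and, since the text leaves open whether motivation and novelty also appear as standalone terms in the sum, here they act only as multipliers.

```python
def level_fitness(averages, weights):
    """F = sum over metrics of w_m * f_m, with the thresholds described above."""
    m_avg, n_avg = averages["motivation"], averages["novelty"]
    f = 0.0
    for metric, w in weights.items():
        score = averages[metric]
        if metric == "coverage":
            # Zeroed outside [20%, 80%]; otherwise scaled by M_avg * N_avg.
            score = score * m_avg * n_avg if 0.2 <= score <= 0.8 else 0.0
        elif metric == "entropy":
            score = 0.0 if score > 0.9 else score
        elif metric == "inspection":
            # Zeroed outside (10%, 80%]; otherwise scaled by M_avg * N_avg.
            score = score * m_avg * n_avg if 0.1 < score <= 0.8 else 0.0
        f += w * score
    return f
```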
§ EXPERIMENT SETUP
To evaluate procedurally generated environments using exploratory agents, we conducted a study with our agent exploring 5 levels generated by Generator A, considered engaging, and 5 levels from Generator B, considered unengaging. We examined the trajectories of the agent under each of the above metrics, as well as an agent with a combination of all the metrics, to gather data on how multiple metrics explore both engaging and unengaging levels. All levels were the same size (350x350 units).
We ran our agent with each metric for 3 minutes at 3 different spawn points for each of the levels. We also had a random control agent, for which we measured coverage, novelty, entropy, and object inspection. We did not measure motivation for this agent, as it did not have a metric loaded onto it.
The limited length and field of view do mean the spawn point will likely greatly affect the agent's paths; to obtain a broader sample, we tested 3 different spawn points on each level.
In our experiment, we tested all singular metrics before loading an agent with every combined metric. This approach was taken to ensure a comprehensive understanding of how each metric influences the agent's behaviour and the overall exploration process.
By first testing each metric individually, we observed the specific influence each one had on the agent's decision-making and exploration patterns.
We decided to load an agent with all metrics simultaneously to create a comprehensive and balanced exploratory agent that we thought might effectively navigate complex, procedurally generated environments and provide the most useful data in identifying engaging and unengaging levels.
For the experiment the following agent parameters were set to the values described below:
Length of View: 115. Previously we set this to 80; through preliminary testing of our agents we found this value was too low to observe many of the objects in these types of generated levels, and 115 performed much better in this regard.
Field of View: 90. This was observed to be an appropriate value. As in our previous study, we thought a 90-degree FOV provided a good angle to perceive objects; it is standard in many first-person games and does not allow an excessive view of the environment.
For our fitness function, every metric was given a weight of 0.1, except for the agent with all metrics loaded, whose combination of metrics was given a weight of 0.5. This is because the combined metrics represent a holistic evaluation of the level's exploratory potential, integrating the insights provided by all the individual metrics. The agent that incorporates all these metrics is likely to offer a more accurate and comprehensive assessment of the level's quality for exploration, capturing the interplay between different motivations for exploration that might be missed when metrics are considered in isolation.
By giving more weight to the combined metrics, the fitness function emphasises the importance of a balanced exploration experience, where all factors are considered in tandem rather than in isolation. This approach ensures that levels which perform well across all dimensions of exploration are more highly rated, reflecting the belief that such levels are more likely to provide a rich, engaging experience for exploration.
§ RESULTS
The motivation histograms for the unengaging versus engaging levels show much higher motivation frequencies, on average, for every metric tested on the engaging levels. This shows that the paths followed under each metric were considered interesting, at least much more interesting than those in our unengaging levels. It also suggests that the engaging levels contained more interesting objects and phenomena than the unengaging levels.
There were larger frequencies of low or zero motivation (along with frequencies of high motivation, due to how the metrics work), particularly for large object detection and openness, in the engaging levels compared to the unengaging levels. This is because the unengaging levels have a smaller spread of objects, so the agent spent more time in the particular areas of the map where most of the objects were placed. This is also evidenced by the trajectory plots [https://github.com/BKhaleque/Evaluating-Environments-using-Exploratory-Agents].
The novelty histograms for the engaging levels compared to the unengaging levels show significant differences for each metric. There are much higher frequencies of low novelty on the unengaging levels (particularly evident for openness, large object detection, anticipation detection, and all metrics combined), where little or nothing novel is being observed by the agent. This suggests that the engaging levels have more types of objects, spread more evenly across the levels, so what the agent is viewing can be considered less boring. Even the random agent shows a lower overall novelty score, with fewer peaks and dips.
The average coverage does not show significant differences between metrics across the levels. The agent with all metrics does show slightly higher average coverage on the engaging levels than on the unengaging levels, as does the random agent. The lack of a significant difference goes against our expectations. Since both the engaging and unengaging levels have roughly the same navigability, with no physical barriers or obstacles that would prevent the agent from moving through the space, the agent can traverse the entire level without being hindered, leading to similar coverage across different levels, irrespective of their design or engagement factors.
The average entropy also does not show significant differences between the engaging and unengaging levels, though the agent with all metrics shows slightly higher entropy in the engaging levels, which goes against our expectations. This could be because the agents explore the environment using predetermined metrics and decision-making algorithms that systematically cover the space. Since entropy measures the randomness and unpredictability of the agent's path, this systematic approach to exploration, in which the agent randomly selects one of the highest-scoring directions (even if those directions score 0), can lead to similar levels of entropy regardless of a level's engaging features. The agent's behavior might inherently limit the variation in its path, leading to similar entropy values across different environments.
There are large differences in inspection: particularly for anticipation detection and large object detection, the unengaging levels show much higher inspection percentages, probably because there are fewer objects to investigate in the unengaging levels. However, the agent with all metrics shows much higher inspection on the engaging levels than the unengaging ones, suggesting that there was more motivation to investigate objects in the engaging levels.
The fitness scores for both engaging and unengaging levels demonstrate a clear distinction in the exploratory potential of the levels generated by the two procedural content generators. As shown in Table <ref>, the fitness scores for the engaging levels are consistently higher for every level, ranging from 0.703 to 0.916, with an average fitness score across all engaging levels of approximately 0.808. This suggests that the engaging levels are well suited for exploration, offering a rich and varied environment that aligns with the exploratory behaviors of the agents.
Table <ref> shows significantly lower fitness scores for the unengaging levels, with values ranging from 0.168 to 0.673 and an average fitness score of approximately 0.494. The lower scores in these levels indicate that they are less appropriate for engaging exploration, due to a lack of engaging features and/or a more repetitive and predictable structure. The other data (histograms and measurements of average inspection) also support this. The stark difference in average fitness between engaging and unengaging levels (0.808 vs. 0.494) highlights the effectiveness of our fitness function in distinguishing between levels with high and low exploratory potential, further supporting the utility of exploratory agents in procedural content generation.
§ FUTURE WORK
Future work could explore a broader range of PCG techniques beyond the current WFC constraint-based systems. For example, incorporating evolutionary algorithms, with the fitness function used in this experiment as the measure of exploratory quality, could provide insight into how different generation strategies impact the exploratory behaviour of agents. This would allow for a more comprehensive understanding of the strengths and weaknesses of various PCG approaches. Testing a wider variety of levels, bigger or smaller in area and with a wider range of assets, could also help with this.
Also, though agent-based exploration is valuable for evaluating PCG environments, incorporating human/player feedback could enhance the understanding of how these environments support real player experiences. Conducting user studies where human players interact with the generated levels and comparing their exploration patterns and experiences with those of the agents could provide deeper insights into the alignment between agent-based metrics and actual player engagement.
§ CONCLUSION
In conclusion, there is evidence to suggest that our exploratory agent can distinguish between engaging and unengaging levels that have been generated procedurally. Future work should aim to test a larger sample size of generated levels and to have agents provide feedback on a generation process to produce higher quality PCG.
This work was supported by the EPSRC Centre for Doctoral Training in Intelligent Games & Games Intelligence (iGGi) EP/S022325/1.
CLUE: Concept-Level Uncertainty Estimation for Large Language Models

Yu-Hsiang Wang, Andrew Bai, Che-Ping Tsai, Cho-Jui Hsieh

arXiv:2409.03021v1 [cs.CL], 4 September 2024
§ ABSTRACT
Large Language Models (LLMs) have demonstrated remarkable proficiency in various natural language generation (NLG) tasks.
Previous studies suggest that LLMs' generation process involves uncertainty.
However, existing approaches to uncertainty estimation mainly focus on sequence-level uncertainty, overlooking individual pieces of information within sequences.
These methods fall short in separately assessing the uncertainty of each component in a sequence.
In response, we propose a novel framework for Concept-Level Uncertainty Estimation (CLUE) for LLMs.
We leverage LLMs to convert output sequences into concept-level representations, breaking down sequences into individual concepts and measuring the uncertainty of each concept separately.
We conduct experiments to demonstrate that CLUE can provide more interpretable uncertainty estimation results compared with sentence-level uncertainty, and could be a useful tool for various tasks such as hallucination detection and story generation.
§ INTRODUCTION
Large Language Models (LLMs) have demonstrated powerful abilities in generating human-like text and attaining exceptional performance in various Natural Language Processing (NLP) tasks.
Previous studies indicate that the generation process of LLMs involves uncertainty (, ).
This uncertainty arises from the stochastic nature of the sampling process in LLMs, leading to the generation of different outputs for the same given input.
Measuring the uncertainty in LLM generation is important, as it can serve as a crucial indicator, offering insights into the reliability or diversity aspects of specific tasks.
For example, in a question-answering (QA) task, high uncertainty in the model's output could be interpreted as a form of hallucination, deviating from the expectation of producing consistent answers.
In contrast, in the context of a story generation task, high uncertainty could become a favorable characteristic, contributing positively to the diversity of the generated stories.
Therefore, understanding and quantifying uncertainty in LLM outputs become essential, allowing for task-specific evaluations and ensuring the desired outcomes in various applications.
Various methods exist for measuring the uncertainty of LLMs' output.
Previous approaches have primarily focused on measuring uncertainty at the sequence level (, ), treating an entire generated sequence as a single unit.
These methods are often used to detect hallucinations by identifying output sequences with high uncertainty.
However, a single sequence may contain multiple pieces of information, each with different uncertainty levels.
Therefore, these methods encounter the “information entanglement issue”, where they can only measure the overall uncertainty of an entire sequence.
This limitation hinders a nuanced evaluation of individual components.
For example, as illustrated in Table <ref>, the output sequence in each sample may include both consistent information and distinct details.
Sequence-level methods fail to discern the uncertainty of each component.
To address the information entanglement issue, we proposed a framework for Concept-Level Uncertainty Estimation (CLUE) for LLMs.
Concepts represent the fundamental meaning of the text, independent of sequence structure or individual lexicons.
We use LLMs with handcrafted one-shot example to extract comprehensive concepts from the generated output sequences.
Each extracted concept is treated as an independent unit, and its uncertainty is measured separately.
The extracted concepts are then evaluated by an NLI-based zero-shot text classifier, which assigns the predicted entailment score as the concept score.
Lastly, the uncertainty is determined by the average negative logarithm of the concept score with respect to each output sequence.
The details of the framework are presented in Section <ref>.
We demonstrate the effectiveness of CLUE in concept-level hallucination detection and its application as a conceptual diversity metric for story generation.
Our experimental results validate the assumption that highly uncertain concepts are more likely to be hallucinations in tasks requiring consistent output.
Furthermore, CLUE demonstrates a 21% improvement in macro AUROC over the baseline method in detecting hallucinations on QA datasets.
To evaluate CLUE's efficacy in addressing the information entanglement issue, we compare its accuracy in predicting human judgments with sequence-level methods using Amazon Mechanical Turk (AMT).
The results reveal that it exhibits a 33% higher accuracy, indicating that our concept-level method better aligns with human judgments and is thus easier for humans to understand.
We also introduce the utility of CLUE as a conceptual diversity metric for story generation.
§ MOTIVATION
§.§ Information Entanglement Issue
Previous sequence-level uncertainty methods are limited to assessing uncertainty for the entire sequence.
Given that paragraph-length sequences encompass vast amounts of information, prior methods primarily focus on sequences of sentence length.
Nonetheless, even a single sentence can be lengthy and filled with extensive information.
As shown in Table <ref>, a sentence-long sequence may still encompass multiple pieces of information simultaneously.
Addressing this challenge necessitates breaking sequences down into distinct pieces of information and evaluating their uncertainty individually.
§.§ Breaking Down Sequences
To extract information contained in each sequence, it is essential to break down sequences into their constituent components.
Various methods exist for sequence breakdown, such as tokenization, named-entity recognition (NER), and syntax tree parsing.
Different methods lead to varying levels of information.
For example, tokenization breaks down sequences into tokens, representing the lowest level of information in natural language.
To enhance generalization ability, we employ LLM prompting to break down sequences into information pieces.
By designing few-shot examples for LLMs, we can easily adjust the information level.
In this paper, we focus on extracting high-level concepts, which effectively capture key meanings or ideas from the given text while disregarding lexical information and sequence structure.
§ RELATED WORK
§.§ Uncertainty Estimation for LLMs
There are numerous methods to measure uncertainty in LLMs.
From the algorithmic aspect, uncertainty estimation can be categorized into two types: token-based and sampling-based methods.
Token-based uncertainty relies on the output probabilistic distribution for each token from LLMs (, , , , ).
These methods directly measure the uncertainty of the generated sequence based on this distribution.
However, they cannot be used for black-box LLMs when the output probabilistic distribution is not available. Further, the output probability is often over-confident and may not reflect the actual uncertainty.
In contrast, sampling-based uncertainty methods generate multiple samples from the same input prompt and calculate the uncertainty based on these output sequences (, ).
For example, propose Sample VRO, which is calculated based on the similarity between multiple output samples.
Sampling-based methods only require output sequences to calculate uncertainty, thereby making them more applicable across a wider range of LLMs.
From the uncertainty level aspect, previous uncertainty methods can be categorized into three levels: sequence-level, token-level, and word-level.
Sequence-level methods treat the entire output sequence as a single unit and assess its uncertainty (, , , , , , , , ).
Notably, most of the sampling-based sequence-level methods can only handle single-sentence sequences.
Token-level approaches directly measure the uncertainty of individual output tokens (, , ).
Most of them leverage output token probabilities and employ functions such as entropy or the negative logarithm of the probability for uncertainty estimation.
Word-level methods involve extracting keywords from output sequences and subsequently evaluating the uncertainty associated with each identified keyword ().
The distinction between word-level and concept-level approaches lies in their functionality.
Word-level methods only identify keywords present in the output sequences, whereas concept-level methods directly generate concepts based on the key meaning of the output sequence.
§.§ Hallucination in LLM Generation
Hallucination in LLMs refers to the generation of content that deviates from the input prompt or may lack grounding in reality.
It is important to note that hallucination may present as a factual output but is not relevant to the input prompt.
Several comprehensive surveys have been conducted to explore hallucination in LLMs (, , , ).
In order to improve the reliability of LLMs, extensive studies have been dedicated to the detection of hallucinations (, , , , , , , , , , ).
Specifically, some approaches leverage the uncertainty in LLMs to identify unreliable content as hallucinations (, , ).
Furthermore, numerous studies focus on mitigating hallucinations through self-refinement by LLMs (, , , , , , , ).
In this paper, we focus on utilizing concept-level uncertainty to detect hallucinations that deviate from the input prompt.
§ METHODOLOGY
We propose a novel framework, CLUE, to measure the uncertainty of LLMs at the concept level.
CLUE extracts concepts from output sequences in each sample and then assesses concept uncertainty based on the corresponding concept score to each output sequence.
An overview of our framework is presented in Figure <ref>.
§.§ Concept Extraction
Concepts are high-level representations of texts, reflecting the meaning of sequences.
To measure the uncertainty at the concept level, we extract concepts from the generated sequences by prompting LLMs.
Inspired by , we feed a handcrafted one-shot example to guide LLMs in generating concepts consistently, as presented in Table <ref> in the Appendix.
Our analysis reveals that the length, subject, and quantity of examples barely affect the consistency of extracted concepts.
We present some examples of generated sequences alongside their corresponding extracted concepts in Table <ref> in the Appendix.
We extract a set of concepts for each output sequence.
Since each output sequence is different, the extracted concepts also vary.
To comprehensively capture the information that may be generated by the LLM, we combine the sets of concepts extracted from each output sequence to form a unified concept pool.
The concept pool is composed of the possible concepts generated by the LLM based on the given prompt.
Since some extracted concepts may exhibit high similarity, we use an NLI-based zero-shot text classifier to automatically consolidate similar concepts, retaining only one instance.
For example, consider the two closely related concepts: “Limited competition among ISPs” and “Lack of competition in broadband market”, we randomly select one of these concepts to condense the concept pool.
The zero-shot text classifier is employed to measure the similarity between concepts by computing their mutual entailment scores.
The two concepts are regarded as equivalent if both entailment scores are higher than the predefined threshold.
The threshold is set at 0.99 to ensure stringent selection, allowing only very similar concepts to be considered equivalent.
The details of the classifier are presented in Appendix <ref>.
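A sketch of this consolidation pass is shown below, assuming a hypothetical `scorer_fn(premise, concept)` that returns the entailment probability using the "This concept is similar to ..." hypothesis template described in the Appendix. The paper selects which duplicate to keep at random; this sketch keeps the first one encountered.

```python
def consolidate_concepts(concepts, scorer_fn, threshold=0.99):
    """Keep one representative per group of mutually entailing concepts."""
    kept = []
    for c in concepts:
        # c is redundant if it and some already-kept concept entail each other
        # with probability above the (stringent) 0.99 threshold.
        if not any(scorer_fn(c, k) > threshold and scorer_fn(k, c) > threshold
                   for k in kept):
            kept.append(c)
    return kept
```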
§.§ Concept-level Uncertainty Calculation
§.§.§ Concept Scorer
To measure the concept score based on the relevance between concepts and each output sequence, we design a concept scorer f using an NLI-based zero-shot text classifier.
Given a sequence o_i and a concept c_j, the NLI-based zero-shot text classifier determines whether o_i entails c_j and outputs a probability of entailment.
High entailment probability indicates that c_j is a concept of o_i.
We adopt the entailment probability as the concept score s_ij.
The details of the classifier are presented in Appendix <ref>.
s_ij = f(o_i,c_j).
§.§.§ Uncertainty Calculation
We measure the concept score for each concept with respect to each sampled output sequence using the concept scorer.
The concept uncertainty is determined by calculating the average of the negative logarithm of the concept score
U(c_j) = Avg_i(-log(s_ij)) = -1/N∑_ilog(s_ij),
where U(c_j) denotes the uncertainty of the concept c_j, and N is the number of samples.
Since we employ a sampling-based method for uncertainty calculation, our approach is applicable to both white-box and black-box LLMs.
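As a concrete illustration, the scoring-and-uncertainty step can be sketched with the Hugging Face zero-shot classification pipeline and the "This example is about {}" hypothesis template given in the Appendix. The `multi_label=True` flag makes the pipeline softmax entailment against contradiction for the single label, matching the appendix's treatment of the NLI logits; the epsilon floor on the score is our addition to avoid log(0).

```python
import math
from transformers import pipeline

# NLI-based zero-shot classifier used as the concept scorer f.
scorer = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def concept_uncertainty(outputs, concept):
    """U(c_j) = -(1/N) * sum_i log s_ij over the N sampled output sequences."""
    log_scores = []
    for o in outputs:
        res = scorer(o, candidate_labels=[concept], multi_label=True,
                     hypothesis_template="This example is about {}.")
        s_ij = max(res["scores"][0], 1e-12)   # entailment probability, floored
        log_scores.append(math.log(s_ij))
    return -sum(log_scores) / len(log_scores)
```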
§ EXPERIMENTS
We conduct experiments on various NLP tasks to demonstrate the utility of the proposed framework.
In Section <ref>, we illustrate how CLUE detects hallucination at the concept level, which is more intuitive for humans to comprehend compared to sequence-level methods.
In Section <ref>, we extend our framework to another application as a conceptual diversity metric for story generation.
§.§ Experimental Settings
We evaluate the effectiveness of CLUE using question-answering (QA) datasets, which comprise multiple positive and negative instances.
In the context of QA, high uncertainty indicates unpredictability in the generated output.
Since stability and consistency are expected in QA tasks, high uncertainty implies potential hallucinations.
To prove this statement, we partition the QA datasets into three derivative subsets: the relevant subset D_R, the less relevant subset D_L, and the irrelevant subset D_I.
D_R and D_L consist of positive and negative instances, respectively, while D_I contains questions paired with answers randomly selected from other instances.
It is noteworthy that the answers in D_L are more accurate than those in D_I, as the incorrect answers of QA datasets are still crafted to respond to the corresponding questions.
An illustrative example of the distinctions among the three subsets is presented in Table <ref>.
We subsequently compute the answer concept score S^a_j for each subset to represent the relevance between the answer a and the concept c_j using the Concept Scorer f:
S^a_j = f(a, c_j).
These answer concept scores then serve as the ground truth for the subsequent evaluation.
§.§.§ Models
We conduct experiments using OpenAI's GPT-3.5-turbo-instruct model.
During the sampling stage, we set the temperature to 1 and generate N=5 samples to produce different outputs while preserving the necessary contextual information for coherent and meaningful responses.
In the Concept Extraction stage, we set the temperature to 0 to ensure more stable and deterministic results for the extracted concepts.
Additionally, we adopted the NLI-based zero-shot text classifier “bart-large-mnli” [<https://huggingface.co./facebook/bart-large-mnli>] for our concept scorer.
It is based on the bart-large model (), pretrained on the MNLI dataset ().
§.§.§ Datasets
We select three datasets with different characteristics for a thorough evaluation.
ELI5-Category is a long-form QA dataset with paragraph-like answers.
WikiQA consists of simple answers, each sequence comprising only one sentence.
QNLI is an NLI-based QA dataset that includes answers categorized as either entailing the corresponding questions or not.
We construct three subsets D_R, D_L, and D_I for each dataset.
ELI5-Category
The ELI5-Category dataset () is a more recent and compact variant of the original ELI5 dataset ().
It is constructed by collecting questions and their answers from subreddit.
Each instance contains a single question paired with multiple answers, with each answer being assigned a score.
The score is determined by subtracting the number of downvotes from the number of upvotes given by annotators.
A higher score indicates a better answer.
In our experiment, we select answers with the highest and lowest scores for D_R and D_L.
As for D_I, we randomly choose an answer from another instance to serve as the irrelevant answer.
WikiQA
The WikiQA dataset () consists of 3,047 questions initially sampled from Bing query logs.
Each instance comprises a single question along with multiple answers, where the answers are sentences extracted from the corresponding Wikipedia page related to the question's topic.
Annotators have labeled each answer as either correct or incorrect.
In our experiment, we randomly choose one correct answer, one incorrect answer, and one irrelevant answer from another instance to form D_R, D_L, and D_I, respectively.
QNLI
The QNLI (Question-answering Natural Language Inference) dataset () is a Natural Language Inference dataset derived from the Stanford Question Answering Dataset v1.1 (SQuAD) ().
Each instance consists of a question associated with a sentence labeled either as “entailment” or “not entailment”.
In our experiment, we select instances with “entailment” sentences as D_R and those with “not entailment” sentences as D_L.
For D_I, we arbitrarily choose a sentence from another instance as the answer.
§.§ Uncertainty-based Concept-level Hallucination Detection
To demonstrate the application of our method for concept-level hallucination detection, we first validate the assumption that high uncertainty in output suggests hallucination.
Building upon this assumption, we evaluate the effectiveness of CLUE in detecting hallucinations.
We further conduct a human study showing that concept-level uncertainty is better than previous sequence-level uncertainty as it is easier for humans to understand.
§.§.§ Motivating Experiment
To verify the assumption that high uncertainty in outputs suggests hallucination, we examine the correlation between the concept uncertainty U(c_j) and the answer concept score S^a_j across all concepts for each instance.
We then compute the average correlation across all instances for three dataset subsets D_R, D_L, and D_I.
Since the answer concept score indicates the relevance between the concept and the answer, a low correlation implies that concept uncertainty can serve as an indicator of the concept's irrelevance to the answer.
In D_R, where answers are logically connected to the questions, concepts irrelevant to the answer are considered hallucinations.
We expect a low correlation if the assumption holds.
Conversely, in D_I, where answers are randomly selected from other instances, the answer concept score is not expected to exhibit a clear linear relationship with uncertainty.
Therefore, we anticipate the correlation for D_I to approach 0.
Regarding D_L, the correlation is expected to fall between that of D_R and D_I, given its intermediary relevance to the questions.
We present the experiment results of Pearson correlation between concept uncertainty and answer concept score in Table <ref>.
As expected, across three subsets of the datasets, the correlation trend adheres to the following pattern: D_R exhibits a lower correlation than D_L, and D_L shows a lower correlation than D_I.
The results demonstrate that across QA datasets with various characteristics, they consistently validate our assumption that concepts with high uncertainty tend to be hallucinated concepts.
This suggests that the uncertainty of the LLM is an effective measure for assessing the faithfulness of the output across diverse circumstances.
We present an example of the correlation experiment in Table <ref> in the Appendix.
§.§.§ Concept-level Hallucination Detection
Based on the assumption that high uncertainty in outputs suggests hallucination, we proceed to evaluate the efficacy of uncertainty in detecting hallucination.
We formulate this as a classification task and use concept uncertainty to conduct classification.
To achieve this, we first construct a concept set to be classified, and the label of each concept is determined by its answer concept score, as illustrated in Equation <ref>.
To enhance precision in concept labeling, we employ two thresholds, a high threshold θ_h and a low threshold θ_l, applied to the concept scores to determine the concept labels:
label of concept c_j =
0 if S^a_j > θ_h
1 if S^a_j < θ_l
-1 otherwise.
A concept is categorized as an “entailed concept” (label 0) if its score surpasses the threshold θ_h.
Conversely, if the score falls below θ_l, the concept is designated as a “hallucinated concept” (label 1).
For this experiment, we do not consider other concepts (label -1).
We exclusively apply this task on D_R since we require accurate answers from positive instances to label concepts.
As for the metrics, we employ AUPRC (Area Under Precision-Recall Curve) along with AUROC (Area Under the Receiver Operating Characteristic Curve) to evaluate the classification performance.
Given that each instance contains a concept pool with multiple concepts to be classified, it can be viewed as an independent classification task.
We present both macro and micro versions of these two metrics to provide insights into the overall performance across all classifications.
Additionally, we compare CLUE to the NLI-based zero-shot classifier “bart-large-mnli” to demonstrate the efficacy of our approach.
The details of the classifier are presented in Appendix <ref>.
The results of the concept-level hallucination detection experiment are presented in Table <ref>.
CLUE achieves remarkable performance, significantly outperforming the baseline method in detecting hallucinations.
Due to the disparity in units between our method and sequence-level uncertainty, direct comparisons of hallucination detection performance with previous methods are not feasible.
Table <ref> provides an example to illustrate that the primary issue with sequence-level uncertainty lies not in its performance but in its unit.
The ablation studies on the thresholds of concept scores are presented in Appendix <ref>.
§.§.§ Human Study
To show that concept-level uncertainty is easier for humans to comprehend, we conduct an experiment directly comparing it with sequence-level uncertainty through human evaluation.
We generate 100 instances, each comprising a question, along with 2 output sequences and 2 extracted concepts.
One sequence and concept exhibit high uncertainty, while the other sequence and concept demonstrate low uncertainty.
We treat this task as a binary classification problem and assess the accuracy of using uncertainty to predict the irrelevant option.
We employ SelfCheckGPT-NLI () as the sequence-level method for comparison.
The instances are labeled using Amazon Mechanical Turk (AMT), where MTurkers are asked to select the concept and sequence they deem more relevant to the given question, as presented in Figure <ref> and Figure <ref> in the Appendix.
To ensure the reliability of human annotations, we assign five distinct MTurkers to each instance.
The label of each instance is determined based on the option selected by the majority of the MTurkers, i.e. more than 2.
The results are presented in Table <ref>.
Our concept-level method exhibits a 33% higher accuracy compared to the sequence-level approach.
Our findings indicate that concept-level uncertainty correlates more closely with MTurkers' judgments.
This suggests that CLUE serves as a more effective indicator of the relevance of generated information to the question.
§.§ Conceptual Diversity Metric for Story Generation
As detailed in Appendix <ref>, previous diversity metrics fall short in capturing high-level features such as tone or genre of generated stories.
In this section, we extend the application of our framework to serve as a conceptual diversity metric in story generation.
§.§.§ Method
Since uncertainty cannot directly be used to represent diversity, we define a two-level concept structure: an upper-level concept representing a conceptual feature of generated stories, with lower-level concepts as its subclasses.
For example, consider the overarching concept “tone”, which includes more specific sub-concepts like “happy tone”, “sad tone”, “humorous tone”, and so forth.
We measure the diversity of the upper-level concept by aggregating the uncertainty of its lower-level concepts.
Given that high uncertainty in lower-level concepts indicates that fewer generated stories are considered as the same subclasses, the aggregated uncertainty of lower-level concepts can be regarded as the diversity of the upper-level concept.
We further propose two aggregation functions: the harmonic mean and entropy.
The former directly measures the harmonic mean of the uncertainty of all lower-level concepts, while the latter treats it as a multi-class classification problem and measures the entropy of the classes.
The equations are listed below:
Harmonic mean = M/∑_j=1^M 1/U(c_j),
Entropy = -∑_j=1^M n(c_j)/Nlog(n(c_j)/N),
where c_j denotes the j-th lower-level concept in this experiment, N is the number of samples, M is the number of concepts, and n(c_j) is the number of samples classified as c_j:
n(c_j) = ∑_i δ_j k_i, where k_i = argmax_k f(o_i, c_k) is the index of the highest-scoring concept for sample o_i, and
δ_jk =
1 if j = k
0 otherwise.
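Both aggregation functions are straightforward to implement. Below is a minimal sketch, assuming the concept scorer f is available as a `scorer_fn(output, concept)` callable; the names are ours.

```python
import math

def harmonic_mean_diversity(uncertainties):
    """Harmonic mean of the lower-level concept uncertainties U(c_j)."""
    return len(uncertainties) / sum(1.0 / u for u in uncertainties)

def entropy_diversity(outputs, concepts, scorer_fn):
    """Assign each story to its highest-scoring concept, then take the entropy."""
    counts = [0] * len(concepts)
    for o in outputs:
        scores = [scorer_fn(o, c) for c in concepts]
        counts[scores.index(max(scores))] += 1   # n(c_j) for the argmax concept
    n = len(outputs)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)
```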
§.§.§ Qualitative Analysis
To illustrate, we create 1000 stories by prompting LLMs to generate stories with a happy tone.
We define a set of two-level concepts with an upper-level concept “tone” and 5 lower-level concepts “happy tone”, “sad tone”, “humorous tone”, “serious tone”, and “romantic tone”.
As depicted in Table <ref>, the concept scorer effectively identifies the stories with a happy tone, resulting in significantly lower uncertainty compared to the other lower-level concepts.
Consequently, in the harmonic mean function, the low uncertainty term predominates in the denominator, leading to low diversity.
We further create datasets with different diversity to evaluate our metrics.
The experimental details are listed in Appendix <ref>.
§ CONCLUSION
In this paper, we propose a novel framework for Concept-Level Uncertainty Estimation (CLUE) for LLMs.
Our framework separates sequences into multiple concepts and measures their uncertainty individually, successfully addressing the information entanglement issue.
We showcase the versatility of our framework by applying it to hallucination detection and as a conceptual diversity metric for story generation.
We hope the proposed concept-based approach can achieve a more “interpretable” uncertainty estimation and can facilitate the interaction between human and LLMs.
§ LIMITATIONS
First, a key limitation of CLUE is its dependency on the chosen LLM for concept extraction and the specific concept scorer utilized.
In this work, we generate a prompt with a one-shot example to improve the consistency of concept extraction.
In future work, we will explore employing alternative white box methods for concept extraction to enhance the reliability of our framework.
Second, the lack of high-level feature diversity metrics for story generation prevents us from benchmarking CLUE's performance.
However, given the customizable nature of our framework's two-level concept structure, it remains applicable across more scenarios.
In future work, we aim to propose a benchmark for high-level feature diversity measurement in story generation, with CLUE serving as the baseline.
§ ETHICAL CONSIDERATION
We propose a framework for LLMs to estimate the concept-level uncertainty of generated content.
The method is designed to improve LLMs' interpretability and improve human-LLM interactions. However, we do believe there could be certain risks if human over-trust the proposed uncertainty estimation tool. For example, there could be implicit biases in LLMs so that the generated biased content will be associated with low uncertainty. Therefore, when using the uncertainty estimation tool, we need to keep in mind that the estimation is measuring the LLM-generated uncertainty, not the true uncertainty of a particular concept. On the other hand, it is also possible that uncertainty estimation is manipulated by adversarial attacks, and further studies are required to improve the robustness of uncertainty estimation against those attacks.
§ NLI-BASED ZERO-SHOT TEXT CLASSIFIER
An NLI-based zero-shot text classifier operates by predicting three logits, each representing the degree of the relationship between the premise and the hypothesis for the labels: “entailment”, “contradiction” and “neutral”.
Following the instructions from the bart-large-mnli website, we disregard the “neutral” label and apply a softmax layer to the remaining two logits to derive the probability associated with the “entailment” label:
f(o_i,c_j) = σ(cls(o_i,c_j))_entailment
σ(x)_i = exp(x_i)/∑_j exp(x_j)
where σ denotes the softmax function, cls denotes the classifier's remaining (contradiction, entailment) logits, and the subscript selects the entailment component.
In our framework, we employ the NLI-based zero-shot text classifier for three purposes: concept consolidation, the concept scorer, and the baseline method for the hallucination detection task.
For concept consolidation, the classifier computes the mutual entailment score of extracted concepts as their similarity.
One concept serves as the premise, and the hypothesis is generated by transforming the other concept into the following format: “This concept is similar to PREMISE_CONCEPT”.
Regarding the concept scorer, the classifier treats the output sequence as the premise and generates the hypothesis for each concept by transforming it into the following format: “This example is about CONCEPT”.
As for the baseline method, the classifier considers the question as the premise and generates the hypothesis for each concept by transforming it into the following format: “This question is relevant to CONCEPT”.
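A sketch of this computation with raw model logits is given below. The (contradiction, neutral, entailment) label order is taken from the model's published configuration, and the premise/hypothesis strings are assumed to already follow the templates above.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")
nli = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli")

def entailment_prob(premise, hypothesis):
    """Drop the neutral logit and softmax over (contradiction, entailment)."""
    inputs = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = nli(**inputs).logits[0]   # order: contradiction, neutral, entailment
    two = logits[[0, 2]]                   # discard the "neutral" logit
    return torch.softmax(two, dim=0)[1].item()   # probability of entailment
```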
§ ABLATION STUDIES OF HALLUCINATION DETECTION
We further conduct ablation studies on the thresholds of concept scores. As illustrated in Figure <ref>, our method demonstrates better performance across all datasets when employing tighter thresholds – specifically, a higher θ_h and a lower θ_l. This observation implies that the scores predicted by our concept scorer effectively reflect the concept's faithfulness.
§ DIVERSITY METRIC FOR STORY GENERATION
§.§ Related Work
Extensive research has leveraged LLMs for story generation tasks, and various metrics have also been introduced to evaluate the diversity of generated stories.
Existing metrics commonly rely on quantifying diversity through measures such as the count of distinct n-grams (, , , ), or by employing BLEU or ROUGE scores (, , , , ).
However, these metrics are confined to measuring lexical diversity and fail to capture high-level features such as tone or genre in story generation.
While some diversity metrics based on text embeddings have been proposed to address this limitation (, ), their applicability to story generation tasks remains unexplored.
§.§ Evaluation of Diversity Metric
To evaluate the effectiveness of our method as a diversity metric, we create three small datasets containing stories generated in different tones, as illustrated in Table <ref>.
These datasets exhibit distinct distributions, with the highest expected diversity in the uniform distribution dataset and the lowest diversity in the single-class dataset.
We utilize the prompt “Generate a story in happy/sad/humorous/serious/romantic tone in five sentences.” to generate the stories.
The experiment results are presented in Table <ref>, demonstrating that the two proposed diversity metrics both effectively capture the diversity of the upper-level concept 'tone'.
|
http://arxiv.org/abs/2409.03110v1 | 20240904223317 | MSTT-199: MRI Dataset for Musculoskeletal Soft Tissue Tumor Segmentation | [
"Tahsin Reasat",
"Stephen Chenard",
"Akhil Rekulapelli",
"Nicholas Chadwick",
"Joanna Shechtel",
"Katherine van Schaik",
"David S. Smith",
"Joshua Lawrenz"
] | eess.IV | [
"eess.IV",
"cs.CV"
] |
§ INTRODUCTION
A musculoskeletal soft tissue tumor (MSTT) is an abnormal growth or mass that develops within the
soft tissues of the body that support and connect the musculoskeletal system <cit.>. Soft tissues encompass a
variety of structures, including muscles, tendons, ligaments, fat, blood
vessels, nerves, and connective tissues. MSTTs can arise from any
of these tissues and can be either benign (non-cancerous) or malignant
(cancerous).
Benign MSTTs, such as lipomas or fibromas, typically grow slowly
and do not invade surrounding tissues or metastasize to other parts of the body.
Malignant MSTTs, on the other hand, can be aggressive and have the
potential to spread to nearby organs or distant sites, posing a more significant
health risk <cit.>.
Tumor segmentation allows for precise delineation of the tumor boundaries,
providing accurate measurements of size and shape. This information is
essential for disease staging and determining the appropriate treatment
strategy <cit.>. Moreover, precise segmentation facilitates the
monitoring of tumor progression and response to therapy over time, enabling
clinicians to make timely adjustments to the treatment plan
<cit.>. Additionally, segmentation enables the identification of
heterogeneous regions within the tumor allowing diagnosis of varying levels of
malignancy <cit.>.
A crucial step of building an automated model
that identifies benign and malignant tumors is the manual segmentation of the
tumor <cit.>.
Segmentation is challenging, as tumor appearance can vary in shape, intensity,
and tissue composition <cit.>. Additionally, the presence of
artifacts such as noise, motion, and magnetic susceptibility can further obscure
tumor boundaries. Moreover, the lack of standardized protocols for acquiring
MRI data leads to variations in image contrast and quality across different
institutions and scanners, adding to the difficulty.
Even if the clinician has sufficient expertise, manual delineation of tumors in three dimensions is
a time-consuming process, taking up to half an hour per
MRI volume <cit.>.
In recent times, researchers have strived to automate the MSTT
segmentation process by employing various classical machine learning
<cit.> and deep learning <cit.> based methods. Researchers have explored models
that take both single and multimodal images (MRI, PET, CT scans) as input and
predict tumor segmentation. These models have been trained and evaluated using a
small dataset of 51 patients presented in <cit.>.
The progress of automatic MSTT
segmentation models has lagged due to the unavailability of large diversified datasets.
To address this problem we have curated a dataset and trained a segmentation
model using the data. The contributions of this paper are fourfold:
* We created an MSTT segmentation dataset with 199 patients
and plan to make it publicly available for future research;
* We described our process of selecting the patients, setting up the
labeling platform, the annotation protocol, and the curation method;
* We created a segmentation model based on the curated data which
achieves state-of-the-art (SOTA) result on the only available public
dataset and analyzed the results; and
* We identified easy versus hard-to-detect tumor types and made
suggestions for future model development as well as data collection.
The paper is organized in the following sections. Section <ref>
contains the detailed creation of the dataset. Section <ref> includes
the architectural explanation of the segmentation models used in this work.
Section <ref> contains the experimental setup and result analysis.
And finally the paper is concluded in Section <ref>.
§ MSTT-199: DATASET DESCRIPTION
In this section, we describe the
process of patient selection, data annotation, annotation protocol, and data
curation method.
§.§ Patient Selection
After receiving institutional review board approval, we queried our
institution's orthopaedic oncology registry, which includes all patients treated
for an MSTT at our institution since 1987. Using
this registry, we initially identified 2,639 patients who underwent definitive
oncological resection at Vanderbilt University Medical Center and had one of the
following diagnoses on final pathological review: schwannoma (benign nerve
tumor), MPNST (malignant peripheral nerve sheath tumor), well-differentiated liposarcoma (benign fat tumor), dedifferentiated liposarcoma (malignant fat tumor), desmoid fibromatosis (benign fibrous tumor), undifferentiated pleomorphic sarcoma
(malignant fibrous tumor), hemangioma or arteriovenous malformation (benign
vascular tumor), angiosarcoma (malignant vascular tumor), myxoma (benign myxoid
tumor), or myxoid fibrosarcoma (malignant myxoid tumor). The tumors were divided into five broad tissue-type categories: Fibrous, Fat, Myxoid, Nerve, and Vascular.
We retrospectively
reviewed the electronic health records of patients in reverse chronological
order and sequentially included patients who had a tumor with largest dimension
greater than 3 cm, and a pre-operative MRI that included both an axial T1 and an
axial T2 fat-saturation sequence. Patients with MRI sequences that were deemed
to be incomplete or of poor image quality as determined by a board-certified
orthopedic oncologist or musculoskeletal radiologist were excluded. Records were
reviewed until ∼40 eligible patients with each tumor tissue type (Fibrous, Fat,
Myxoid, Nerve, Vascular) were included, or all available records were exhausted.
This resulted in a collection of 199 patients with 199 MSTTs.
§.§ Labeling Platform
We selected LabelStudio <cit.> as our data annotation platform.
LabelStudio provides a web browser-based labeling platform that has minimal user
setup overhead at the annotator's end. Radiologists can log in through the
provided URL, create an account, and start annotating the assigned image.
LabelStudio provides a customizable user management system through which user
activity and dataset progress can be tracked with ease.
Both T1- and T2-weighted images were present for each tumor and registered
using ANTs <cit.> algorithm on the 3D Slicer software
<cit.>. The smaller image of the pair was also resampled to match
the resolution of the larger image. Following registration, the axial slices of
MRI images were exported to PNG files and listed on the project along with
necessary metadata such as patient ID, modality, type of tumor, slice instance
number, anatomy, etc. While logged in to the platform, the annotator could
search images via patient ID and modality and sort by instance number. After
selecting an image, the annotator drew segmentation masks on top of the tumor
area using a brush tool. A screenshot of the annotation user interface is shown
in Fig. <ref>. The size of the brush tool can be varied for fine
or coarse masks. The choice of brush size creates a trade-off between accuracy
and annotation time.
§.§ Annotation Protocol
Annotations were done primarily on the T2-weighted images, except for the fat tumors, which were mostly annotated on the T1-weighted images due to better tumor visibility. The axial slices of the images were uploaded to the privately deployed LabelStudio annotation platform <cit.>, where the annotators logged in to annotate and submit their annotations.
The data was annotated in three stages. In the first stage, a radiologist (K)
annotated the center slice of the tumor present in each MRI. In the second
stage, three annotators (S, A, T) annotated the adjacent slices following the guiding
annotations done by K. The questions or confusion that arose during the second
stage were all resolved via discussion with the radiologist.
In the final stage, the complete annotations were sent to the radiologists, J,
K, and N
for final review. The patients were randomly divided among the three
radiologists and had one radiologist review per image.
The radiologists identified tumor regions by looking at T2 signal hyperintensity or edema on fluid-sensitive sequences and by using the contrast between normal muscle or tumors and the surrounding fat planes on the conventional T1 sequences. During annotation, most of the central mass was demarcated; however, some tumors may have a narrow tail-like extension from the margins of the mass, which might be excluded. The radiologists took image noise and artifacts into consideration during interpretation of the MR images and discounted them without difficulty.
§.§ Dataset Statistics
The tissue type statistics and anatomy distribution in the dataset are shown in
Table <ref>, and examples of tissue images are
shown in Fig. <ref>. Although we assembled a balanced
dataset for each tissue type, it was difficult to keep the anatomical location
distribution balanced. MSTTs are most prevalent in the extremities
with very few occurrences in the trunk or head and neck region
<cit.>.
Fig. <ref> shows the tumor size variation for different
tissue types. The Fat tumors are usually larger in size, and the nerve and
vascular tumors are comparatively smaller.
Fig. <ref> shows the intensity distribution across the
tissue types. For fibrous, nerve, and vascular tissue types there is an overlap
in the distribution of the intensities from the two modalities. However, Fat
tumors show higher intensity on T1 (Fig. <ref>), whereas the myxoid tumors show a higher intensity
distribution on T2. This justifies our choice to annotate the fat
tumors on the T1 modality and the myxoid tumors on the T2 modality.
§ SEGMENTATION MODELS
In this section, we describe the segmentation model architectures and the loss function used to train the model.
§.§ Architecture
We used the U-Net <cit.> architecture as our segmentation model.
The U-Net segmentation model is a convolutional neural network architecture
initially designed for biomedical image segmentation tasks
<cit.>. U-Net has two main components: an encoder and a decoder.
The encoder downsamples the input image in steps and reduces it to a
representation rich in information but with the loss of spatial resolution. The
decoder takes this representation and gradually upsamples it to reconstruct a
probability output having the spatial size of the image. The output map is
thresholded to create a binary mask of the foreground object. There are shortcut
connections (or skip connections) between different stages of the encoder and
decoder that have similar feature dimensions which enables information flow
between feature blocks. The encoder, decoder, and shortcut connections create a
U-shaped visualization for the model (Fig. <ref>), hence the name U-Net.
§.§.§ Encoder
The encoder consists of a series of convolutional blocks. Each block typically
consists of two consecutive convolutional layers followed by a rectified linear
unit (ReLU) <cit.> activation function and a max-pooling
layer. The purpose of this path is to capture context and spatial information
from the input image while reducing its spatial dimensions. As the network
progresses through the contracting path, the receptive field increases while the
spatial resolution decreases.
§.§.§ Decoder
The decoder is responsible for upsampling the feature maps to the original input
resolution. Each block in this path consists of an upsampling operation (usually
transposed convolution or interpolation), followed by concatenation with feature
maps from the encoder path, and then a series of convolutional layers. The
concatenation operation helps in preserving fine-grained details from the
encoder path, facilitating precise segmentation.
§.§.§ Shortcut Connections
One of the key features of U-Net is the skip connections that directly connect
corresponding layers between the encoder and the decoder. These connections
allow the network to bypass the loss of spatial information during downsampling
and aid in the precise localization of objects in the segmentation masks. Skip
connections provide a shortcut for gradient flow during training, helping to
mitigate the vanishing gradient problem and enabling faster convergence.
§.§.§ Final Layer
The final layer of the network consists of a 1× 1 convolutional layer
followed by a sigmoid activation function. This layer produces the segmentation
mask with the same spatial dimensions as the input image, where each pixel
represents the predicted class or label.
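Putting these four pieces together, a minimal U-Net sketch in PyTorch might look like the following; the depth and channel widths are illustrative and do not reproduce the se_resnext50_32x4d encoder used in the experiments.

```python
# A minimal two-level U-Net sketch (illustrative, not the paper's encoder).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions, each followed by ReLU, as in the original U-Net.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=3, base=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)   # concat doubles channels
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, 1, kernel_size=1)  # 1x1 conv of the final layer

    def forward(self, x):
        e1 = self.enc1(x)                       # skip-connection source
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.head(d1))     # per-pixel foreground probability

mask_prob = TinyUNet()(torch.randn(1, 3, 256, 256))  # -> (1, 1, 256, 256)
```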
§.§ Segment Anything Model (SAM)
The segment anything model (SAM) <cit.> is a foundational
model proposed for image segmentation. It has been trained on a large dataset of
diverse images and can be applied to a wider range of segmentation tasks. This
model has been further fine tuned on a large-scale medical image segmentation
dataset with 1,570,263 medical image-mask pairs, covering 10 imaging modalities,
over 30 cancer types, and a multitude of imaging protocols to create the MedSAM
model <cit.>. The SAM model has three components, illustrated in
Fig. <ref>: an image encoder, a flexible prompt encoder, and a fast mask
decoder. These components are described at a high level here.
§.§.§ Image Encoder
SAM utilizes a pre-trained Vision Transformer (ViT) <cit.>
that is adapted to process high-resolution inputs. This encoder runs once per
image and can efficiently process the image before any prompting.
§.§.§ Prompt Encoder
In the original SAM model, two types of
prompts are considered: sparse (points, boxes, text) and dense (masks). These
prompts are transformed into an embedding and combined element-wise with the
image embedding. To make the model fully automated it can be supplied with the
bounding box of the full image and the model predicts masks for all available
foreground objects.
§.§.§ Mask Decoder
The mask decoder takes the image embedding and the prompt embeddings to produce
a mask. It employs a modified transformer decoder block which uses prompt
self-attention and cross-attention to update all embeddings
<cit.>. The output of these attention layers is upsampled,
and a fully connected layer maps the output token to a dynamic linear
classifier, which computes the mask foreground probability at each image
location.
§.§ Loss
Segmentation models are typically trained using stochastic gradient descent which
optimizes a loss function computed over the final layer.
We used binary cross entropy as our loss function:
ℒ_bce = -[ y ln(p) + (1-y) ln(1-p) ], y∈{0, 1}.
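In practice this loss is usually computed in its numerically stable logits form; a minimal PyTorch equivalent (shapes are illustrative) is:

```python
# Binary cross-entropy over the output map; the logits form fuses the sigmoid
# of the final layer with the loss for numerical stability.
import torch
import torch.nn.functional as F

logits = torch.randn(4, 1, 256, 256)                    # raw model outputs
target = torch.randint(0, 2, (4, 1, 256, 256)).float()  # binary masks
loss = F.binary_cross_entropy_with_logits(logits, target)
```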
§ EXPERIMENT AND RESULT ANALYSIS
In this section, we explain the details of the used datasets, along with the
parameters for data preprocessing and model training. Additionally, we analyze
the results produced from the experiments.
§.§ External Dataset
Alongside evaluating the trained models on the test partition of our dataset, we
performed out-of-domain evaluation on a publicly available soft tissue sarcoma
dataset (STS)<cit.>, accessible via The Cancer Imaging
Archive <cit.>. This dataset consisted of 51 patients with
sarcomas located in the extremities, sourced from various sites and scanners,
resulting in high heterogeneity. Each patient's data contained four different
imaging modalities, including two paired MRI scans (T1 and T2) and a PET/CT
scan. The MRI and PET/CT scans were conducted on different days, leading to
variations in body positioning and anatomy. Tumor annotations are already
provided in the dataset, delineated on the T2 scans. Additionally, we
co-registered each T1 image onto the corresponding T2 image using the ANTs
<cit.> algorithm in 3D Slicer. However, while processing
the dataset, we found two image pairs
with excessive movement between the modalities, which were excluded from our
analysis. The processed STS data in the 3D NIfTI format is publicly available at <provide url>.
§.§ Experimental Details
The 3D MRI images were resampled to have a voxel size of 1 mm × 1 mm
× 1 mm. The intensity values of each image were clamped between 0.05% and
99.95% of the distribution of that particular image intensities to exclude
outliers. The images were normalized using the min-max normalization procedure
where the minimum and maximum intensities were image dependent. In the axial,
sagittal, and coronal direction of the volume, 3 consecutive slices were grouped
to create a 2.5D slice. Slices with more than 100 tumor voxels were used to
train the model. We used 5-fold cross-validation over the 199 patients.
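A sketch of this preprocessing, assuming volumes stored as (depth, height, width) NumPy arrays, is given below.

```python
# Intensity clamping, min-max normalization, and 2.5D slice grouping (sketch;
# the (D, H, W) array layout is an assumption).
import numpy as np

def preprocess(volume: np.ndarray) -> np.ndarray:
    lo, hi = np.percentile(volume, [0.05, 99.95])  # clamp intensity outliers
    volume = np.clip(volume, lo, hi)
    return (volume - volume.min()) / (volume.max() - volume.min() + 1e-8)

def to_25d_slices(volume: np.ndarray, masks: np.ndarray, min_voxels: int = 100):
    # Group 3 consecutive axial slices into one 3-channel 2.5D input and keep
    # only slices whose mask contains more than `min_voxels` tumor voxels.
    samples = []
    for i in range(1, volume.shape[0] - 1):
        if masks[i].sum() > min_voxels:
            samples.append((volume[i - 1:i + 2], masks[i]))
    return samples
```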
The input size to the model was 256 × 256. Padding or cropping was done if
needed. The augmentations used were random crop, horizontal, vertical flip,
gamma, brightness, contrast, Gaussian blur, motion blur, and grid distortion.
For the U-Net encoder, we used the pre-trained se_resnext50_32x4d
<cit.> model and for the SAM image encoder we used the
LiteMedSam model (a smaller version of the MedSAM model with similar
performance) weights, which were provided in <cit.>. While training the
segmentation models we fine tuned both the encoder and the decoder. The
performance of the models was evaluated using Dice coefficient (Dice),
i.e.,
Dice = 2TP/( 2TP+FP+FN ).
Here, TP is true positive, FP is false positive, FN is false negative.
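For binary masks this reduces to twice the intersection over the sum of the two mask sizes; a small NumPy sketch:

```python
# Dice coefficient for binary masks: 2TP/(2TP+FP+FN) = 2|A∩B|/(|A|+|B|).
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)
```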
The full details of the training parameters are listed in Appendix <ref>. During inference, the images were resized to 256 × 256. We applied test-time augmentations (horizontal and vertical flips and a 90-degree rotation) to each slice and took the average output. For each image volume, we conducted inference in the axial, coronal, and sagittal directions and averaged the output.
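The test-time augmentation step could be sketched as follows, assuming square inputs so the 90-degree rotation is invertible.

```python
# Average model probabilities over flips and a 90-degree rotation, undoing
# each transform on the corresponding output before averaging.
import torch

def predict_tta(model, x):  # x: (B, C, H, W) with H == W
    outs = [model(x)]
    outs.append(torch.flip(model(torch.flip(x, dims=[-1])), dims=[-1]))  # h-flip
    outs.append(torch.flip(model(torch.flip(x, dims=[-2])), dims=[-2]))  # v-flip
    rot = torch.rot90(x, 1, dims=[-2, -1])
    outs.append(torch.rot90(model(rot), -1, dims=[-2, -1]))              # rot90
    return torch.stack(outs).mean(dim=0)
```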
§.§ Result Analysis
In Table <ref>, we report the mean dice score achieved in the
5-fold cross-validation experiment in the MSTT-199 dataset as well as the STS
dataset and compare it with the existing model in the literature. For the STS
dataset, there is no predefined test set, so we evaluate our model on the whole
dataset. The existing Multi-Branch U-Net <cit.> model uses a
5-fold cross-validation approach (which involves domain-specific training).
Additionally, it uses multiple imaging modalities (MRI and PET) as input.
In contrast, our simpler U-Net, with no domain-specific training, outperforms the
existing benchmark. This shows the diversity and usefulness of our dataset.
Lower Dice scores on the MSTT-199 test domain show that our dataset has much harder
samples compared to STS. Additionally, we observe that the LiteMedSAM model does
not outperform the U-Net-based model. This is probably because the LiteMedSAM did
not have an STT segmentation dataset in its large-scale pretraining phase and
may not have learned features that are related to the STT segmentation task. Additionally, its pretraining task was a semi-supervised prompting-based approach where a tumor bounding box was provided alongside the images. Without this additional information, the model performance seems to suffer.
In Table <ref>, we report the average Dice score
obtained across tissue types and show the distribution spread using boxplots in
Fig. <ref>. The models perform best in segmenting myxoid
and Fat tumors and perform worst in fibrous and vascular tissue types. Fat and
myxoid tumors generally have a large homogeneous structure and are easy to
differentiate from the background. Fibrous and vascular tissue often have a
heterogeneous structure making it difficult to differentiate from the
surrounding tissue. We visually confirm our observations in Fig. <ref>. A larger list of prediction failure montages
can be found in Appendix <ref>.
§.§ Effect of Volume
In general, larger tumor volumes are easier to segment for the model. Fig. <ref> shows the performance variation of the model across
different tissue types. Especially for the fibrous tissue type, there is a clear
trend of performance increasing with volume.
§.§ Effect of Anatomy
In Table <ref>, we report the average Dice scores
across different tumor locations. As expected, the model does better in the
extremities compared to the non-extremity locations due to having more sample
representation in the training set.
§.§ Effect of Tumor Intensity
There is a weak positive correlation between average tumor intensity in the T1
image and Dice scores for fat tissue (Fig. <ref>). This
is expected, as lipid content is more distinguishable in the T1 image. However,
for the rest of the tissue types, an increase in brightness has no positive effect
on tumor localization, as those tissues are generally darker in the T1 image.
§.§ Suggestion on Future Data Collection
Although our dataset is large enough to train a state-of-the-art model, our result analysis
suggests a dire need for a larger dataset to capture more diversity of
tissue structures. Although we have set up a balanced dataset in terms of tissue
types, there is an imbalance in terms of anatomy. In Table
<ref>, we see fibrous and vascular tissue
types have diverse anatomical representations. As observed in Table
<ref> and Table <ref>, these
tissue types have poorer Dice scores compared to myxoid and Fat, even in
the most common anatomical locations such as the extremities. The lower
representation of these difficult tissue types compounds their challenging
visual characteristics (small size, unclear tumor boundaries) and worsens the
learning capability of the model. Future iterations of this dataset should focus
on collecting more of these less representative and diverse tissue types.
§ CONCLUSION
In this work, we have described the creation of an MSTT dataset. We
have trained a segmentation model on this dataset and benchmarked its
performance on a publicly available dataset, achieving state-of-the-art results on MSTT segmentation. Results show that the segmentation models
work well for fat, myxoid, and nerve tumors but struggle to segment
fibrous and vascular tumors. The segmentation model is sensitive to the
volume of the tissue as well as the tumor location. Although this is the largest
MSTT segmentation dataset created, the size of the dataset needs to be
increased further to make the segmentation models more robust. Special priority needs to
be given to tumors with fibrous and vascular tissue types, as they have
diverse anatomical locations and challenging visual characteristics
compared to myxoid, fat, and nerve tissue.
§ CORRELATION OF TUMOR INTENSITY STATISTICS AND DICE SCORE
The relation between the standard deviation of tumor intensity and the Dice score is shown in Fig. <ref> and Fig. <ref>, and the relation for the coefficient of variation of intensity in Fig. <ref> and Fig. <ref>. Whether these relations are meaningful, and which additional intensity features are worth exploring, remains open for future analysis.
§ TRAINING PARAMETERS
List of augmentations used from the package:
* random crop,
* horizontal flip,
* vertical flip,
* random gamma,
* random brightness,
* random contrast,
* Gaussian blur,
* motion blur,
* grid distortion.
List of model parameters
* Learning rate: 1e-4
* Batch size: 16
* Epochs: 5
§ PREDICTION FAILURES
A list of images where the segmentation model has failed.
|
http://arxiv.org/abs/2409.02797v2 | 20240904151447 | Joint Beamforming for Backscatter Integrated Sensing and Communication | [
"Zongyao Zhao",
"Tiankuo Wei",
"Zhenyu Liu",
"Xinke Tang",
"Xiao-Ping Zhang",
"Yuhan Dong"
] | eess.SP | [
"eess.SP"
] |
Joint Beamforming for Backscatter Integrated Sensing and Communication
Zongyao Zhao^1,2, Tiankuo Wei^1,2, Zhenyu Liu^1, Xinke Tang^2, Xiao-Ping Zhang^1, Yuhan Dong^1,2,*
^1Shenzhen International Graduate School, Tsinghua University, Shenzhen, P. R. China
^2Pengcheng Laboratory, Shenzhen, P. R. China
Email: [email protected],
[email protected],
[email protected],
[email protected],
[email protected],
[email protected]
September 9, 2024
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Integrated sensing and communication (ISAC) is a key technology of next-generation wireless communication. Backscatter communication (BackCom) plays an important role in the internet of things (IoT). The integration of ISAC with BackCom technology enables low-power data transmission while enhancing the system's sensing ability, which is expected to provide a potentially revolutionary solution for IoT applications. In this paper, we propose a novel backscatter-ISAC (B-ISAC) system and focus on the joint beamforming design for the system. We formulate the communication and sensing model of the B-ISAC system and derive the metrics of communication and sensing performance, i.e., communication rate and detection probability, respectively. We propose a joint beamforming scheme aiming to optimize the communication rate under a sensing constraint and power budget. A successive convex approximation (SCA) based algorithm and an iterative algorithm are developed for solving the complicated non-convex optimization problem. Numerical results validate the effectiveness of the proposed scheme and associated algorithms. The proposed B-ISAC system has broad application prospects in IoT scenarios.
Integrated sensing and communication (ISAC), backsactter communication (BackCom), passive tag.
§ INTRODUCTION
Integrated sensing and communication (ISAC) technology has recently emerged as a candidate technology of the next generation wireless network, which aims to integrate sensing and communication into one system to improve spectrum efficiency and hardware efficiency while providing sensing and communication service simultaneously<cit.>. As a promising technology for low-power communication, backscatter communication (BackCom) uses radio frequency (RF) tags to enable passive communication links by scattering RF signals to the reading device <cit.>.
The combination of BackCom with ISAC promises to combine the advantages of both technologies to enable energy-efficient passive communication while improving sensing performance, especially for internet of things (IoT) applications. The authors in <cit.> further considered the detection requirement of RF tags and developed a joint beamforming design for an ISAC system with backscatter tags, which minimizes the total transmit power while meeting the tag detection and communication requirements. In summary, research on joint beamforming schemes for backscatter integrated sensing and communication is scarce, and this novel paradigm thus requires further exploration. Moreover, existing schemes only focus on optimizing energy consumption; the optimization of the communication rate remains to be explored.
In this paper, we propose a novel backscatter-ISAC system and design a joint beamforming scheme to optimize communication rate under the sensing constraint and power budget. Specifically, we model the signals received at the tag, user equipment (UE), and access point (AP). Further, we establish the communication and sensing model and derive the metrics of communication and sensing performance respectively, i.e., communication rate and detection probability. Then, we propose a joint beamforming scheme aiming to optimize the UE communication rate while ensuring the tag is working normally. We develop a successive convex approximation (SCA) based algorithm and an alternating algorithm for the joint beamforming scheme. Extensive simulation validate the effectiveness of the proposed scheme and associated algorithms.
The proposed B-ISAC system fully utilizes the passive backscatter
properties of RF tags to achieve low-power communication and sensing capabilities. It is expected to have broad application
prospects in IoT scenarios.
The remainder of this paper is organized as follows. Sec. <ref> introduces the B-ISAC system and signal model. The joint beamforming scheme for communication rate optimization is proposed in Sec. <ref>. Numerical results are presented in Sec. <ref>. Finally, the conclusions are drawn in Sec. <ref>.
Notation: In this paper, boldface lower-case and upper-case letters denote vectors and matrices respectively. ℝ and ℂ represent the real and complex sets respectively. |·|, ||·||, and ||·||_F are absolute value, Euclidean norm, and Frobenius norm, respectively. ( ·)^-1 and ( ·)^† denote the inverse and pseudo inverse, respectively. ( ·)^T, ( ·)^*, and ( ·)^H represent transpose, complex conjugate, and Hermitian transpose, respectively. 𝔼( ·) represents statistical expectation. Re{·} returns the real part of a complex number. j is the imaginary unit, which means j^2=-1. 𝐈_N is the N× N identity matrix. 1=[1,1,…,1]^T∈ℝ^N. 𝐀≽0 means that 𝐀 is a positive semidefinite matrix. ⊙ represents the Hadamard product. diag(𝐚) returns a diagonal matrix, the vector composed of its diagonal elements is 𝐚. Tr(𝐀) and rank(𝐀) compute the trace and rank of matrix 𝐀 respectively. chol(𝐀) returns the Cholesky decomposition of matrix 𝐀. vec(𝐀) vectorizes matrix 𝐀 by column-stacking.
§ SYSTEM AND SIGNAL MODEL
§.§ System Model
As shown in Fig. <ref>, the B-ISAC system is composed of an AP, a UE, and a passive tag. The AP facilitates communication service to the UE, while providing sensing and communication support for the tag. The tag receives the signal transmitted by the AP to decode downlink data and modulates the uplink data onto the backscattered signal to achieve two-way data transmission. At the same time, the AP receives the backscattered signal and carries out further processing to obtain sensing information, such as position and status. For the UE, the backscattered signal from the tag can be regarded as interference.
It is assumed that the AP is equipped with uniform linear transmit and receive arrays with N_t and N_r antenna elements, respectively. There is λ/2 inter-spacing between neighboring antenna elements, where λ is the carrier wavelength. Without loss of generality, we assume N_t ⩽ N_r. The tag and UE are equipped with a single antenna, respectively. It is also assumed that the self-interference between the transmit and receive arrays can be ignored.
Let 𝐗∈ℂ ^N_t× L denote the transmitted signal at the AP, where L is the length of the transmitted signal such that L>N_t. 𝐗 can be expressed as
𝐗=𝐖𝐒=𝐰_u𝐬_u^H+𝐰_t𝐬_t^H+𝐖_s𝐒_s∈ℂ ^N_t× L,
where the joint beamforming matrix 𝐖 and the data augmentation matrix 𝐒 are respectively given by
𝐖 =[𝐰_u,𝐰_t,𝐰_1,𝐰_2,…,𝐰_N_t] ∈ℂ ^N_t× (N_t+2),
𝐒 =[ 𝐬_u,𝐬_t,𝐬_1, 𝐬_2 …,𝐬_N_t] ^H∈ℂ ^(N_t+2)× L,
where 𝐰_u∈ℂ ^N_t×1 and 𝐰_t∈ℂ ^N_t×1 are beamforming vectors for the UE and tag, respectively. Vectors 𝐬_u∈ℂ ^L×1 and 𝐬_t∈ℂ ^L×1 are data streams for the UE and tag, respectively. The additional probing stream [𝐬_1,𝐬_2,...,𝐬_N_t]^H =𝐒_s∈ℂ^N_t× L is introduced to extend the sensing degrees of freedom (DoF) of the transmit waveform. In this way, the DoF of the transmit waveform can be extended to its maximum value <cit.>. Note that these dedicated probing streams are deterministic and do not contain any information. Moreover, [𝐰_1,𝐰_2,...,𝐰_N_t] =𝐖_s∈ℂ^N_t× N_t is the dedicated auxiliary beamfoming matrix corresponding to the dedicated probing streams.
It is assumed that there is no correlation between dedicated probing streams 𝐒_s and data streams [𝐬_u,𝐬_t]^H. When L is sufficiently large, the random sample covariance matrix of the random communication data streams [𝐬_u,𝐬_t]^H can be approximately considered to satisfy 1/L[𝐬_u,𝐬_t]^H[𝐬_u,𝐬_t] ≈𝐈_2.
At the same time, the dedicated probing streams 𝐒_s is also elaborated to satisfy 1/L𝐒_s𝐒_s^H=𝐈_N_t. Therefore, 𝐒 satisfies
1/L𝐒𝐒^H≈𝐈_N_t+2.
The sample covariance matrix of waveform 𝐗 is given by
𝐑_𝐗=1/L𝐗𝐗^H≈𝐖𝐖^H∈ℂ ^N_t × N_t,
Then, the beam pattern of the transmit waveform is
P( θ) = 𝐚^H( θ) 𝐑_𝐗𝐚( θ),
where 𝐚( θ)= [ 1,e^jπsinθ,e^2jπsinθ,...,e^( N_t-1 ) jπsinθ] ^T ∈ℂ ^1 × N_t represents the steering vector of the transmit array, and θ represents the azimuth angle with respect to the array.
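As an illustration, the beampattern can be evaluated numerically from a beamforming matrix as sketched below (NumPy, with a random 𝐖 standing in for an optimized one).

```python
# Beampattern P(theta) = a^H(theta) R_X a(theta) for a half-wavelength ULA.
import numpy as np

def steering(theta, n_t):
    # a(theta) = [1, e^{j*pi*sin(theta)}, ..., e^{j*(n_t-1)*pi*sin(theta)}]
    return np.exp(1j * np.pi * np.arange(n_t) * np.sin(theta))

N_t = 16
W = (np.random.randn(N_t, N_t + 2) + 1j * np.random.randn(N_t, N_t + 2)) / np.sqrt(2 * N_t)
R_x = W @ W.conj().T                                  # R_X ~ W W^H, eq. (5)
thetas = np.linspace(-np.pi / 2, np.pi / 2, 361)
P = [np.real(steering(t, N_t).conj() @ R_x @ steering(t, N_t)) for t in thetas]
```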
§.§ Backscatter Model
The signal 𝐲_t∈ℂ^1× L received by the tag is given by
𝐲_t =𝐡_f𝐗+𝐧_t
=𝐡_f𝐰_u𝐬_u^H+𝐡_f𝐰_t𝐬_t^H+𝐡_f𝐖_𝐬𝐒_𝐬+𝐧_t,
where 𝐡_f∈ℂ ^1× N_t denotes the channel from the AP to tag, 𝐧_t∈ℂ ^1× L is the receiver noise vector at the tag, which is assumed to follow a zero-mean complex Gaussian distribution as 𝐧_t∼𝒞𝒩( 0,σ_t^2𝐈_L ). On the right hand side of (<ref>), the first part 𝐡_f𝐰_u𝐬_u represents the interference caused by the UE communication data, the second part 𝐡_f𝐰_t𝐬_t represents the desired signal for the tag, the third part 𝐡_f𝐖_𝐬𝐒_𝐬 represents the interference caused by the dedicated probing stream, and 𝐧_t is the noise vector. Therefore, the signal-to-interference-plus-noise ratio (SINR) of the signal received at the tag can be expressed as
γ _t =𝔼( 𝐡_f𝐰_t𝐬_t^H^2 )/𝔼( 𝐡_f𝐰_u𝐬_u^H^2 ) +𝔼( 𝐡_f𝐖_s𝐒_s ^2 ) +𝔼( 𝐧_t^2 )
=|𝐡_f𝐰_t|^2/|𝐡_f𝐰_u|^2+𝐡_f𝐖_s ^2+σ _t^2.
Then, the tag modulates the uplink data onto the backscattered signal. The remodulated backscattered signal 𝐲_b∈ℂ ^1× L reflected by the tag is given by
𝐲_b= √(α)𝐲_t⊙𝐜_t
= √(α)𝐡_f𝐗⊙𝐜_t+√(α)𝐧_t⊙𝐜_t
where α is the backscatter modulation efficiency coefficient, and 𝐜_t is the uplink data satisfying 𝔼 [|𝐜_t|^2] =1.
Vector 𝐡_b∈ℂ ^N_r× 1 represents the channel from the tag to AP. Then, the backscattered signal received at the AP 𝐘_ap∈ℂ ^N_r × L can be expressed as
𝐘_ap= 𝐡_b𝐲_b+𝐍_ap
= √(α)𝐡_b𝐡_f𝐗⊙𝐜_t+√(α)𝐡_b𝐧_t⊙𝐜_t+𝐍_ap
where 𝐍_ap∈ℂ ^N_r× L is the noise matrix at the AP, which is assumed as vec(𝐍_ap)∼𝒞𝒩( 0,σ_ap^2𝐈_N_rL). Then, we use a receive combine vector 𝐰_r∈ℂ ^1× N_r to combine the signal of N_r antennas. The combined signal 𝐲̃_ap∈ℂ ^1× L is given by
𝐲̃_ap= 𝐰_r𝐘_ap
= √(α)𝐰_r𝐡_b𝐡_f𝐗⊙𝐜_t+√(α)𝐰_r𝐡_b𝐧_t⊙𝐜_t+𝐰_r𝐍_ap.
It is worth noting that the transmit signal 𝐗 is completely known to the AP, so
the uplink data can be decoded using 𝐗. We denote 𝐂_t=diag( 𝐜_t). Consequently, the received SINR at the AP is
γ _ap =𝔼( √(α)𝐰_r𝐡_b𝐡_f𝐗⊙𝐜_t ^2 )/𝔼( √(α)𝐰_r𝐡_b𝐧_t⊙𝐜_t ^2 ) +𝔼( 𝐰_r𝐍_AP ^2 )
=α𝔼( 𝐰_r𝐡_b𝐡_f𝐗𝐂_t𝐂_t ^H𝐗^H𝐡_f^H𝐡_b^H𝐰_r^H)/α𝔼( |𝐰_r𝐡_b𝐧_t𝐂_t𝐂_t^H𝐧_t^H𝐡_b^H𝐰_r^H| ) +𝐰_r ^2σ _ap^2
( a )≈α𝐰_r𝐡_b𝐡_f𝐖𝐖^H𝐡_f^H𝐡_b^H𝐰_r^H/α |𝐰_r𝐡_b|^2σ _t^2+𝐰_r ^2σ _ap^2,
where (a) holds because of 𝔼(𝐂_t𝐂_t^H )=𝐈_L and 𝐗𝐗^H≈ L𝐖𝐖^H.
When the channel 𝐡_b is known to the AP, we could use equal gain combining vector, i.e., 𝐰_r=𝐡_b^H/𝐡_b.
Since 𝐗 is completely known to the AP, the SINR of the received signal at the AP is related to the sample covariance matrix of 𝐗, not just related to the signal 𝐰_t𝐬_t, which is different from <cit.>. In other words, communication signal 𝐰_u𝐬_u and dedicated probing stream 𝐖_s𝐒_s can also help the tag to facilitate backscatter communication.
The data stream 𝐬_t contains a sequence that can activate the passive RF tag for data transmission. Therefore, in order to detect the tag and complete communication transmission, the SINRs at the tag and AP must be greater than a certain threshold to meet their respective sensitivity constraints.
§.§ UE Communication Model
Let 𝐡_u∈ℂ ^1× N_t and h_tu∈ℂ represent the channel between the AP and UE and the channel between the tag and UE, respectively. Then, the received communication signal at the UE 𝐲_u∈ℂ ^1× L can be expressed as
𝐲_u= 𝐡_u𝐗+h_tu𝐲_b+𝐧_u
= 𝐡_u𝐗+h_tu( √(α)𝐡_u𝐗⊙𝐜_t+√(α)𝐧_t⊙𝐜_t) +𝐧_u
= 𝐡_u𝐰_u𝐬_u^H+𝐡_u𝐰_t𝐬_t^H+∑_i=1^N_t𝐡_u𝐰_i𝐬_i^H
+h_tu√(α)𝐡_f𝐰_u𝐬_u^H⊙𝐜_t+h_tu√(α)𝐡_f𝐰_t𝐬_t^H⊙𝐜_t
+∑_i=1^N_th_tu√(α)𝐡_f𝐰_i𝐬_i^H⊙𝐜_t+h_tu√(α)𝐧_t⊙𝐜_t+𝐧_u.
The signal 𝐲_u consists of three parts: the signal 𝐡_u𝐗 from the AP, the interference h_tu𝐲_b from the tag, and the noise vector 𝐧_u at the UE, where 𝐧_u is assumed to satisfy 𝐧_u∼𝒞𝒩( 0,σ_u^2𝐈_L ). The SINR of the signal received by the UE is given in (<ref>). Thus, the communication rate of the UE can be expressed as
R=log _2( 1+γ _u)
§.§ Sensing Model
Tag detection aims to determine whether the tag is present in the environment. The signal received by the AP can be used to perform a signal detection process. We formulate the tag detection problem as a hypothesis testing problem as follows,
ℋ _0:𝐲̃_ap=𝐰_r𝐍_AP
ℋ _1:𝐲̃_ap=[√(α)𝐰_r𝐡_b𝐡_f(𝐗+𝐧_t)]⊙𝐜_t+𝐰_r𝐍_ap,
where ℋ _0 means there is no backscatter signal from the tag, and ℋ _1 means there is a tag echo.
According to Neyman-Pearson criterion <cit.>, the following detector can be obtained
Re{𝐲̃_ap (√(α)𝐰_r𝐡_b𝐡_f𝐗)^H }[ ℋ_1⩾; ℋ_0<; ] η,
where η is the detection threshold. The detection probability of the tag denoted as P_D is given by <cit.>
P_D=1/2erfc{erfc^-1( 2P_F) -√(γ _ap)},
where erfc( x ) =2/√(π)∫_x^∞e^-t^2dt is the complementary error function and P_F is the probability of false alarm. According to (<ref>), the tag detection probability P_D is a monotonically increasing function of γ _ap. Therefore, the SINR γ _ap of the echo signal represents the B-ISAC system's ability to detect the tag. In other words, the uplink communication capability of the tag scales directly with the tag detection capability.
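The mapping from echo SINR to detection probability can be evaluated directly; a small SciPy sketch (the false-alarm probability below is chosen arbitrarily):

```python
# P_D = (1/2) erfc( erfc^{-1}(2 P_F) - sqrt(gamma_ap) )
import numpy as np
from scipy.special import erfc, erfcinv

def detection_probability(gamma_ap_db, p_fa=1e-4):
    gamma = 10.0 ** (np.asarray(gamma_ap_db) / 10.0)
    return 0.5 * erfc(erfcinv(2 * p_fa) - np.sqrt(gamma))

print(detection_probability([0, 6, 12]))  # increases monotonically with SINR
```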
§ JOINT BEAMFORMING SCHEME FOR COMMUNICATION RATE OPTIMIZATION
In this section, we focus on the beamforming design for optimizing the communication rate of the UE. We consider the case where the AP has detected the tag and has obtained the channel information 𝐡_b and 𝐡_f. We study the problem of maximizing the communication rate of the UE under the constraints of the energy budget and the SINRs at the tag and AP. The SINR constraint at the tag ensures that the tag remains activated, while the SINR constraint at the AP ensures the detection probability and uplink communication rate of the tag.
The optimization problem is given by
( 𝒫 _1 ) maximize_ 𝐖 log _2( 1+γ _u)
subject to γ _t⩾γ _tth
γ _ap⩾γ _apth
Tr( 𝐖𝐖^H ) ⩽ P_T,
where γ _t⩾γ _tth sets a threshold γ _tth to ensure the tag is activated, and γ _ap⩾γ _apth sets a threshold γ _apth to ensure the uplink communication rate and keep the tag detectable. P_T is the total transmit power, and Tr( 𝐖𝐖^H ) ⩽ P_T is the total power budget constraint on the joint beamforming matrix.
Since only a single communication UE is considered, optimizing the rate is equivalent to optimizing the communication SINR γ _u. However, according to (<ref>), γ _u has a complicated fractional form. By introducing an auxiliary variable y, we can convert the fractional objective function into polynomial form via the quadratic transform <cit.>. The new objective function ℱ( 𝐖,y ) is given by (<ref>). Then the optimization problem can be expressed as
( 𝒫 _1.1) maximize_ 𝐖,y ℱ( 𝐖,y )
subject to γ _t⩾γ _tth
γ _ap⩾γ _apth
Tr( 𝐖𝐖^H ) ⩽ P_T,
where ℱ( 𝐖,y ) is a conditionally concave
function with respect to each variable given the other. Therefore, we can develop an alternating optimization method to solve this problem.
Update y: Given 𝐖, the optimization over the auxiliary variable y is an unconstrained convex problem, given as
( 𝒫 _1.1.1) maximize_ y ℱ( 𝐖,y ).
Its optimal solution can be obtained straightforwardly by setting ∂ℱ/∂ y=0. The optimal y^* is given by (<ref>).
Update 𝐖: Given y, the optimization problem for updating 𝐖 can be expressed as
( 𝒫 _1.1.2) maximize_ 𝐖=[𝐰_u,𝐰_t,𝐰_1,…,𝐰_N_t] ℱ( 𝐖,y )
subject to γ _t⩾γ _tth
γ _ap⩾γ _apth
Tr( 𝐖𝐖^H ) ⩽ P_T.
Note that the objective function is concave with respect to 𝐖. The main challenge lies in the non-convex constraints (<ref>) and (<ref>). According to (<ref>), the constraint
(<ref>) can be rewritten as
1/γ _tth|𝐡_f𝐰_t|^2⩾ |𝐡_f𝐰_u|^2+∑_i=1^N_t|𝐡_f𝐰_i|^2+σ _t^2.
By taking the square root of both sides of (<ref>), the constraint takes a second-order cone form, given by
√(1/γ _tth)Re{𝐡_f𝐰_t}⩾[ 𝐡_f𝐰_u; 𝐡_f𝐰_1; 𝐡_f𝐰_2; ⋮; 𝐡_f𝐰_N_t; σ _t; ] .
The original form of the left-hand side of (<ref>) after taking the square root is √(1/γ _tth)| 𝐡_f𝐰_t|. However, any phase rotation does not affect the absolute value, i.e., | 𝐡_f𝐰_t|= | 𝐡_f𝐰_te^jθ|. Thus, without changing the absolute value, we can always find a 𝐰_t that makes 𝐡_f𝐰_t positive and real. Therefore, the constraint (<ref>) can be rewritten as (<ref>).
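As a hedged sketch, this second-order-cone constraint can be written almost verbatim in a convex modeling tool such as CVXPY; the dimensions and channel values below are toy assumptions, not those of the simulation.

```python
# SOC form of the tag-activation constraint:
#   sqrt(1/gamma_tth) * Re{h_f w_t} >= || [h_f w_u; h_f W_s; sigma_t] ||
import numpy as np
import cvxpy as cp

N_t = 8
gamma_tth = 10 ** (15 / 10)   # 15 dB activation threshold (toy value)
sigma_t = 0.01
h_f = (np.random.randn(N_t) + 1j * np.random.randn(N_t)) / np.sqrt(2)

w_u = cp.Variable(N_t, complex=True)
w_t = cp.Variable(N_t, complex=True)
W_s = cp.Variable((N_t, N_t), complex=True)

lhs = np.sqrt(1 / gamma_tth) * cp.real(h_f @ w_t)
rhs = cp.norm(cp.hstack([h_f @ w_u, h_f @ W_s, sigma_t]))
tag_soc = lhs >= rhs          # convex constraint usable inside problem P1.1.2.1
```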
According to (<ref>), the constraint (<ref>) can be rewritten as
α/γ _apth|𝐰_r𝐡_b|^2
Tr( 𝐅𝐖𝐖^H ) ⩾[ √(α)|𝐰_r𝐡_b|σ _t; 𝐰_rσ_ap; ]^2 .
where 𝐅=𝐡_f^H𝐡_f.
Now, the main limitation in solving this problem is that constraint (<ref>) is not affine. To address this issue, we adopt a successive convex approximation (SCA) based method <cit.>. Given an iterate 𝐖^(n), the convex approximation of (<ref>) at 𝐖^(n) is given by
α/γ _apth|𝐰_r𝐡_b|^2 Tr( 𝐅𝐖^(n)𝐖^(n)H)
+ α/γ _apth|𝐰_r𝐡_b|^2Tr[ 𝐖^(n)H𝐅(𝐖-𝐖^(n))+𝐖^(n)T𝐅^T(𝐖-𝐖^(n))^* ]
⩾ [ √(α)|𝐰_r𝐡_b|σ _t; 𝐰_rσ_ap; ]^2 .
The convex approximation of problem ( 𝒫 _1.1.2 ) can then be written as follows,
( 𝒫 _1.1.2.1) maximize_ 𝐖 ℱ( 𝐖,y )
subject to (<ref>)
(<ref>)
Tr( 𝐖𝐖^H ) ⩽ P_T.
The SCA based algorithm for solving ( 𝒫_1.1.2.1) is summarized in Algorithm <ref>. Based on the above derivations, the optimal joint beamformer can be obtained by updating y and 𝐖 iteratively. The alternating joint beamforming design algorithm for communication rate optimization is summarized in Algorithm <ref>. With appropriate initialization of y and 𝐖 in the feasible space, we could iteratively update each variable until convergence.
§ NUMERICAL RESULTS
In this section, we present numerical results of the proposed beamforming schemes. We set the transmit and receive array with the same elements number, i.e., N_t =N_r=16. We assume that the noise power at the AP, tag, and UE are equal as σ _ap^2=σ _t^2=σ _u^2=-40 dBm. We use the line-of-sight (LOS) channel model in the simulation, which means that if the tag is at angle θ_k, the corresponding channel 𝐡_f is α_f𝐚(θ_k) and the channel 𝐡_b is α_b𝐛(θ_k). If the UE is at angle θ_j, the corresponding channel 𝐡_u is α_u𝐚(θ_j). 𝐚 and 𝐛 are the steering vector of the transmit and receive array respectively. α_f, α_b, and α_u are channel fading coefficients respectively.
First, we analyze the convergence of the proposed Algorithm <ref> and Algorithm <ref>. The convergence performance of the algorithms is presented in Fig. <ref>(a) and Fig. <ref>(b), respectively, showing the achievable communication rate versus the number of iterations under different settings. Algorithm <ref> converges after 10 iterations; in particular, after 5 iterations, the change in rate is already minor. Algorithm <ref> converges after 5 iterations; in particular, after 2 iterations, the change in rate is already minor. These simulation results fully demonstrate the quick convergence of the proposed algorithms.
Then, we evaluate the beampattern of the proposed joint beamforming scheme for communication rate optimization (J.B.C). We set the transmit power of the AP to P_T=0 dBm and set h_tu=0.5. We set the channels to 𝐡_f=0.8𝐚(π/4), 𝐡_b=0.8𝐛(π/4), and 𝐡_u=0.8𝐚(7π/10). The SINR threshold at the tag is set to γ_tth=15 dB to ensure the tag is activated. The SINR threshold at the AP is set to γ_apth=12 dB to make sure the tag can be detected and communicate with the AP. The beampattern of the proposed J.B.C scheme is presented in Fig. <ref>. J.B.C (Proposed, Overall Signal), J.B.C (Proposed, Communication Signal), J.B.C (Proposed, Tag Signal), and J.B.C (Proposed, Dedicated Probing Signal) are the beampatterns of the overall signal, the communication signal, the tag signal, and the dedicated probing signal produced by the proposed J.B.C scheme (solving problem ( 𝒫 _1 )), respectively. Orthogonal Beam is the fully orthogonal beamforming scheme. We can observe that the communication beam forms a high-gain beam in the communication UE direction and a notch in the tag direction. In order to maximize the UE communication rate, the tag signal and dedicated probing signal produce a high-gain beam in the tag direction and a notch in the communication direction.
The achievable communication rate with respect to the
transmit power under different settings is depicted in Fig. <ref>. We find that the communication performance of the proposed J.B.C scheme increases with the transmit power. We can also observe that the higher the SINR constraints, the lower the achievable communication rate, which reflects the power competition between the tag and the UE.
In this work, we only consider the UE communication performance optimization in B-ISAC systems. The design for more task modes of B-ISAC systems is studied in <cit.>. In addition, the design of B-ISAC systems with multiple UEs and multiple RF tags is also a very interesting topic, which we leave as our future work.
§ CONCLUSION
We proposed an integrated BackCom and ISAC system called the B-ISAC system in this work. We provided a theoretical analysis of the communication and sensing performance of the system. A joint beamforming scheme was designed to optimize the UE communication rate under the constraints of the energy budget and the SINRs at the tag and AP, ensuring the tag sensing and communication performance. Moreover, efficient algorithms were developed for solving the complicated optimization problem. Simulation results validate the effectiveness of the proposed algorithms and illustrate the trade-off between communication and sensing performance.
§ ACKNOWLEDGMENT
The work was supported in part by the National Natural Science Foundation of China under Grant 62388102, and the GuangDong Basic and Applied Basic Research Foundation under Grant 2022A1515010209. The corresponding author is Dr. Yuhan Dong.
00
Hassan2016 A. Hassanien, M. G. Amin, Y. D. Zhang, and F. Ahmad, “Signaling strategies for dual-function radar communications: An overview,” IEEE Aerosp. Electron. Syst. Mag., vol. 31, no. 10, pp. 36–45, Oct. 2016.
Zhao2022 Z. Zhao, X. Tang, and Y. Dong, “Cognitive waveform design for dual-functional MIMO radar-communication systems,” in Proc. IEEE Global Commun. Conf. (GLOBECOM), Dec. 2022, pp. 5607–5612.
Liu2023 F. Liu, et al., “Seventy years of radar and communications: The road from separation to integration,” IEEE Signal Process Mag., vol. 40, no. 5, pp. 106–121, Jul. 2023.
LiuX2020 X. Liu, T. Huang, N. Shlezinger, Y. Liu, J. Zhou, and Y. C. Eldar, “Joint transmit beamforming for multiuser MIMO communications and MIMO radar,” IEEE Trans. Signal Process., vol. 68, pp. 3929–3944, Jun. 2020.
Zhao2024 Z. Zhao, et al., “Joint beamforming scheme for ISAC systems via robust Cramér–Rao bound optimization,” IEEE Wireless Commun. Lett., vol. 13, no. 3, pp. 889–893, Jan. 2024.
Niu2019 J. -P. Niu and G. Y. Li., “An overview on backscatter communications,” J. Commun. Inf. Networks , vol. 4, no. 2, pp. 1–14, Jun. 2019.
Jiang2023 T. Jiang, et al., “Backscatter communication meets practical battery-free internet of things: A survey and outlook,” IEEE Commun. Surveys Tuts., vol. 25, no. 3, pp. 2021–2051, third quarter 2023.
Gala2023 D. Galappaththige, C. Tellambura, and A. Maaref., “Integrated sensing and backscatter communication,” IEEE Wireless Commun. Lett., vol. 12, no. 12, pp. 514–528, Dec. 2023.
Luo2023 H. Luo, U. Demirhan, and A. Alkhateeb., “ISAC with backscattering RFID tags:
Joint beamforming design,” to appear in IEEE Intern. Commun. Conf. (ICC), arXiv:2401.09761.
Biguesh2006 M. Biguesh, and A. B. Gershman, “Training-based MIMO channel
estimation: A study of estimator tradeoffs and optimal training signals,” IEEE Trans. Signal Process., vol. 54, no. 3, pp. 884–893, Feb. 2006.
Neyman1992 J. Neyman and E. S. Pearson, “On the problem of the most efficient tests of statistical hypotheses,” Philosophical Trans. the Royal Society of London, vol. 231, no. 694–706, pp. 289–337, 1992.
Tang2022 B. Tang and P. Stoica, “MIMO multifunction RF systems: Detection performance and waveform design,” IEEE Trans. Signal Process., vol. 70, pp. 4381-4394, Aug. 2022.
Shen2018 K. Shen and W. Yu, “Fractional programming for communication systems-part I: Power control and beamforming,” IEEE Trans. Signal Process., vol. 66, no. 10, pp. 2616–2630, May 2018.
Scutari2014 G. Scutari, F. Facchinei, P. Song, D. P. Palomar and J. S. Pang, “Decomposition by partial linearization: Parallel optimization of multi-agent systems,” IEEE Trans. Signal Process., vol. 62, no. 3, pp. 641–656, Feb. 2014.
Zhaoz2024 Z. Zhao, Y. Dong, T. Wei, X.-P. Zhang, X. Tang, and Z. Liu “B-ISAC: Backscatter integrated sensing and communication for 6G IoE applications,” arXiv:2407.19235.
|
http://arxiv.org/abs/2409.02725v1 | 20240904135948 | Pre-training data selection for biomedical domain adaptation using journal impact metrics | [
"Mathieu Laï-king",
"Patrick Paroubek"
] | cs.CL | [
"cs.CL",
"I.2.7"
] |
Pre-training data selection for biomedical domain adaptation using journal impact metrics
Mathieu Laï-king, Patrick Paroubek
September 2024
==========================================================================================
§ ABSTRACT
Domain adaptation is a widely used method in natural language processing (NLP) to improve the performance of a language model within a specific domain. This method is particularly common in the biomedical domain, which sees regular publication of numerous scientific articles. PubMed, a significant corpus of text, is frequently used in the biomedical domain. The primary objective of this study is to explore whether refining a pre-training dataset using specific quality metrics for scientific papers can enhance the performance of the resulting model. To accomplish this, we employ two straightforward journal impact metrics and conduct experiments by continually pre-training BERT on various subsets of the complete PubMed training set; we then evaluate the resulting models on biomedical language understanding tasks from the BLURB benchmark. Our results show that pruning using journal impact metrics is not efficient. However, we also show that pre-training using fewer abstracts (but with the same number of training steps) does not necessarily decrease the resulting model's performance.
§ INTRODUCTION
Advances in deep learning for natural language processing (NLP) in recent years have enabled transfer learning to develop <cit.>, particularly since the creation of Transformers <cit.>.
One type of transfer learning aims to start with a pre-training phase where the model learns the general language structure and then a second phase where the model can be fine-tuned for a specific task. In the context of deep learning for NLP, this method avoids re-training a model from scratch for each new task, starting with a model that already has general language knowledge. These pre-trained models generally use a large corpus of text.
A specialized domain, such as finance or the biomedical domain, may contain numerous tasks. In the case of language, a specialized domain has a specific vocabulary containing terms more rarely found in general texts. We can observe this phenomenon when looking at tokens produced by a biomedical tokenizer against a general tokenizer <cit.>. Moreover, tasks may require domain-specific knowledge not found in general sources. So, to improve the performance of a model previously trained on a general domain to a specific domain, it is interesting to use a corpus specific to the domain to which we wish to adapt our model.
Most of the data used for pre-training in the biomedical field are research articles and papers that can be either abstracts, full texts, or a combination of both. This data generally originates from large public databases such as PubMed or PubMedCentral (for full-text articles). However, to our knowledge, no study has examined selecting subsets of these large databases for pre-training using metrics specific to scientific papers. That leads us to our research questions: Can a language model be adapted to the biomedical domain by efficiently selecting scientific documents in the pre-training data while maintaining or improving its performance? Is the journal impact factor a good metric for selecting scientific documents for pre-training?
This paper presents our experiments on adapting the pre-trained BERT-base model to the biomedical domain. We use the PubMed January 2024 baseline corpus and define different subset configurations using journal impact metrics: the h-index <cit.> and the Scimago Journal Rank, or SJR <cit.>. We then perform continual pre-training from the BERT-base model <cit.> and evaluate it on several tasks from the BLURB benchmark <cit.>.
§ RELATED WORK
§.§ Domain-adaptive and domain-specific pre-training for the biomedical domain
The adaptation of neural models to the biomedical domain has been extensively studied in recent years, focusing on BERT-type models and, more recently, large generative language models. We distinguish two main categories regarding the pre-training data:
* Mixed-domain pre-training, where the model has seen data from different domains during the pre-training: it can either be a model that has been pre-trained on a general corpus and then trained on in-domain data or a model trained simultaneously on data from multiple domains, such as biomedical and clinical for example <cit.>.
* Domain-specific pre-training, where the model only sees data from a single domain during pre-training. The hypotheses are that by using a domain-specific vocabulary, the models learn more accurate representations of specific in-domain terms (that would be divided by the sub-word tokenization with a general corpus) and that it reduces noise introduced by text completely unrelated to the domain <cit.>.
§.§ Pre-training data quality for large language models
Several works focus on selecting sequences using quality metrics for pre-training Transformer models in the general domain, particularly with the advent of large language models and the evolution of the size of pre-training datasets for these models <cit.>.
The adaptation of large language models using scientific articles has been widely studied. However, only a few works have emphasized the quality of the scientific articles used. For the Galactica model <cit.>, the authors only mention applying "several quality filters, including excluding papers from journals with certain keywords and also excluding papers with a low journal impact factor". Most other models that used PubMed or PubMedCentral for pre-training do not mention any specific selection of data at the document level; most focus on preprocessing steps at the content level (bibliography references, authors, figures and tables, etc.) when dealing with full-text articles <cit.>.
§ METHODS
§.§ Methodology
We use a methodology similar to <cit.>, with some small modifications:
let D be a large dataset containing documents and ξ a metric assigning a score to each document. We build a subset P_cξ by adding the instances that fit our selection criterion c:
P_cξ={d_i∈ D | c_0ξ≤ξ(d_i)) ≤ c_1ξ}
Where c_0ξ and c_1ξ are the lower and upper bound for the criteria c and the metric ξ. For each metric, we consider two selection criteria: keeping top or middle part of the distribution of the metric[we do not use the bottom part because in our case, for the SJR metric, more than 25% of the dataset had the same value : 0, so the percentiles for the bottom part would be 0 but include more than 25% of the corpus] of D as the data to be kept. This serves as verifying if the model learns better with high quality documents (defined by the metric, for our metrics, higher is better). We keep either 25% or 50% of the documents in D. So for instance, if we take the 25 % of the middle part of the distribution for the metric ξ, we should compute the 37.5 % and 62.5 % percentiles with respect to metric ξ, which corresponds to c_0ξ and c_1ξ, and keep the documents between these two percentiles.
Then, we tokenize each document in the subset, and we concatenate them into sequences of length equal to the model's context length. This differs from <cit.> as we do the filtering before tokenization (because our metrics are applied on a document, not on a sequence of tokens). These sequences are then used to pre-train a model. Moreover, our metrics do not rely on a reference language model.
The goal is then to pre-train a model on a subset of the whole training set while retaining or improving the model's performance.
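A sketch of this subset construction (percentile bounds computed with NumPy; document scores assumed precomputed) is:

```python
# Build P_c^xi: keep the top or middle fraction of documents according to a
# journal-impact score (sketch).
import numpy as np

def select_subset(docs, scores, fraction=0.25, part="top"):
    scores = np.asarray(scores)
    if part == "top":
        lo, hi = np.percentile(scores, 100 * (1 - fraction)), np.inf
    else:  # middle slice centred on the median, e.g. 37.5%-62.5% for 25%
        lo = np.percentile(scores, 50 - 100 * fraction / 2)
        hi = np.percentile(scores, 50 + 100 * fraction / 2)
    return [d for d, s in zip(docs, scores) if lo <= s <= hi]
```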
§.§ Pre-training corpus
We use the PubMed Baseline corpus comprising all article abstracts deposited on the PubMed database until January 2024. Using PubMed metadata, we filter out abstracts that are not in English, abstracts whose text is not available, and abstracts whose ISSN journal identifier is not present (we filter this to have enough abstracts with a score as our pruning metrics are based on journal impact). After filtering, the total corpus is comprised of 15.9B tokens.
We did not perform a pre-training experiment using the non-filtered PubMed set because we did not have enough articles with journal identifiers to obtain convenient metric percentiles. Still, we expect this filtering to already impact the overall quality of the corpus.
§.§ Quality metrics
The nature of the datasets used for general model training (by which we mean models that are not domain-specific) differs from those used in the biomedical field. They are generally huge datasets comprising texts extracted from the Internet on various sites. In our case, these are research articles from the same database. This presupposes a text quality that is adequate in certain respects (generally correct syntax and formal language, unlike texts found on the Internet).
We wanted to use metrics specific to scientific articles that have meaning for scientific article readers. So, we decided to use journal impact metrics. We used the metadata available on PubMed. This type of metric can provide insight into the probable impact that a paper can have but does not necessarily ensure scientific quality. However, we believe filtering with impact metrics in a large corpus can help reduce the noise, help the model learn biomedical language, and learn biomedical knowledge more efficiently. We use the h-index <cit.> and the SJR <cit.> as the data is publicly available on the Scimago website[<https://www.scimagojr.com/journalrank.php>]. For comparison, we also perform a random score assignation on all papers from the dataset; we do not perform multiple random assignations to limit the compute cost.
We computed the percentiles for SJR and h-index; as the SJR index had zero values at the 12.5% and 25% percentiles, we did not perform all the pre-trainings for the mid criteria and only considered the 25% subset. This is also why we did not consider the bottom percentiles. We also did not perform the pre-training on the complete set because of time and resource constraints, but we plan to do so in future work.
§.§ Pre-processing
We tokenize the whole dataset and concatenate the text of the different abstracts into sequences of length 512 tokens (the maximum sequence length for the model we use, BERT <cit.>). We keep 5% of this set as validation data.
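A simplified sketch of this packing step, assuming a Hugging Face tokenizer (the authors' exact handling of special tokens and leftover tokens may differ):

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

    def pack_sequences(abstracts, seq_len=512):
        """Tokenize abstracts and concatenate the ids into fixed-length
        blocks for MLM pre-training; trailing tokens are dropped."""
        ids = []
        for text in abstracts:
            ids.extend(tokenizer(text)["input_ids"])
        n_blocks = len(ids) // seq_len
        return [ids[i * seq_len:(i + 1) * seq_len] for i in range(n_blocks)]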
§.§ Model and pre-training
We use the original BERT-base model <cit.>, continue pre-training on the defined datasets with masked language modeling, and compare the resulting models. For each pre-training (on each subset), we fix a shared global number of steps so that each model sees the same quantity of tokens: we select the number of steps as the total number needed for one epoch on the entire PubMed corpus. For the runs with the subsets, the model will run multiple epochs until it reaches the total number of steps, with data shuffling between epochs (for example, two epochs for the run where we take the top 50% of PubMed abstracts with respect to h-index).
We train with a sequence length of 512 and a batch size of 8192[We perform gradient accumulation and data parallelism to get this batch size.], which gives us a total of 3598 steps. We use a linear schedule with 10% warmup and a peak learning rate of 1e-4. For the other hyperparameters, we follow the original BERT paper. We train our different models on 2 NVIDIA A100 GPUs.
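The paper's scripts are not shown here; purely as an illustration, the stated hyperparameters map onto the Hugging Face Trainer roughly as follows (the split of the global batch across devices and accumulation steps is our assumption: 64 x 64 x 2 GPUs = 8192):

    from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
    collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

    args = TrainingArguments(
        output_dir="bert-pubmed-subset",
        max_steps=3598,                   # one epoch over the full corpus
        per_device_train_batch_size=64,   # x 64 accumulation x 2 GPUs = 8192
        gradient_accumulation_steps=64,
        learning_rate=1e-4,               # peak LR
        warmup_ratio=0.1,                 # 10% linear warmup
        lr_scheduler_type="linear",
    )
    # `packed_dataset` (assumed) holds the 512-token blocks built above
    trainer = Trainer(model=model, args=args, data_collator=collator,
                      train_dataset=packed_dataset)
    trainer.train()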
§.§ Evaluation and fine-tuning
We evaluate the produced pre-trained models on some of the datasets from the BLURB benchmark <cit.>. We also re-evaluate the BERT-base model to ensure a consistent evaluation with our fine-tuning scripts. We excluded the PICO and Sentence Similarity tasks (EBM-PICO <cit.> and BIOSSES <cit.>), for which we had trouble reproducing, consistently across runs, results similar to those obtained in the BLURB paper, as the authors did not share any code to perform the fine-tuning and evaluation. So, we are left with the following evaluation tasks:
* Named entity recognition (NER): BC5-chem & BC5-disease <cit.>, BC2GM <cit.>, JNLPBA <cit.> and NCBI-disease <cit.>. We evaluate the models for NER tasks using the entity-level F1 score (see the sketch after this list). We model the entities using BIO tags.
* Relation extraction: ChemProt <cit.>, DDI <cit.>, GAD <cit.>. We evaluate the models for relation extraction using the micro F1 score. We use entity dummyfication with start and end tags and use the [CLS] token to classify relations.
* Document classification: HoC <cit.>, for which we measure the micro F1 score.
* Question answering: PubMedQA <cit.> and BioASQ Task 7b <cit.>. We evaluate these tasks using accuracy.
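To make the NER scoring concrete, a minimal sketch of the entity-level F1 computation using the seqeval package (the toy BIO sequences are ours, purely illustrative):

    from seqeval.metrics import f1_score

    # gold and predicted BIO tag sequences for two toy sentences
    y_true = [["B-Chemical", "I-Chemical", "O"], ["B-Disease", "O", "O"]]
    y_pred = [["B-Chemical", "I-Chemical", "O"], ["O", "O", "O"]]

    # one of two gold entities recovered: precision 1.0, recall 0.5, F1 ~ 0.67
    print(f1_score(y_true, y_pred))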
§ RESULTS AND DISCUSSION
To limit random effects, we perform the fine-tuning multiple times with different random seeds, as described in the BLURB paper: using five seeds for all datasets except for BioASQ and PubMedQA, for which we use ten seeds (because they are smaller in size). We then report the average performance across the different seeds for each dataset in Table <ref>.
§.§ Improvement against a non-biomedical model
All models trained on biomedical data perform better than the base model trained only on general-domain data. However, for a fair comparison, we should train it for the same number of steps on non-biomedical data.
§.§ Are journal impact metrics important for the model?
We obtain the best results in micro and macro averages for the model trained on the top 50% of the entire set with respect to the h-index of the journal in which abstracts have been published. Overall, the h-index metric performs better than SJR, which may be because the SJR percentile values are very close to each other, so the quality differences are less important.
However, the performance differences are low when we compare to the SJR metric or even when selecting abstracts randomly, regardless of the proportion of abstracts we keep. So, journal impact metrics do not seem important when selecting pre-training data from a corpus of scientific articles. We should then find more appropriate metrics to define the quality of a single abstract, or test our approach on a full-text article corpus (so that the impact of a single document is higher).
§.§ Is it better to pre-train a model using more abstracts?
If we compare training with 25% of the data against 50%, the larger subsets globally yield better performances (except for the random selection), but these differences are not significant. So, it would be interesting to perform further pre-training experiments using different subset sizes to investigate which number of documents is optimal for the domain adaptation.
§ CONCLUSION
This paper presents our early experiments on selecting the pre-training data for the biomedical domain. We show that the journal impact metrics are not better than the random selection at a fixed number of training steps. We also observe that reducing the number of abstracts in the training set does not necessarily decrease the final model performance and show the need to investigate how many documents we need to pre-train a model without losing performance.
Further directions include finding better metrics (or combinations of metrics) to assess the quality of a document in the pre-training corpus, investigating metrics at a different level (at the corpus level using various mixtures of biomedical domains), and using a corpus of full-text articles.
§ ACKNOWLEDGMENTS
This project was provided with computer and storage resources by GENCI at IDRIS thanks to the grant 20XX-AD011014707 on the supercomputer Jean-Zay's A100 partition.
|
http://arxiv.org/abs/2409.02794v1 | 20240904150932 | Josephson diode effect in one-dimensional quantum wires connected to superconductors with mixed singlet-triplet pairing | [
"Abhiram Soori"
] | cond-mat.supr-con | [
"cond-mat.supr-con",
"cond-mat.mes-hall"
] | |
http://arxiv.org/abs/2409.03585v1 | 20240905144221 | Extragalactic Stellar Tidal Streams: Observations meet Simulation | [
"Juan Miro-Carretero",
"Maria A. Gomez-Flechoso",
"David Martinez-Delgado",
"Andrew P. Cooper",
"Santi Roca-Fabrega",
"Mohammad Akhlaghi",
"Annalisa Pillepich",
"Konrad Kuijken",
"Denis Erkal",
"Tobias Buck",
"Wojciech A. Hellwing",
"Sownak Bose"
] | astro-ph.GA | [
"astro-ph.GA"
] |
Departamento de Física de la Tierra y Astrofísica, Universidad Complutense de Madrid, Plaza de las Ciencias 2, E-28040 Madrid, Spain
Leiden Observatory, Leiden University, P.O. Box 9513, 2300 RA Leiden, The Netherlands
Instituto de Física de Partículas y del Cosmos (IPARCOS), Fac. CC. Físicas, Universidad Complutense de Madrid, Plaza de las Ciencias, 1, E-28040 Madrid, Spain
Centro de Estudios de Física del Cosmos de Aragón (CEFCA), Unidad Asociada al CSIC, Plaza San Juan 1, 44001 Teruel, Spain
ARAID Foundation, Avda. de Ranillas, 1-D, E-50018 Zaragoza, Spain
Instituto de Astrofísica de Andalucía, CSIC, Glorieta de la Astronomía, E-18080, Granada, Spain
Institute of Astronomy and Department of Physics, National Tsing Hua University, Kuang Fu Rd. Sec. 2, Hsinchu 30013, Taiwan
Center for Informatics and Computation in Astronomy, National Tsing Hua University, Kuang Fu Rd. Sec. 2, Hsinchu 30013, Taiwan
Lund Observatory, Division of Astrophysics, Department of Physics, Lund University, Box 43, SE-221 00 Lund, Sweden
Max Planck Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg, Germany
Department of Physics, University of Surrey, Guildford GU2 7XH, UK
Universität Heidelberg, Interdisziplinäres Zentrum für Wissenschaftliches Rechnen, Im Neuenheimer Feld 205, D-69120 Heidelberg, Germany
Universität Heidelberg, Zentrum für Astronomie, Institut für Theoretische Astrophysik, Albert-Ueberle-Straße 2, D-69120 Heidelberg, Germany
Center for Theoretical Physics, Polish Academy of Sciences, Al. Lotnik ow 32/46, 02-668 Warsaw, Poland
Institute for Computational Cosmology, Department of Physics, Durham University, South Road, Durham DH13LE, United Kingdom
According to the well established hierarchical framework for galaxy evolution, galaxies grow through mergers with other galaxies and the ΛCDM cosmological model predicts that the stellar halos of galaxies are rich in remnants from minor mergers. The Stellar Streams Legacy Survey has provided a first release of a catalogue with a statistically significant sample of stellar streams in the Local Universe that can be used to study minor mergers and test the cosmological models.
The main objective is to compare the results of the observations of stellar tidal streams with the predictions of state-of-the-art cosmological simulations regarding the formation of stellar streams up to a redshift z < 0.02, according to the ΛCDM model.
We use the predictions of the cosmological simulations Copernicus Complexio, TNG50 of the IllustrisTNG project and Auriga to generate 225 mock-images of nearby halos at a distance of 70 Mpc, and search for stellar streams. We compare the obtained stream frequency and characteristics with those obtained from the Stellar Streams Legacy Survey.
We find good agreement between the results of analysing real images from the Dark Energy Survey and mock-images from cosmological simulations. We obtained predictions for the detection rate of stellar streams to a surface brightness limit of 35 mag arcsec^-2.
The cosmological simulations predict that for a surface brightness limit of 32 mag arcsec^-2 a frequency of almost 70% in the detection of streams around galaxies can be achieved.
Extragalactic Stellar Tidal Streams: Observations meet Simulation
Juan Miró-Carretero 1,2, Maria A. Gómez-Flechoso 1,3, David Martínez-Delgado4,5,6ARAID Fellow, Andrew P. Cooper 7,8, Santi Roca-Fàbrega 9, Mohammad Akhlaghi 4, Annalisa Pillepich 10, Konrad Kuijken 2, Denis Erkal 11, Tobias Buck 12,13, Wojciech A. Hellwing 14, Sownak Bose 15
===========================================================================================================================================================================================================================================================================================
§ INTRODUCTION
According to the well established hierarchical framework for galaxy evolution, galaxies grow through mergers with other galaxies. These mergers can be major mergers, when the merging galaxies are of similar stellar mass (a mass ratio > 1/3 is a generally accepted threshold, see e.g. <cit.>), and minor mergers, when the host galaxy accretes a dwarf galaxy in its halo.
The Lambda Cold Dark Matter (ΛCDM) cosmological model predicts that the stellar halos of galaxies are rich in remnants from minor mergers that, in the Local Universe, on the basis of observations and simulations, are expected to be more frequent than major mergers <cit.>.
The detection and study of extragalactic stellar tidal streams contributes to augment the stream census, mostly built so far from streams detected in the Milky Way and Local Volume <cit.>, and helps us to contrast their frequency and characteristics with the predictions of the ΛCDM model on a statistically sound basis (the Local Volume is considered here to be a spherical region with a radius of 11 Mpc around the Milky Way or up to a radial velocity of redshift of z < 0.002).
This motivates the search for streams beyond the Local Volume, up to a distance for which surveys are available with the required depth <cit.>. Due to their low surface brightness, much less data can be gathered for each individual stream in a survey of distant hosts than for streams in our Galaxy or in the Local Volume. In particular, star-by-star photometric and kinematic measurements are not possible for streams at these distances.
A number of relevant surveys of tidal features beyond the Local Volume have been reported in the last decade: <cit.>. <cit.> present the results of visual inspection of a sample of 838 edge-on galaxies using images from three surveys: SDSS Stripe-82, Subaru HSC and DESI (DECaLS, MzLS, BASS). This study, like our present work, was motivated by the goal of constructing a deep photometric sample and obtaining better statistics of tidal structures in the Local Universe for comparison with cosmological simulations. The definition of tidal features used in that study also includes disc deformations and tidal tails, typical of major mergers. Their results will be discussed further in Section <ref>.
Simulations are required to interpret the observations and to infer the physics that determines the origin and evolution of streams. The commonly used analytical/semi-empirical methods to study stellar stream formation in the Local Universe cannot be used when the observational data are scarce and when the central system's mass distribution evolves with time. So, to understand the formation of the unresolved stellar streams at large distances, accounting for the full temporal evolution, cosmological simulations are needed. However, cosmological simulations also have disadvantages. They cannot reach the high spatial and mass resolution of bespoke models. Furthermore, the impact of uncertainties in sub-grid models of baryonic astrophysics (e.g. star formation and supernova feedback) is poorly known on the very small scales probed by tidal streams. Cosmological simulations nevertheless provide a powerful means not only to understand the origin and evolution of streams as observed in the surveys above, but also to predict their photometric characteristics.
State-of-the-art models are now detailed enough to be constrained by stream observations, and can also inform assessments of the design and completeness of the observations themselves.
We already know that future surveys will need to be able to produce deeper images than available today, if new extragalactic streams, presumed to exist in great numbers, are to be discovered.
However, it remains unclear what critical image depth is needed to significantly increase the number of known streams. This is important to motivate and plan for surveys such as ESA's space mission Euclid <cit.> and the Vera C. Rubin Observatory's Legacy Survey of Space and Time <cit.>.
In the context of minor mergers, predictions for streams and their progenitor galaxies are close to the limit of the capabilities of current large-volume cosmological simulations, and the robustness of those predictions has not yet been explored in detail. The use of cosmological simulations to plan for and interpret new surveys therefore has to proceed in tandem with their validation against existing extragalactic stream data, mostly at surface brightness limits brighter than ∼ 29 mag arcsec^-2 <cit.>.
The use of cosmological simulations to study tidal features has also increased significantly in recent times <cit.>.
Relevant work on stream detectability using mock-images from cosmological simulations is reported in <cit.>. The authors have inspected surface brightness maps generated from 30 Auriga project simulations <cit.> of Milky Way-like galaxies looking for the brightest streams. They report that no streams have been detected in images with a surface brightness limit brighter than 25 mag arcsec^-2. Their stream detection frequency increases significantly between 28 and 29 mag arcsec^-2. They find a correlation between the infall time and infall mass of the stream progenitors, such that more massive progenitors tend to be accreted at later times.
<cit.> report on a theoretical investigation of the extended diffuse light around galaxies and galaxy groups by visually inspecting mock-images produced using the NEWHORIZON cosmological simulations. This is carried out on a sample of 37 simulated objects at redshifts z = 0.2, 0.4, 0.6 and 0.8, spanning a stellar mass range of 10^9.5 < M_⋆ < 10^11.5M_⊙.
Through production of surface brightness maps at different surface brightness limits, they predict the fraction of tidal features that can be expected to be detected at different limiting surface brightnesses.
<cit.> identified and classified tidal features in LSST-like mock-images from four sets of hydrodynamical cosmological simulations (NEWHORIZON, EAGLE, IllustrisTNG and Magneticum). These features comprise streams/tails, shells, plumes or asymmetric stellar halos and double nuclei, and as such do not distinguish between minor and major mergers as the origin of such features. The results of this previous work and those presented in the preceding paragraphs will be discussed in more detail in Section <ref>.
As in this work, the works by <cit.> and <cit.> rely on visual inspection of mock-images from cosmological simulations. However, they focus on the detection of tidal features and tidal tails, while we focus our analysis on the detection and characterisation of remnants of minor mergers, low surface brightness features that are of an accreted origin. One important conclusion of <cit.>, with which we concur, is that a higher level of domain knowledge is required to perform robust visual classifications of tidal features (more so than to separate spiral and elliptical galaxies, for example). This work follows our previous surveys to detect stellar streams in images from the DESI Legacy Surveys <cit.>; thus, the inspection of mock images in this paper benefits from our experience gathered from working with comparable observational images.
In this work we use the term stellar tidal streams to refer to the remnants of minor mergers, in line with the nomenclature used in <cit.>[great circles, hereinafter referred to as circles, are streams that result from satellites along mildly eccentric orbits, with an arc-like shape, sometimes featuring complete loops around the host, but in most cases (in our sample) seen as covering only a small part of a loop; umbrellas, structures often appearing on both sides of the host galaxy, displaying an elongated shaft ending in the form of a shell (sometimes only the shells are visible) resulting from satellites that were on more eccentric, radial orbits; giant plumes, hereinafter referred to as plumes, structures appearing to shoot out of the host, generally for quite a long distance].
Our work focuses on stellar tidal streams in the Local Universe up to a distance of 100 Mpc (redshift z < 0.02).
We consider as stellar tidal streams only those low surface brightness (LSB) features that are of an accreted origin, whatever their apparent morphology (shells, circles, plumes, etc.); as we will discuss later in the paper, the apparent morphology is strongly dependent on the line of sight of the observation. We can broadly characterise stellar tidal streams as LSB structures in the halo of galaxies, at distances between ∼ 20 and 120 kpc from the host centre and with surface brightness fainter than ∼ 25 mag arcsec^-2. Stellar tidal streams are thus a particular case of LSB structures and are significantly (several mag arcsec^-2) fainter than tidal tails, another type of LSB feature resulting primarily from major mergers <cit.>.
The results from observations of the Dark energy Survey (DES) presented in <cit.> allow for a direct, quantitative comparison of the abundance and characteristics of stellar tidal streams in the Local Universe with the predictions from state-of-the-art cosmological simulations based on the ΛCDM paradigm.
In particular, we can compare statistics derived from the observed stream population (for example, the number of stream detections at a given surface brightness limit, or the distribution of photometric observables for detected streams) with those predicted by cosmological simulations. To do this, we obtain predictions of stream formation from three cosmological simulations: Copernicus Complexio <cit.>, Illustris TNG50 <cit.> and Auriga <cit.>. We have carried out this work in the context of the Stellar Stream Legacy Survey <cit.>, whose main objective is to perform a systematic survey of stellar tidal streams in a parent galaxy sample of ∼ 3200 nearby galaxies using images from the recently completed DESI Legacy Survey imaging surveys. Examples of stellar streams detected in the SSLS can be seen in Figure <ref>.
A catalog of streams from the first batch of galaxies in this survey is presented in <cit.>.
In this paper, we compare the stellar streams in a sample of galaxies observed by the DES survey with streams in mock images derived from the simulations listed above, for a matched sample of hosts. We compare the detection frequency and photometric characteristics measured in both samples and discuss the results. In Section <ref> we introduce the main characteristics of the cosmological simulations. The selection of the halos from the simulations to be analysed is presented in Section <ref>. Section <ref> is devoted to the process of generating mock images. In Section <ref> we present the predicted detectability of streams at different surface brightness limits. The results of the comparison are discussed in Section <ref> and the summary, conclusions and outlook are given in Section <ref>.
§ COSMOLOGICAL SIMULATIONS
This work makes use of several different cosmological simulations. An overview of the available cosmological simulations, as well as the underlying tools and modelling paradigms, can be found in <cit.> and as part of the AGORA collaboration <cit.>. For our work, we have selected three simulation sets, each belonging to one of the broad categories in which the cosmological simulations are classified at the highest level:
* Volume simulations produce large, statistically complete samples of galaxies
but typically do not resolve spatial scales smaller than ∼100 pc. Physical processes on scales smaller than the explicit hydrodynamical scheme, such as star formation and feedback, are incorporated via semi-analytical `sub-grid' models.
* Zoom-in simulations produce smaller samples of galaxies with a higher spatial and mass resolution and thereby model baryonic processes on smaller scales.
* Semi-analytical simulations are the result of a combination of numerical dark matter-only simulations, and analytic models for the prescription of baryonic physics. They are computationally much more efficient than the above categories, at the cost of self-consistency in the dynamics of the baryonic component.
For our comparison with the observational data we have chosen one state-of-the-art simulation suite of each of the types listed above.
§.§ Copernicus Complexio
The Copernicus Complexio <cit.> is a ΛCDM cosmological N-body simulation post-processed with a semi-analytic galaxy formation model and the `Stellar Tags in N-Body Galaxy Simulations' (STINGS) particle tagging technique <cit.>.
COCO provides both high mass resolution and an approximate analogue of the Local Volume (a high-resolution spherical region of radius ∼25 Mpc, with density slightly lower than the cosmic mean, embedded in a lower-resolution box of 100 Mpc/side).
The Galform semi-analytic model of <cit.> was used to predict the evolution of the baryonic component in each dark matter halo. This model is calibrated to a range of low- and high-redshift observables, including optical and near-IR luminosity functions, the HI mass function, and the relationship between the masses of bulges and central supermassive black holes. The 6-dimensional phase space of each single-age stellar population formed in the model is mapped to an individual subset of dark matter particles using the STINGS technique. These models will be presented in detail in a future publication (Cooper et al. in prep.). The specific characteristics of the COCO simulations are listed in Table <ref>.
§.§ Illustris TNG50
IllustrisTNG is a suite of large-volume, cosmological, magnetohydrodynamical simulations run with the moving-mesh code AREPO (Springel 2010). The characteristics of the IllustrisTNG50 simulations <cit.> are listed in Table <ref>:
The IllustrisTNG50 simulation includes a comprehensive model for galaxy formation physics, which is able to realistically follow the formation and evolution of galaxies across cosmic time <cit.>. Each IllustrisTNG50 simulation self-consistently solves the coupled evolution of dark matter, cosmic gas, luminous stars, and supermassive black holes from a starting redshift of z=127 to the present day, z=0. We select the TNG50-1 simulation run and the snapshots at z < 0.02.
§.§ Auriga
The Auriga simulations <cit.> are a set of cosmological zoom-in magnetohydrodynamical simulations, also carried out with the AREPO code. Auriga re-simulates at higher resolution a sample of halos selected by the EAGLE project <cit.>. From the available simulations in the Auriga portal[<https://wwwmpa.mpa-garching.mpg.de/auriga/data.html>] we use all 30 halos in the Original/4 series.
§ HALO SELECTION
In order to allow for a consistent comparison between the cosmological simulations and the DES image sample, we selected samples of simulated halos in a comparable range of both stellar mass and halo mass.
To determine the range of host stellar mass in the DES sample, we construct an empirical relation between the absolute magnitude of the galaxies in the B-band, M_B, available from the HyperLeda database[<http://leda.univ-lyon1.fr/>] <cit.>, and their stellar mass, M_⋆, as determined by the Spitzer Survey of Stellar Structure in Galaxies <cit.>.
We find the following relation:
log_10 (M_* / M_⊙) = - 0.396 M_B + 2.21
As shown in Figure <ref>, this empirical relation is linear over the magnitude range of the DES host sample, with small scatter (generally less than 0.2 in log_10 (M_* / M_⊙), bar a few outliers).
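For illustration, the relation translates directly into code (the coefficients are from the fit above; the example magnitude is ours):

    import numpy as np

    def stellar_mass_from_MB(M_B):
        """Stellar mass (M_sun) from the B-band absolute magnitude,
        using the empirical fit log10(M*/Msun) = -0.396 M_B + 2.21."""
        return 10 ** (-0.396 * np.asarray(M_B) + 2.21)

    print(stellar_mass_from_MB(-20.3))  # ~ 1.8e10 M_sun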
The histogram in Figure <ref> shows the stellar mass distribution of the DES sample and of the selected halos in the COCO, IllustrisTNG50 and Auriga simulations.
The average stellar mass of the DES sample is log_10 M_⋆/M_⊙ = 10.00 and the standard deviation σ = 0.38.
While the average of the distributions is not the same, there is sufficient overlap between the DES sample stellar mass range and the stellar mass range of the simulated halos selected for the comparison.
Figure <ref> shows the halo mass distribution of the selected halos in the COCO, IllustrisTNG50 and Auriga simulations. Here all the simulations overlap and the average values are close to one another.
In this stellar mass range, COCO galaxies occupy a broader range of halo masses than IllustrisTNG50 galaxies.
§ MOCK IMAGES
To compare the predictions of cosmological simulations with the observations from the DES sample, we have generated mock images from snapshots of the simulations described in Section <ref> in the redshift range 0 < z < 0.02.
The continuous stellar mass density field in all these simulations is represented by discrete tracers, called star particles[In the COCO simulation, stellar mass is associated with a subset of the collisionless particles in post-processing, rather than an independent particle species in the original simulation, but the principle is the same from the point of view of our analysis; more details are given in <cit.>.]. The stellar mass associated with each star particle corresponds to a stellar population with a single age and metallicity. As in any N-body realization of a density field, each particle notionally corresponds to an irregular volume of phase space centered on the location of the particle.
We have applied the following transformations to the properties of the star particles from the simulation snapshots in order to recover the observables that can be identified in real images:
* Expansion of the discrete star particles into an approximation of the implied continuous 3-dimensional stellar mass distribution, by convolution with an adaptive smoothing kernel;
* Projection of the continuous distribution of stellar mass into a 2-dimensional plane. The orientation of the central galaxy relative to the observer's line of sight is a parameter of our method, and can be either random or specific (for example, to view the galaxy face-on or edge-on);
* Conversion of stellar mass density to luminosity density, by convolution of an SED appropriate to the age and metallicity of each particle with a specific photometric bandpass.
We use the open source tool pNbody[<https://obswww.unige.ch/~revaz/pNbody/>] <cit.> to produce mock images by implementing the transformations above.
Figure <ref> shows, as an example, the r-band surface brightness map produced by processing one of the COCO galaxies with pNbody.
The contour lines identify the isophotes for an intuitive view of the possible low surface brightness (LSB) structures present in the image.
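To make the pipeline concrete, the following simplified sketch (not pNbody itself) projects star-particle luminosities onto a pixel grid and converts the result to surface brightness; the fixed Gaussian kernel and the assumed solar absolute magnitude M_sun,r = 4.65 stand in for pNbody's adaptive smoothing and SED machinery:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    M_SUN_R = 4.65  # assumed solar absolute AB magnitude in the r band

    def sb_map(x_kpc, y_kpc, lum_r_lsun, distance_mpc=70.0,
               pix_scale_arcsec=0.263, npix=1000, sigma_pix=2.0):
        """Project star-particle r-band luminosities onto the sky plane
        and return a surface brightness map in mag arcsec^-2."""
        # physical size subtended by one pixel at the assumed distance
        kpc_per_pix = distance_mpc * 1e3 * np.radians(pix_scale_arcsec / 3600.0)
        half = npix * kpc_per_pix / 2.0
        img, _, _ = np.histogram2d(x_kpc, y_kpc, bins=npix,
                                   range=[[-half, half], [-half, half]],
                                   weights=lum_r_lsun)      # L_sun per pixel
        img = gaussian_filter(img, sigma_pix)               # fixed kernel
        sigma_pc2 = img / (kpc_per_pix * 1e3) ** 2          # L_sun / pc^2
        with np.errstate(divide="ignore"):
            return M_SUN_R + 21.572 - 2.5 * np.log10(sigma_pc2)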
We have carried out tests to assess the robustness of the resulting mock-images to variations in the pNbody configuration parameters for the generation of mock images with the DECam instrument.
We have generated surface brightness maps for a halo at a distance of 70 Mpc from the Sun, as a representative distance for the DES galaxy sample that ranges from 40 to 100 Mpc (changing the distance within the DES galaxy sample range does not noticeably influence the detectability of streams, as explained in Section <ref>).
For the expansion (smoothing) step, we distribute the luminosity (flux) of each particle over the image pixels by convolution with a Gaussian kernel. The scale of this kernel, h, which we refer to as the smoothing scale, is set to a parameterized multiple to the root-mean-square average distance to the 16^th-nearest star particle neighbour, h_16. It therefore adapts to the local star particle density. The logic for setting h is broadly similar to that used to determine the kernel scale in a smoothed particle hydrodynamics calculation. However, the smoothing in our case only serves to interpolate between the original particles and does not have any physical significance. It is therefore somewhat arbitrary; our method reflects a balance between smoothing sufficiently to reduce the visual impression of a discrete particle distribution, while preserving the small-scale features.
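A sketch of how such an adaptive scale can be computed with a k-d tree (we use the plain distance to the 16th neighbour here; pNbody's RMS-averaged variant differs in detail):

    import numpy as np
    from scipy.spatial import cKDTree

    def adaptive_smoothing_scale(positions, k=16, factor=0.6):
        """Per-particle kernel scale h = factor * h_k, with h_k the
        distance to the k-th nearest neighbour."""
        tree = cKDTree(positions)
        # query k+1 neighbours: the first hit is the particle itself
        dist, _ = tree.query(positions, k=k + 1)
        return factor * dist[:, -1]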
Figure <ref> shows mock images of the same three galaxies (left to right) with kernels of scale h=0.6 h_16 (top row) and h=0.3 h_16 (bottom row). Over this range (with the DESI pixel scale) the bulk of the particle granularity in the image is removed, but the nature, extent and surface brightness of tidal features relevant to our subsequent visual inspection (see Section <ref>) are not noticeably different.
We use the mock images to assess the detectability of streams over a range of surface brightness limits.
To follow the same process of stellar stream detection by visual inspection as with real images, we have transformed the surface brightness maps into counts images. Then we have added background noise to the images, which depends on the type of analysis to be carried out.
In order to assess the detectability of the streams under different image depths, a simple realization of background noise was added to the images, as flat Gaussian noise with variable amplitude. This gives us the flexibility to emulate different surface brightness limits in an easy way; we choose limits between 25 and 34 mag arcsec^-2 in intervals of 1 mag arcsec^-2.
For this purpose we used the state-of-the-art GNU Astronomy Utilities (Gnuastro)[<http://www.gnu.org/software/gnuastro>] software.
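Under the [3 sigma, 10 x 10 arcsec^2] depth definition adopted later in this paper, the per-pixel Gaussian noise that emulates a target limit follows directly from the definition; a sketch assuming the DECam zero point (22.5 mag) and pixel scale (~0.263 arcsec):

    import numpy as np

    def sigma_pix_for_sb_limit(sb_lim, zeropoint=22.5, pix_scale=0.263,
                               n_sigma=3.0, area_arcsec2=100.0):
        """Per-pixel noise (counts) for a target surface brightness limit,
        inverting mu_lim = ZP - 2.5 log10(n sigma / (p sqrt(A)))."""
        counts_per_arcsec2 = 10 ** ((zeropoint - sb_lim) / 2.5)
        return counts_per_arcsec2 * pix_scale * np.sqrt(area_arcsec2) / n_sigma

    rng = np.random.default_rng(42)
    image = np.zeros((1000, 1000))           # noiseless mock image in counts
    for sb_lim in range(25, 35):             # 25 ... 34 mag arcsec^-2
        noisy = image + rng.normal(0.0, sigma_pix_for_sb_limit(sb_lim),
                                   image.shape)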
To assess the impact of the smoothing length on the detection of a possible stream in the mock images by visual inspection, we have examined count images with different choices of smoothing length, after adding background noise corresponding to a surface brightness limit of 29 mag arcsec^-2. Figure <ref> shows this test for smoothing lengths h=0.6 h_16, h=0.3 h_16 and an alternative smoothing scheme in which h is instead set to the absolute distance to the 5^th nearest neighbour.
Again, regarding stream detection by visual inspection, we find no significant difference between the images.
We therefore adopt a smoothing scale of h=0.6 h_16.
To measure photometric properties of the mock images and compare the results with those of the DES sample, we add a more realistic sky background to the mock images. We first extract the sky background from selected real DES sample images, removing the central galaxy and replacing it by real sky background from an area of the same image without significant point sources.
The image from which we extracted this fiducial background was selected according to the following criteria: i) the central galaxy should be edge-on, in order to minimise its area of influence on the image; ii) the image should have no stream detection, in order not to interfere with the synthetic halo image; and iii) the image should have an r-band surface brightness limit representative of the DES sample.
The selected image had a surface brightness limit of 28.65 mag arcsec^-2.
The fiducial sky background extracted from this image was then superimposed onto each mock image containing the central galaxy and no sky background, creating an image with a synthetic central galaxy and a real DES background.
We use Gnuastro to mask the central galaxy with an ellipsoidal aperture, and place instead an ellipsoidal 'patch' of the same dimensions extracted from a region of the image without bright sources. The size and orientation of the ellipsoidal mask is such that covers the area of influence of the central galaxy in its surroundings, i.e. it reaches up to the point where the surface brightness profile flattens.
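A numpy-only sketch of this masking-and-patching step (the actual analysis used Gnuastro; aperture parameters and stand-in arrays below are purely illustrative):

    import numpy as np

    def elliptical_mask(shape, x0, y0, a, b, theta_deg):
        """Boolean mask of an ellipse with semi-axes a, b (pixels) and
        position angle theta, used to cut out the central galaxy."""
        yy, xx = np.indices(shape)
        t = np.radians(theta_deg)
        xr = (xx - x0) * np.cos(t) + (yy - y0) * np.sin(t)
        yr = -(xx - x0) * np.sin(t) + (yy - y0) * np.cos(t)
        return (xr / a) ** 2 + (yr / b) ** 2 <= 1.0

    rng = np.random.default_rng(1)
    des_image = rng.normal(0.0, 2.2e-3, (2000, 2000))   # stand-in DES frame
    sky_patch = rng.normal(0.0, 2.2e-3, (2000, 2000))   # source-free sky region
    mock_counts = np.zeros((2000, 2000))                # synthetic galaxy, counts

    mask = elliptical_mask(des_image.shape, 1000, 1000, 260, 140, 35.0)
    background = des_image.copy()
    background[mask] = sky_patch[mask]   # replace central galaxy by empty sky
    hybrid = mock_counts + background    # synthetic galaxy on a real background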
Figure <ref> shows an example of a COCO galaxy to which we have added the sky background of a real DES image with a surface brightness limit of 28.65 mag arcsec^-2. We compare this to the same mock image combined with a simple Gaussian noise background equivalent to the same surface brightness limit.
A stream with an umbrella/shell morphology can be clearly appreciated in the image with the real DES background, and its appearance is consistent with that in the image with an artificial background. Adding a synthetic background to the mock image does not seem to significantly impact the detection of streams by visual inspection, with respect to the real background, in this surface brightness limit regime. This method is very efficient for emulating different levels of surface brightness limit, and we adopt it in this work to assess detectability at levels currently not achievable in available surveys. However, we acknowledge that for much fainter surface brightness limits this method provides only an approximation, as the confusion of sources and possibly cirri will become much more significant and the difference between the synthetic and the real background will increase.
§ STREAM DETECTION
In this section, the detectability of tidal streams under different image depths is analysed on the basis of mock-images. At the present time, there is no automatic method for detecting tidal streams, though this is an important subject of current research.
Therefore detectability is based here on the method combining visual inspection and image analysis tools, as presented in Section <ref>. This is the method applied to the detection of streams in real images, as presented in <cit.> and that has been the basis for generating the stream catalogue presented there.
We made all the photometry measurements by applying Gnuastro's MakeCatalog subroutine <cit.> on the sky-subtracted images generated by Gnuastro's NoiseChisel <cit.>. The method is identical to the one applied to the analysis of the DES sample images and explained in detail in <cit.>.
The analysis has been done on the basis of the r-band images. This band has been chosen as it has been used in other relevant observational studies on tidal features <cit.>, thus allowing for comparison of results across different studies. In the mock-images, the SDSS r-filter produces brighter measurements of the stream than the SDSS g-filter, in agreement with the observations. Depending on the region in the image, this difference can be up to ∼ 0.6 mag arcsec^-2.
In order to assess the impact of the host distance on the detectability of streams, mock images have been produced at different distances (50, 70 and 90 Mpc), covering the distance range of the observed hosts with their streams, which ranges from 40 to 100 Mpc. Although the surface brightness is independent of the distance, the distance has an impact on the S/N at pixel level that may in turn impact the observability. However, as expected, this small relative difference in the distance does not show a noticeable difference in the visual perception of the images.
§.§ Stellar Streams in the COCO Simulation
For the comparison between the DES sample and the COCO simulations, in the first step we selected a sample of simulated halos containing central galaxies with stellar mass in a range around the DES sample average ± 1σ, that is, between 4.17 × 10^9 M_⊙ (log_10 M_⋆/M_⊙ = 9.62) and 2.4 × 10^10 M_⊙ (log_10 M_⋆/M_⊙ = 10.38).
In a second step, we selected a sample of simulated halos containing central galaxies with stellar mass between the DES host sample average value +1σ and +2σ (log_10 M_⋆/M_⊙ = 10.76).
For the COCO simulations, 108 halos have been selected with central galaxy stellar masses overlapping with 70% of the DES sample with streams.
A comparison of the host stellar mass distribution of the DES sample with the COCO simulation sample, within the selected stellar mass range, is shown in Figure <ref>.
We have generated mock images with surface brightness limits between 25 and 34 mag arcsec^-2 in intervals of 1 mag arcsec^-2. As the depth of the image increases, that is, the fainter the surface brightness limit is, the clearer the streams appear to the observer, as can be seen in Figure <ref>.
Then the resulting ∼ 1000 images were visually inspected and, for each one, the surface brightness limit at which a stream could be visually detected for the first time was identified and noted. The inspection was carried out on the image FITS-format files, displayed with the SAO DS9 tool. As for the inspection of the observational images, suitable colour, scale and analysis block level options were selected to improve the perception.
The DES observational images have a 'real' sky background, while for the mock-images we have a sky background that, while corresponding to a similar surface brightness limit, is much smoother, allowing for an easier detection of the streams. This has been taken into account in assessing the detectability of streams by reporting one level deeper when the stream was not clearly distinguishable by visual inspection.
The result is a curve indicating the percentage of streams detected within the COCO halo sample as a function of the image depth, that is, as a function of the image surface brightness limit. The result is depicted in Figure <ref>. The curve follows an approximately linear trend between surface brightness limits of 26 and 34 mag arcsec^-2, with a 12.5% increase in streams detected per 1 mag arcsec^-2 increase in surface brightness limit. In 97% of the mock-images, we detect by visual inspection what we consider to be (part of) a stream up to a SB-limit of 34 mag arcsec^-2.
However, we estimate that only 89% of such faint structures could actually be measured with a reasonable level of error in real images.
Regarding the stream morphology, in the COCO sample, at SB-limit 28-29 mag arcsec^-2, around 70-80% of the detected streams are shells (a segment of the wider morphology class known as umbrella; see the stream morphology classification in <cit.>), while at SB-limit 34 mag arcsec^-2 around 80-90% of the detected streams are shells, with only 2-3% of the streams displaying a circular morphology.
We have investigated a possible correlation between the surface brightness limit at which streams are detected and the host galaxy stellar mass. In particular, we have analysed whether more massive host galaxies show streams at a lower (brighter) surface brightness limit. Figure <ref> shows the stellar mass distribution for the galaxies identified as hosting streams versus the surface brightness limit at which those streams have been detected.
No correlation is evident.
§.§ Stellar Streams in the IllustrisTNG50 Simulation
We have generated surface brightness maps from the IllustrisTNG50 simulation for 60 halos.
40 of these have central galaxies in the stellar mass range between 3.02 × 10^10 M_⊙ (log_10 M_⋆/M_⊙ = 10.48) and 5.73 × 10^10 M_⊙ (log_10 M_⋆/M_⊙ = 10.76), corresponding to the range of stellar mass between the average value of the DES sample +1σ and +2σ.
In order to compare with Milky Way-like galaxies (such as those in the Auriga simulations), we have selected 20 additional halos in an extension of the stellar mass range to 8.0 × 10^10 M_⊙ (log_10 M_⋆ / M_⊙ = 10.9); see Section <ref> for details.
We have transformed the surface brightness maps into images with counts and added sky background corresponding to 10 levels of surface brightness limit between 25 and 34 mag arcsec^-2 (see Section <ref>). As we increase the depth of the image (apply a lower background noise level to the image, corresponding to a fainter surface brightness limit), streams become more visible and can be detected by visual inspection, as can be seen in Figure <ref>.
We have visually inspected the resulting ∼ 600 images and, for each halo / host galaxy, the surface brightness limit at which a stream could first be visually appreciated was identified and noted.
As with our analysis of COCO, FITS images were inspected with DS9.
Figure <ref> shows the resulting curve indicating the percentage of streams detected within the IllustrisTNG50 halo sample as a function of the image surface brightness limit.
The curve shows a steep gradient between SB-limit = 30 and SB-limit = 32 mag arcsec^-2, of about 25% increase in streams detected per 1 mag arcsec^-2 increase in surface brightness limit. In ∼ 70% of the mock-images a (part of a) stream can be detected by visual inspection at a SB-limit of 34 mag arcsec^-2.
The detection percentage level resulting from inspection of the IllustrisTNG50 mock-images at a SB-limit of 28.65 mag arcsec^-2 (the SB limit value corresponding to the average SB-limit for the DES sample in the r band) is between 3% and 10% (corresponding to the SB-limit values for 28 and 29 mag arcsec^-2 in the curve, respectively).
To assess the dependency of the detection rate for a certain SB-limit on the host stellar mass, we have compared the detectability curve obtained for 30 halos in the stellar mass range between log_10 M_⋆/M_⊙ = 10.48 and log_10 M_⋆/M_⊙ = 10.76 with the one obtained for 60 halos in an extended stellar mass range up to log_10 M_⋆/M_⊙ = 10.9. As can be seen in Figure <ref>, both curves are very similar, the one for the extended mass range appearing smoother, due to the increased population of galaxies included. This seems to reinforce the speculation made in Section <ref> using COCO simulations that the stellar mass of the host galaxy does not noticeably influence the brightness of the streams around it.
We have taken into account the fact that the smooth background added to the image makes streams easier to detect, by reporting one level fainter of the surface brightness limit when the stream cannot be clearly distinguished by visual inspection at a certain level.
The morphology analysis shows 20-43% shells (part of the cosmological class umbrella/shell) and 10-16% circular shapes, some showing a clear loop around the host.
§.§ Stellar Streams in the Auriga Simulation
We have available 30 Auriga zoom simulations of Milky Way-mass halos at z < 0.02. Since this sample may not be statistically representative, and in order to compare with the COCO and IllustrisTNG50 simulations, three surface brightness maps have been generated from each halo, taking the three orthogonal axes of the simulation coordinate system as lines of sight. None of the axes are aligned with face-on or edge-on directions, so that overall the orientation of the galaxies is random. This process results in 90 images, each of which we combine with a range of sky backgrounds between the SB limit of 25 and 34 mag arcsec^-2, at intervals of 1 mag arcsec^-2, thus yielding 900 mock images in total. Examples of halos with streams of different morphology can be seen in Figure <ref>.
Figure <ref> shows the detection rate curve obtained for Auriga from visual inspection of the mock images, together with those obtained for COCO and IllustrisTNG50, and the observational results of the DES sample. The figure shows the percentage of the sample for which at least one stream is detected for each of the 10 surface brightness limit levels analysed. The curve indicates detections of streams with a reasonable level of confidence. When we detect a LSB feature but have doubts about whether it constitutes a stream, we take the conservative approach of not counting it towards the results included in the figure.
Two of the halos (Au20 and Au30) seem to show an ongoing major merger between two galaxies of similar apparent size, clearly visible at a surface brightness limit of 27 mag arcsec^-2.
These two halos have been discarded, because we are looking for remnants of minor mergers only. LSB structures that could appear similar to streams at fainter surface brightness limits are not accounted for in the detectability curve.
The curve shows that there are no stream detections at a SB limit of 26 mag arcsec^-2 or brighter. Between 26 and 29 mag arcsec^-2 there is an increase in detections with a gradient of around 4% detection rate per mag arcsec^-2 in the SB limit, reaching ∼ 12% detection rate at 29 mag arcsec^-2. Between 29 and 31 mag arcsec^-2 the gradient is about 4 times steeper, reaching ∼ 34% detection rate at 31 mag arcsec^-2. The gradient increases again to ∼ 25% per mag arcsec^-2 increase in the SB limit between 31 and 32 mag arcsec^-2, reaching ∼ 65% detection rate at this SB limit. Beyond this value, the gradient flattens.
We detect streams by visual inspection with a reasonable level of confidence in 95% of the mock images up to a SB-limit of 34 mag arcsec^-2.
The percentage of streams detected in the Auriga sample at a SB-limit corresponding to the average SB-limit for the DES sample in the r band (28.65 mag arcsec^-2) is between 8% and 13% (corresponding to the SB-limit values for 28 and 29 mag arcsec^-2 in the curve, respectively).
The morphology analysis yields 41-62% shells and 11-24% circular shapes. However, the observed stream morphology is strongly dependent on the line-of-sight from which the streams is observed (Youdong et al. in prep.). As an example, Figure <ref> shows halos Au14 and Au29 seen from two different lines of sight. As can be appreciated in the images, looking at the same halo from these two different perspectives would suggest a different morphological classification.
§ STREAM PHOTOMETRY
The results are presented in this Section and compared with the results obtained from the photometric characterisation and analysis of the streams detected in the DES observations sample (see <cit.>). This comparison between the observations and simulations includes also the results of the statistical analysis.
The mock images generated from cosmological simulations though the process described in Section <ref> have been subject to photometric characterisation and analysis. In order to be able to measure photometry parameters on these mock images, and compare with observations, the corresponding surface brightness maps have been generated for the SDSS r, SDSS g and SDSS z filters.
The line of sight along the z coordinate direction has been chosen; as the halos are not aligned with the volume coordinates, this line of sight represents a random orientation of the galaxy and therefore does not introduce an orientation bias of the system. For the photometry analysis, the same smoothing method and length have been selected as for the stream detectability analysis in Section <ref>. We have generated mock images from the surface brightness maps by transforming surface brightness readings to count readings, applying a zero point ZP = 22.5 mag as per DECam images.
The comparison between the mock-images photometry and the observations from the DES galaxy sample is carried out on the basis of the average stream surface brightness, average stream (g-r)_0 colour as well as the average distance of stream to the host centre. The resulting histograms are shown in Figures <ref>, <ref> and <ref>.
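A sketch of these measurements on sky-subtracted g- and r-band count images, given a boolean stream footprint (the Galactic extinction correction applied for (g-r)_0 in the real analysis is omitted here):

    import numpy as np

    def stream_photometry(counts_g, counts_r, mask, zeropoint=22.5,
                          pix_scale=0.263):
        """Mean r-band surface brightness (mag arcsec^-2) and g-r colour
        over the stream footprint `mask` (sky-subtracted count images)."""
        area = mask.sum() * pix_scale ** 2            # stream area, arcsec^2
        f_g, f_r = counts_g[mask].sum(), counts_r[mask].sum()
        mu_r = zeropoint - 2.5 * np.log10(f_r / area)
        g_minus_r = -2.5 * np.log10(f_g / f_r)        # ZPs cancel if equal
        return mu_r, g_minus_r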
to be completed for TNG50 and Auriga
§ DISCUSSION
We now compare the predictions of the cosmological simulations with one another and with the observational data regarding the frequency, detectability and morphology of streams.
Looking at Figure <ref>, the cosmological simulations all seem to predict that, for a surface brightness limit of 32 mag arcsec^-2, almost 70% of galaxies in the mass range we study have one or more detectable streams. However, the simulations show discrepancies with one another regarding detection rates for surface brightness limits brighter than 32 mag arcsec^-2. In particular, the two hydrodynamical simulations, IllustrisTNG50 and Auriga, agree well with each other but predict a lower detectability rate than COCO. For limits fainter than 32 mag arcsec^-2, however, IllustrisTNG50 has a lower detection rate than COCO and Auriga (which agree well with each other in this regime).
The simulations also show discrepancies with one another regarding the morphology of the streams detected, as will be discussed further down in this section.
The small differences in the stellar mass ranges of the samples of galaxies drawn from the three simulations (see Figure <ref>) do not seem to play an important role in stream detectability for the range of stellar masses analysed in this work, as discussed in Section <ref>. We therefore speculate that these differences can be attributed mainly to the treatment of baryon physics in the simulations (all the simulations use the same N-body treatment of gravitational dynamics and very similar cosmological parameters). Pinpointing specific explanations for these differences will require an in-depth analysis, outside the scope of this work.
However, since the most striking difference concerns the apparently greater number of relatively brighter streams detectable in the COCO particle tagging models, compared to the two hydrodynamical simulations, we speculate on why differences in the dynamical treatment of the baryons may give rise to this result.
As discussed in <cit.>, when comparing particle tagging simulations with hydrodynamical methods, even with identical initial conditions, it can be difficult to separate effects due to the dynamical approximation of particle tagging from the effects of different star formation models. In our case, the IllustrisTNG and COCO models have both been calibrated to observations of the galaxy mass and luminosity functions at z=0 <cit.>, as well as other low-redshift data, notably the galaxy size–mass relation. The fundamental relationship between stellar mass and virial mass in both simulations agrees well with (for example) that inferred from galaxy abundance matching. Although the typical star formation histories of host galaxies in our sample, and their stream progenitors, may still differ in detail between the two simulations, we expect that they are broadly similar. This makes it more likely (although by no means certain) that the differences we observe are related to dynamical factors, rather than differences in how stars populate dark matter halos.
Two particularly relevant dynamical factors are neglected (by construction) in particle tagging models. First, the gravitational potentials of stream progenitors can be altered by the inflow and outflow of baryons associated with cooling and feedback; this may make satellites either more or less resilient to tidal stripping, depending on whether baryonic processes produce density cores or density cusps. Second, stars (and gas) could make a significant contribution to the central potential of the host halo, in particular through the formation of a massive baryonic disk.
<cit.> explore the consequences of neglecting these factors when predicting satellite disruption (and hence stream and stellar halo formation) in particle tagging models. In the hydrodynamical model used in that work, massive satellites were found to disrupt somewhat earlier than realizations of the same systems in a particle tagging model[This of course depends on the detail of the hydrodynamical scheme and its subgrid recipes for star formation and feedback; the specific scheme used in IllustrisTNG was not examined by <cit.>. A detailed case study of a single bright stream relevant to this discussion is given in Appendix A of that paper.].
This could explain why more streams are visible at surface brightness limits of ≲ 28 mag arcsec^-2 in COCO compared to IllustrisTNG; if satellites are disrupted earlier in IllustrisTNG, their streams may have more time to phase mix, lowering their surface brightness.
A similar argument could also explain the greater abundance of shells detected in COCO. Shells originate from satellites on radial orbits <cit.>.
The relative fraction of radial and circular orbits may differ between IllustrisTNG and COCO, because the evolution of the progenitor's orbit during pericentric passages (due to exchange of angular momentum with the host) may be significantly different with and without a massive baryonic disk.
A related effect is explored by <cit.>, who study the relation between the rotation of the hosts and the presence and morphology of tidal features. One of the conclusions of that work is that shells appear more frequently in slowly rotating hosts than in fast rotating hosts. This hypothesis could be tested in future work. More generally, further exploration of these differences between particle tagging and hydrodynamical simulations could help to understand how observations of streams can constrain the nature of the gravitational potentials of stream progenitors and their host galaxies.
Turning now to the comparison between the simulations and the observational data, 8.7 ± 1.1% of galaxies in the DES sample have detectable streams at an average r-band surface brightness limit of 28.65 mag arcsec^-2.
This seems to match very well with the predictions of IllustrisTNG50 and Auriga,
and is lower than the prediction from COCO.
Stream morphology is not an observable that can be used reliably to constrain simulations, because it is strongly dependent on the line of sight along which the stream is observed. Nevertheless, comparison of the fraction of streams with different morphologies is interesting because it could be related to the same dynamical differences between simulation methods that give rise to differences in stream abundance.
20-43% of streams in the IllustrisTNG50 sample are shells, which is within the range found in the DES sample, and 10-16% have circular morphology, which is also not too far from the observations.
As noted above, there is a significant discrepancy in the relative fractions of different stream morphologies
between observations and the COCO simulations. In the COCO mock images, 70-90% of streams are identified as having Umbrella/Shell morphology, while in the DES sample,
only 27-38% of streams are identified as such. The COCO mock images show only 2-3 % of streams with circular morphologies, compared to 21-35 % in the DES images.
Auriga predicts 41-62% of streams with Umbrella/Shell
morphologies, significantly above the fraction in the DES sample, and 10-24% of stream with circular morphology, somewhat below the DES figures.
Table <ref> summarises the stream morphology findings for the different samples derived from the total number of streams detected. Note that more than one stream is detected in some of the halos.
It is beyond the scope of this work to analyse the reasons for these discrepancies in
stream morphology; as noted above, further work would be a valuable contribution to understanding the relationship between the observable properties of streams and the galaxy formation process as a whole.
Regarding the stream photometry measurements, the average surface brightness range of the streams in the COCO simulations matches the observations in the DES stream sample generally well, though the distribution is skewed towards the brighter end of the range (Figure <ref>). This is consistent with the abundance of shell-shaped streams, which are typically brighter than other stream morphologies, as confirmed by the observations.
The stream (g-r)_0 colour distribution of the COCO simulations also matches the observations generally well. The mean value of the COCO simulation distribution is 0.54±0.12 mag, versus 0.57±0.14 mag for the DES observations sample. However, the simulations are slightly skewed towards the blue end, as can be seen in Figure <ref>.
Consistent with the overabundance of shell stream morphology in the COCO simulations, the average distance of the stream to the host galaxy matches the observations, if the comparison is done with the shell-shaped streams in the DES observation sample (see Figure <ref>).
A corresponding photometry analysis for the TNG50 and Auriga samples remains to be completed and is deferred to future work.
§.§ Comparison with Previous Work
We present the comparison with other observations of streams as far as reported in the literature:
* <cit.> reports a frequency of 6-19% at a SB-limit of ∼ 27 mag arcsec^-2. We consider the lowest value of 6% as the most reliable for maximising purity in the detection of streams (see <ref>).
* <cit.> reports an LSB feature (a superset of tidal streams) detection frequency of 12-18% at an average SB-limit of ∼ 27.7 mag arcsec^-2 for the g-band. Note that the g-band in the DES sample is fainter on average than the r-band by ∼ 0.5 mag arcsec^-2. It is also to be noted that the calculation of the SB limit in that paper is based on 1σ of the noise variation in apertures of 1.2 arcsec^2 placed on empty regions of the images and therefore differs from the one applied in this work [3σ, 100 arcsec^2] that follows the standard proposed in <cit.> (see the sketch after this list).
* <cit.> reports a frequency of ∼ 10% at a SB limit of 28 mag arcsec^-2.
* <cit.> reports 15% of tidal features, including 5±2% for streams and 5±2% shells (which we consider together in our curve) at a SB limit of 28.5 - 29 mag arcsec^-2 for the g-band. Note that the g-band in the DES sample is fainter on average than the r-band by ∼ 0.5 mag arcsec^-2.
* <cit.> have characterised the morphology of more than 350 low surface brightness structures up to a distance of 42 Mpc through annotation of images from the Canada–France Imaging Survey (CFIS2) and the Mass Assembly of early-Type gaLAxies with their fine Structures survey (MATLAS3). They obtained 84 annotations for streams and 260 for shells, but out of these figures alone it is not possible to derive detectability figures for the brightest streams and shells, as there could be several of them in one image.
* <cit.> reports a frequency of 12.2 ± 2.4% for Milky Way-like galaxies at a SB limit of 28.40 mag arcsec^-2.
* <cit.> use deep imaging from the Subaru-Hyper Suprime Cam Wide data to search for tidal features in massive [log_10 M_⋆ / M_⊙ > 10] early-type galaxies (ETGs) in the SAMI Galaxy Survey. They report a tidal feature detection rate of 31±2% at a surface brightness limit of 27±0.5 mag arcsec^-2 for the r-band. They calculate the surface brightness limit following the same approach as <cit.>, thus different from our method [3σ, 100 arcsec^2]. For comparison, our method applied to a test Subaru HSC image yields a surface brightness limit of 29.79 mag arcsec^-2.
* In <cit.> the results of visual inspection of a sample of 838 edge-on galaxies using images from three surveys: SDSS Strip-82, Subaru HSC and DESI (DECals, MzLS,BASS) are presented. In total, 49 tidal features out of 838 images are reported, equivalent to a frequency of 5.8% at a SB limit of 28.60 mag arcsec^-2. In that study, tidal features include also disc deformations and tidal tails, typical of major mergers.
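Several of the surface brightness limits quoted above follow the [3σ, 100 arcsec^2] convention of <cit.>. As referenced in the list above, here is a minimal sketch of that calculation (our own transcription; the input values and the assumption of a counts image with a known photometric zeropoint are ours):

import numpy as np

def sb_limit(sigma_pix, zeropoint, pixscale, nsigma=3.0, box_arcsec=10.0):
    # Surface brightness limit (mag arcsec^-2) for nsigma fluctuations averaged
    # over a box of box_arcsec x box_arcsec (100 arcsec^2 by default).
    # Averaging over N = (box_arcsec/pixscale)^2 pixels lowers the per-pixel
    # noise by sqrt(N), leaving nsigma*sigma_pix/(pixscale*box_arcsec) counts/arcsec^2.
    return -2.5 * np.log10(nsigma * sigma_pix / (pixscale * box_arcsec)) + zeropoint

# e.g. a DECam-like image: per-pixel RMS of 3 counts, ZP = 30 mag, 0.262"/pix
print(sb_limit(3.0, 30.0, 0.262))   # ~28.7 mag arcsec^-2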
Also a number of papers in the literature report the frequency of tidal features (including streams and shells) in cosmological simulations:
* <cit.> reports on detection of tidal features inspecting mock-images produced using the NEWHORIZON cosmological simulations. Through production of surface brightness maps at different surface brightness limits, they predict the fraction of tidal features that can be expected to be detected at different limiting surface brightnesses.
In this study, tidal features comprise: (i) Stellar streams, (ii) Tidal tails, (iii)
Asymmetric stellar halos, (iv) Shells (v) Tidal bridges, (vi) Merger remnants (vii)
Double nuclei, of which only (i) and (iv) are clearly of accreted nature.
For a surface brightness limit of 35 mag arcsec^-2, expert classifiers were able to identify specific tidal features in close to 100 per cent of galaxies (M_⋆ > 10^9.5 M_⊙), in agreement with our results.
For the range of stellar mass of the DES sample, log_10 M_⋆ = 10, they report ∼ 25% detection rate for streams and shells together at a surface brightness limit of 30 mag arcsec^-2 in the r-band. The detection rate becomes 35% for a surface brightness limit of 31 mag arcsec^-2. These figures match very well our predictions with Auriga and IllustrisTNG50.
* In <cit.>, the results of identification and classification of tidal features in LSST-like mock-images from cosmological simulations are reported. Four sets of hydrodynamical cosmological simulations are used (NewHorizons, EAGLE, IllustrisTNG and Magneticum). The frequency of tidal features, expressed in fractions of the total number of images, is between 0.32 and 0.40, showing consistency across the different simulations. Tidal features comprise streams/tails, shells, plumes or asymmetric stellar halos and double nuclei. Looking only at streams and shells, the percentage of detections varies between 5-15% depending on the level of confidence, for a SB limit of 30.3 mag arcsec^-2 in the r-band.
* In <cit.>, the authors report the results of inspecting surface brightness maps generated from 30 Auriga cosmological simulations <cit.> of Milky Way-like galaxies looking for the brightest streams. These simulations are the same as those applied in this work. They report that no streams have been detected in images with a surface brightness limit brighter than 25 mag arcsec^-2 and that the stream detection frequency increases significantly between 28 and 29 mag arcsec^-2. For a surface brightness limit of 28.50 mag arcsec^-2 in the r-band they achieve a detection rate of ∼ 18 - 30%. This is far higher than our results show with the same Auriga simulations and we believe the reason is the different pixel size of the mock-images. While we use the pixel size of the DECam instrument, 0.262 arcsec, equivalent to 0.08 kpc at 70 Mpc distance, in <cit.> they use a pixel size of 1 kpc. This is relevant for the visual inspection of count images, as it has an impact on the S/N per pixel and could account for a significant difference in the surface brightness limit at which the streams are detected (see the sketch after this list).
* <cit.> use the Magneticum Box4 hydrodynamical cosmological simulations to detect tidal features (streams, shells and tidal tails) and connect their morphology to the internal kinematics of their host galaxies. In Table 1 they present the fraction of galaxies with the different types of tidal features for host galaxies with M_⋆≥ 10^11 M_⊙. Looking at their Figure 3, the fraction of shells and streams together is ∼ 10% for galaxies with 10^10 M_⊙ < M_⋆ < 10^11 M_⊙, comparable with the stellar mass range of our simulations, at a surface brightness limit of 28.5 - 29 mag arcsec^-2. These results are similar to those obtained in this work.
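As referenced above, the pixel-size comparison with <cit.> can be made concrete with a short back-of-the-envelope script (our own illustration; the background-limited S/N scaling is an assumption we state here, not a result taken from that comparison):

import numpy as np

ARCSEC_PER_RAD = 206265.0

def physical_pixel_kpc(pixscale_arcsec, distance_mpc):
    # Small-angle conversion of an angular pixel size to a physical size.
    return pixscale_arcsec / ARCSEC_PER_RAD * distance_mpc * 1e3   # kpc

print(physical_pixel_kpc(0.262, 70.0))   # ~0.089 kpc, the DECam figure quoted above

# In the background-limited regime, a pixel x times larger collects x^2 more
# flux while its binned noise grows only by x, so the per-pixel S/N rises by x,
# i.e. an effective depth gain of 2.5*log10(x):
x = 1.0 / physical_pixel_kpc(0.262, 70.0)
print(2.5 * np.log10(x))                  # ~2.6 mag arcsec^-2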
§.§ Caveats
The comparison of our stream observability results with those of other surveys and cosmological simulations is not always straightforward, due to: i) the range of halo mass and stellar mass is not always the same, although our analysis of stream observability does not reveal a significant correlation of the stream observation rate with the host stellar mass within the range of stellar masses considered in this work; ii) the method applied for the visual inspection of the images, some of the previous works seem to be based on the inspection of surface brightness maps with cut-offs of the surface brightness limit, while we inspect count images, where the pixel size bears an influence on the S/N per pixel and thereby on the detectability of streams; iii) the different ways of calculating the surface brightness limits of the images by the different authors, as pointed out in the previous Section and iv) the different classification schemes used for the LSB features and their meaning, e.g. streams, shells or tidal tails used by the different authors.
The observability result obtained with mock-images can only be seen as an indicative reference for predicting the stream frequency to be met by present and future surveys for a number of reasons. First, this result has been obtained with an idealised flat image background, while in real-life observations the background is often not sufficiently flat, preventing the detection to reach the theoretical surface brightness limit, as explained in <cit.> regarding DES images. As mentioned earlier, for surface brightness limits much fainter than those of the DES sample, the confusion of sources and possibly cirri will become more significant and the synthetic background will be less representative of the real one, making detection more difficult. Second, this prediction is dependent on the modelling assumptions underlying the cosmological simulations. Nevertheless, the fact that hydrodynamic simulations provide very similar results with one another and match also the results from observations of the DES sample, as well as previous surveys and simulation analysis, provides confidence in the predictions.
Regarding the method of visual inspection of images, it is clear that the human factor plays a role in the results of the detection. However, the confidence in the method can be increased by having a team with experience in searching specifically for streams in real observations, and a systematic and rigorous method to proceed. While different scientists may come to different conclusions in specific cases, overall, in a large survey, these differences may not have a significant influence on the global results. Furthermore, the inspection of mock-images with a flat background (as seen in Figures) does not leave a large margin of interpretation regarding the presence of tidal features. It is in the classification of these features where the divergences are more likely to appear. In any case, as discussed above, the morphology is a weak observable because its perception is dependent on the line of sight. In the absence of a mature automatic detection method for extragalactic streams, visual inspection remains the state-of-the-art used by all the works in this domain reported in the literature.
§ CONCLUSIONS AND OUTLOOK
From the results obtained by comparing the stream frequency and characteristics of the DES galaxy sample observations with cosmological simulations, we can conclude that overall the methods applied here work well and provide a valid reference for the analysis of stellar streams.
Generally, the predictions of the simulations are in agreement with the results of the analysis carried out on the DES sample <cit.>, which followed the same approach to visual inspection as the present work, and with previous work reported in the literature, as presented in Section <ref>.
This provides a degree of confidence in the simulation predictions regarding detection of streams in future surveys at surface brightness limits for which we do not have observations today. The cosmological simulations we have analysed here predict that, in the absence of a confusion limit due to background and foreground sources, and with a pixel size similar to the one of the DECam instrument, a frequency of almost 70% in the detection of streams around galaxies can be achieved for a surface brightness limit of 32 mag arcsec^-2. This prediction can be extrapolated to other observations taking into account the effect of the pixel size on the S/N.
Nevertheless, there are some noticeable differences in the stream morphologies between observations and simulations, and between the simulations themselves, that should be further analysed in order to understand their origin and, where needed, to improve the simulation resolution and physics modelling required for stream analysis.
We present a method for comparison of stream observations with cosmological simulations based on novel tools for generation of mock images.
When inspecting the mock-images, we follow exactly the same approach and apply the same criteria, with the same team, as we did for inspecting the DES observational image sample, and reported in <cit.>.
In this work, we only compare with the output of the simulations at z=0. Exploring the evolution of streams from high redshift to the current time in the simulations could provide more insight into the history, mass ratio and kinematics of the preceding mergers, allowing us to compare the simulated reality with the conclusions of the visual inspection.
This may also help to understand the differences in stream morphology that have been highlighted in this work.
Overall, the results of our work indicate that surveys reaching a surface brightness limit fainter than 31 mag arcsec^-2 would be able to reach a stellar tidal stream detection rate of at least 50%, and thereby test the predictions of the ΛCDM model as implemented by state-of-the-art cosmological simulations.
JMC wants to thank the Leiden Observatory for hosting and providing computer infrastructure and facilities for carrying out part of this work, as well as the Universidad Complutense de Madrid for providing computer infrastructure used in this work.
JMC wants to thank Yves Revaz for support in the use of the pNbody tool.
DMD acknowledges the grant CNS2022-136017 funded by MICIU/AEI/10.13039/501100011033 and the European Union NextGenerationEU/PRTR, and financial support from the Severo Ochoa Grant CEX2021-001131-S funded by MCIN/AEI/10.13039/501100011033 and project (PDI2020-114581GB-C21/AEI/10.13039/501100011033).
JMC and MAGF acknowledge financial support from the Spanish Ministry of Science and Innovation through the project PID2022-138896NB-C55
APC acknowledges support from a Taiwan Ministry of Education Yushan Fellowship and Taiwan National Science and Technology Council grants 112-2112-M-007-017 and 113-2112-M-007-009.
SRF acknowledge support from the Knut and Alice Wallenberg Foundation, the Swedish Research Council (grant 2019-04659), and the Swedish National Space Agency (SNSA Dnr 2023-00164).
SRF also acknowledges financial support from the Spanish Ministry of Science and Innovation through the project PID2021-123417ob-i00.
For this work we have used GNU Astronomy Utilities (Gnuastro, ascl.net/1801.009) versions 0.17, 0.18 and 0.20. Work on Gnuastro has been funded by the Japanese MEXT scholarship and its Grant-in-Aid for Scientific Research (21244012, 24253003), the European Research Council (ERC) advanced grant 339659-MUSICOS, and from the Spanish Ministry of Economy and Competitiveness (MINECO) under grant number AYA2016-76219-P.
M.A acknowledges the financial support from the Spanish Ministry of Science and Innovation and the European Union - NextGenerationEU through the Recovery and Resilience Facility project ICTS-MRR-2021-03-CEFCA and the grant PID2021-124918NA-C43.
This work used high-performance computing facilities operated by the
Center for Informatics and Computation in Astronomy (CICA) at National
Tsing Hua University. This equipment was funded by the Ministry of
Education of Taiwan, the Ministry of Science and Technology of Taiwan,
and National Tsing Hua University.
This work is supported by the National Science Center, Poland under Agreement No. 2020/39/B/ST9/03494.
SB is supported by the UK Research and Innovation (UKRI) Future Leaders Fellowship (grant number MR/V023381/1).
[Akhlaghi and Ichikawa2015]akhlaghi2015 Akhlaghi M., Ichikawa T., 2015, ApJS, 220, 1.
[Akhlaghi2019a]akhlaghi2019a Akhlaghi M., 2019, ASPC, 521, 299A.
[Akhlaghi2019b]akhlaghi2019b Akhlaghi, M. 2019, arXiv:1909.11230. doi:10.48550/arXiv.1909.11230
[Atkinson et al.(2013)]atkinson2013 Atkinson, A. M., Abraham, R. G., & Ferguson, A. M. N. 2013, , 765, 28. doi:10.1088/0004-637X/765/1/28
[Belokurov et al.(2006)]belokurov2006 Belokurov, V., Zucker, D. B., Evans, N. W., et al. 2006, , 642, L137
[Bílek et al.(2020)]bilek2020 Bílek, M., Duc, P.-A., Cuillandre, J.-C., et al. 2020, , 498, 2138. doi:10.1093/mnras/staa2248
[Cooper et al.(2010)]cooper2010 Cooper, A. P., Cole, S., Frenk, C. S., et al. 2010, , 406, 744. doi:10.1111/j.1365-2966.2010.16740.x
[Cooper et al.(2017)]cooper2017 Cooper, A. P., Cole, S., Frenk, C. S., et al. 2017, , 469, 1691. doi:10.1093/mnras/stx955
[Dey et al.(2019)]dey2019 Dey, A., Schlegel, D. J., Lang, D. et al. 2019, , 157, 168. doi:10.3847/1538-3881/ab089d
[Duc et al.(2015)]duc2015 Duc, P.-A., Cuillandre, J.-C., Karabal, E., et al. 2015, , 446, 120. doi:10.1093/mnras/stu2019
[Ferguson et al.(2022)]ferguson2022 Ferguson, P. S., Shipp, N., Drlica-Wagner, A., et al. 2022, , 163, 18. doi:10.3847/1538-3881/ac3492
[Giri et al.(2023)]giri2023 Giri, G., Barway, S., & Raychaudhury, S. 2023, , 520, 5870. doi:10.1093/mnras/stad474
[Grand et al.(2017)]grand2017 Grand, R. J. J., Gómez, F. A., Marinacci, F., et al. 2017, , 467, 179. doi:10.1093/mnras/stx071
[Grand et al.(2024)]grand2024 Grand, R. J. J., Fragkoudi, F., Gómez, F. A., et al. 2024, , 532, 1814. doi:10.1093/mnras/stae1598
[Guo & White(2008)]guo2008 Guo, Q. & White, S. D. M. 2008, , 384, 2. doi:10.1111/j.1365-2966.2007.12619.x
[Hellwing et al.(2016)]hellwing2016 Hellwing, W. A., Frenk, C. S., Cautun, M., et al. 2016, , 457, 3492. doi:10.1093/mnras/stw214
[Hood et al.(2018)]hood2018 Hood, C. E., Kannappan, S. J., Stark, D. V., et al. 2018, , 857, 144. doi:10.3847/1538-4357/aab719
[Hunt et al.(2024)]hunt2024 Hunt, L. K., Annibali, F., Cuillandre, J.-C., et al. 2024, arXiv:2405.13499. doi:10.48550/arXiv.2405.13499
[Ibata et al.(2021)]ibata2021a Ibata, R., Malhan, K., Martin, N., et al. 2021, , 914, 123. doi:10.3847/1538-4357/abfcc2
[Jackson et al.(2022)]jackson2022 Jackson, R. A., Kaviraj, S., Martin, G., et al. 2022, , 511, 607. doi:10.1093/mnras/stac058
[Johnston et al.(2008)]johnston2008 Johnston, K. V., Bullock, J. S., Sharma, S., et al. 2008, , 689, 936. doi:10.1086/592228
[Khalid et al.(2024)]khalid2024 Khalid, A., Brough, S., Martin, G., et al. 2024, , 530, 4422. doi:10.1093/mnras/stae1064
[Lacey et al.(2016)]lacey2016 Lacey, C. G., Baugh, C. M., Frenk, C. S., et al. 2016, , 462, 3854. doi:10.1093/mnras/stw1888
[Laureijs & Euclid Collaboration(2018)]euclid2018 Laureijs, R. & Euclid Collaboration 2018, Peering towards Cosmic Dawn, 333, 238. doi:10.1017/S1743921318000595
[Li et al.(2022)]li2022 Li, T. S., Ji, A. P., Pace, A. B., et al. 2022, , 928, 30. doi:10.3847/1538-4357/ac46d3
[Makarov et al.(2014)]makarov2014 Makarov, D., Prugniel, P., Terekhova, N., et al. 2014, , 570, A13. doi:10.1051/0004-6361/201423496
[Mancillas et al.(2019)]mancillas2019 Mancillas, B., Duc, P.-A., Combes, F., et al. 2019, , 632, A122. doi:10.1051/0004-6361/201936320
[Martin et al.(2022)]martin2022 Martin, G., Bazkiaei, A. E., Spavone, M., et al. 2022, , 513, 1459. doi:10.1093/mnras/stac1003
[Martínez-Delgado et al.(2010)]martinez-delgado2010 Martínez-Delgado, D., Gabany, R. J., Crawford, K., et al. 2010, , 140, 962. doi:10.1088/0004-6256/140/4/962
[Martínez-Delgado(2019)]martinez-delgado2019 Martínez-Delgado, D. 2019, Highlights on Spanish Astrophysics X, 146. doi:10.48550/arXiv.1811.12286
[Martínez-Delgado et al.(2023)]martinez-delgado2023 Martínez-Delgado, D., Roca-Fàbrega, S., Miró-Carretero, J., et al. 2023, , 669, A103. doi:10.1051/0004-6361/202244832
[Martínez-Delgado et al.(2023)]martinez-delgado2023b Martínez-Delgado, D., Cooper, A. P., Román, J., et al. 2023, , 671, A141. doi:10.1051/0004-6361/202245011
[Miró-Carretero et al.(2023)]miro-carretero2023 Miró-Carretero, J., Martínez-Delgado, D., Farràs-Aloy, S., et al. 2023, , 669, L13. doi:10.1051/0004-6361/202245003
[Miro-Carretero et al.(2024)]miro-carretero2024 Miro-Carretero, J., Martinez-Delgado, D., Gomez-Flechoso, M. A., et al. 2024, arXiv:2407.20483. doi:10.48550/arXiv.2407.20483
[Miskolczi et al.(2011)]miskolczi2011 Miskolczi, A., Bomans, D. J., & Dettmar, R.-J. 2011, , 536, A66. doi:10.1051/0004-6361/201116716
[Morales et al.(2018)]morales2018 Morales, G., Martínez-Delgado, D., Grebel, E. K., et al. 2018, , 614, A143. doi:10.1051/0004-6361/201732271
[Nelson et al.(2019)]nelson2019 Nelson, D., Pillepich, A., Springel, V., et al. 2019, , 490, 3234. doi:10.1093/mnras/stz2306
[Nelson et al.(2019a)]nelson2019a Nelson, D., Springel, V., Pillepich, A., et al. 2019, Computational Astrophysics and Cosmology, 6, 2. doi:10.1186/s40668-019-0028-x
[Newberg & Carlin(2016)]newberg2016 Newberg, H. J. & Carlin, J. L. 2016, Tidal Streams in the Local Group and Beyond, 420. doi:10.1007/978-3-319-19336-6
[Pillepich et al.(2018)]pillepich2018 Pillepich, A., Springel, V., Nelson, D., et al. 2018, , 473, 4077. doi:10.1093/mnras/stx2656
[Pillepich et al.(2019)]pillepich2019 Pillepich, A., Nelson, D., Springel, V., et al. 2019, , 490, 3196. doi:10.1093/mnras/stz2338
[Planck Collaboration et al.(2014)]planck2014 Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2014, , 571, A16. doi:10.1051/0004-6361/201321591
[Revaz(2013)]revaz2013 Revaz, Y. 2013, Astrophysics Source Code Library. ascl:1302.004
[Román et al.(2020)]roman2020 Román, J., Trujillo, I., & Montes, M. 2020, , 644, A42
[Rutherford et al.(2024)]rutherford2024 Rutherford, T. H., van de Sande, J., Croom, S. M., et al. 2024, , 529, 810. doi:10.1093/mnras/stae398
[Roca-Fàbrega et al.(2024)]roca-fabrega2024 Roca-Fàbrega, S., Kim, J.-H., Primack, J. R., et al. 2024, , 968, 125. doi:10.3847/1538-4357/ad43de
[Schaye et al.(2015)]schaye2015 Schaye, J., Crain, R. A., Bower, R. G., et al. 2015, , 446, 521. doi:10.1093/mnras/stu2058
[Sheth et al.(2010)]sheth2010 Sheth, K., Regan, M., Hinz, J. L., et al. 2010, , 122, 1397. doi:10.1086/657638
[Shipp et al.(2018)]shipp2018 Shipp, N., Drlica-Wagner, A., Balbinot, E., et al. 2018, , 862, 114
[Shipp et al.(2023)]shipp2023 Shipp, N., Panithanpaisal, N., Necib, L., et al. 2023, , 949, 44. doi:10.3847/1538-4357/acc582
[Skryabina et al.(2024)]skryabina2024 Skryabina, M. N., Adams, K. R., & Mosenkov, A. V. 2024, , 532, 883. doi:10.1093/mnras/stae1502
[Sola et al.(2022)]sola2022 Sola, E., Duc, P.-A., Richards, F., et al. 2022, , 662, A124. doi:10.1051/0004-6361/202142675
[Toomre & Toomre(1972)]toomre1972 Toomre, A. & Toomre, J. 1972, , 178, 623. doi:10.1086/151823
[Valenzuela & Remus(2024)]valenzuela2024 Valenzuela, L. M. & Remus, R.-S. 2024, , 686, A182. doi:10.1051/0004-6361/202244758
[Vera-Casanova et al.(2022)]vera-casanova2022 Vera-Casanova, A., Gómez, F. A., Monachesi, A., et al. 2022, , 514, 4898. doi:10.1093/mnras/stac1636
[Vogelsberger et al.(2020)]vogelsberger2020 Vogelsberger, M., Marinacci, F., Torrey, P., et al. 2020, Nature Reviews Physics, 2, 42. doi:10.1038/s42254-019-0127-2
[Weinberger et al.(2017)]weinberger2017 Weinberger, R., Springel, V., Hernquist, L., et al. 2017, , 465, 3291. doi:10.1093/mnras/stw2944
§ MOCK-IMAGES PHOTOMETRY
|
http://arxiv.org/abs/2409.03505v1 | 20240905131908 | Survey of Data-driven Newsvendor: Unified Analysis and Spectrum of Achievable Regrets | [
"Zhuoxin Chen",
"Will Ma"
] | stat.ML | [
"stat.ML",
"cs.LG"
] |
Survey of Data-driven Newsvendor: Unified Analysis and Spectrum of Achievable Regrets
Zhuoxin Chen, Will Ma
=====================================================================================
§ ABSTRACT
In the Newsvendor problem, the goal is to guess the number that will be drawn from some distribution, with asymmetric consequences for guessing too high vs. too low.
In the data-driven version, the distribution is unknown, and one must work with samples from the distribution.
Data-driven Newsvendor has been studied under many variants: additive vs. multiplicative regret, high probability vs. expectation bounds, and different distribution classes.
This paper studies all combinations of these variants, filling in many gaps in the literature and simplifying many proofs.
In particular, we provide a unified analysis based on the notion of clustered distributions, which, in conjunction with our new lower bounds,
shows that the entire spectrum of regrets between 1/√(n) and 1/n is possible.
§ INTRODUCTION
In decision-making under uncertainty, one chooses an action a in the face of an uncertain outcome Z, and the loss incurred ℓ(a,Z) follows a given function ℓ.
In stochastic optimization, the outcome Z is drawn from a known distribution F, and the goal is to minimize the expected loss 𝔼_Z∼ F[ℓ(a,Z)].
We let L(a) denote the expected loss of an action a, and a^* denote an optimal action for which L(a^*)=inf_a L(a).
In data-driven optimization, the distribution F is unknown, and one must instead work with independent and identically distributed (IID) samples drawn from F.
A data-driven algorithm prescribes an action â based on these samples, and one is interested in how its expected loss L(â) compares to the optimal expected loss L(a^*) from stochastic optimization.
This comparison can be made in a multitude of ways, differing along various dimensions.
First, one can measure either the difference L(â)-L(a^*) which is called the additive regret, or the scaled difference (L(â)-L(a^*))/L(a^*) which is called the multiplicative regret.
Second, note that both of these regrets are random variables, because L(â) depends on the IID samples drawn;
therefore, one can analyze either the probability that the regret is below some threshold, or analyze the expected regret.
Finally, different restrictions can be placed on the unknown distribution F.
In this paper we consider the multitude of ways in which L(â) has been compared to L(a^*) in the data-driven Newsvendor problem, starting with the work of <cit.>.
In the Newsvendor problem, action a represents an amount of inventory to stock, and outcome Z represents an uncertain demand to occur.
The loss function is given by
ℓ(a,Z)=c_umax{Z-a,0}+c_omax{a-Z,0},
where c_u,c_o>0 represent the unit costs of understocking, overstocking respectively.
The goal in Newsvendor is to stock inventory close to demand, but err on the side of understocking or overstocking depending on how the costs c_u,c_o compare.
The optimal action when F is known involves defining q=c_u/c_u+c_o, and then setting a^* to be a q'th percentile realization from F, with q being called the critical quantile.
§.§ Existing and New Results
We first define a restriction to be placed on the unknown distribution F, that is
similar to the notion of clustered distributions from <cit.> but used for a completely different problem.
Fix a Newsvendor loss function with critical quantile q∈(0,1).
For constants β∈[0,∞] and γ,ζ>0, a distribution with CDF F is said to be (β,γ,ζ)-clustered if
|a-a^*| ≤1/γ|F(a)-q|^1/β+1 ∀ a∈[a^*-ζ,a^*+ζ].
We make the following remarks on the definition of (β,γ,ζ)-clustered distributions.
* Recall that a^* is an optimal action that is a q'th percentile realization from F. Intuitively, constraint (<ref>) is saying that for an action a far away from the optimal a^*, its quantile F(a) must also be far away from the critical quantile q. This helps data-driven Newsvendor algorithms, whose actions â typically come with a guarantee on how close F(â) is to q, because constraint (<ref>) now implies a second guarantee on how close â is to a^*.
* Because |F(a)-q|≤1, constraint (<ref>) is looser for larger β. It is not restricting F at all when β=∞. On the other extreme, constraint (<ref>) is most restrictive when β=0, but is implied by F having a density that is at least γ over the interval [a^*-ζ,a^*+ζ] <cit.>, and our constraint is less restrictive because it does not impose the distribution to have a lower-bounded density over the entire interval.
* Because F(a)∈[0,1], in order for there to exist any distributions satisfying (<ref>), one must have ζ≤1/γ(min{q,1-q})^1/β+1. Therefore we will generally assume this about the parameters of (β,γ,ζ)-clustered distributions.
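To make the definition concrete, the clustered condition can be checked numerically for a candidate CDF (a minimal sketch we add for illustration; the example CDF below is constructed to satisfy the condition with equality, and all function names are ours):

import numpy as np

def is_clustered(F, a_star, q, beta, gamma, zeta, grid=10_000, tol=1e-9):
    # Check |a - a*| <= (1/gamma)|F(a) - q|^{1/(beta+1)} on [a*-zeta, a*+zeta].
    a = np.linspace(a_star - zeta, a_star + zeta, grid)
    rhs = (1.0 / gamma) * np.abs(F(a) - q) ** (1.0 / (beta + 1))
    return bool(np.all(np.abs(a - a_star) <= rhs + tol))

q, beta, gamma, zeta, a_star = 0.5, 1.0, 1.0, 0.25, 0.5
F = lambda a: np.clip(q + np.sign(a - a_star) * (gamma * np.abs(a - a_star)) ** (beta + 1), 0.0, 1.0)
print(is_clustered(F, a_star, q, beta, gamma, zeta))   # True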
Having defined (β,γ,ζ)-clustered distributions, our main results are summarized in <Ref>.
To elaborate, we consider the standard Sample Average Approximation (SAA) algorithm for Newsvendor, which sets â equal to the q'th percentile of the empirical distribution formed by n IID samples.
We provide upper bounds on its additive and multiplicative regrets, that hold with high probability (i.e., with probability at least 1-δ for some small δ) and in expectation.
The O(·) notation highlights the dependence on n and δ, noting that the parameter β affects the rate of convergence as n→∞, whereas the other parameters q,γ,ζ may only affect the constants in front which are second order and hidden.
We recover convergence rates of n^-1/2 when β=∞ and n^-1 when β=0, which were previously known[These results are sometimes stated in terms of cumulative regret in their respective papers, in which case the n^-1/2 rate translates to ∑_n=1^N n^-1/2=Θ(√(N)) cumulative regret while the n^-1 rate translates to ∑_n=1^N n^-1=Θ(log N) cumulative regret.] in some cases as outlined in <Ref>.
Our results establish these convergence rates in all cases, unifying the literature, and moreover showing that the entire spectrum of rates from 1/√(n) (slowest) to 1/n (fastest) is possible as β ranges from ∞ to 0.
Our general upper bound of n^-β+2/2β+2 was achieved by the SAA algorithm, which did not need to know any of the parameters β,γ,ζ for the clustered distributions.
Meanwhile, our lower bound states that even knowing these parameters, any data-driven algorithm that draws n samples will incur Ω(n^-β+2/2β+2) additive regret with a constant probability. This is then translated into similar lower bounds for multiplicative regret and in expectation.
Technical Highlights.
Our high-probability upper bounds are proven using the fact that F(â) is usually close to q, which follows the proof framework of <cit.>. We extend their analysis to additive regret, and also show how to exploit assumptions about lower-bounded density (i.e., β=0) under this proof framework.
Moreover, we introduce the notion of clustered distributions for data-driven Newsvendor, which connects the two extremes cases of no assumption (β=∞) and lower-bounded density (β=0).
Our expectation upper bounds are proven by analyzing an integral (see (<ref>)) which follows <cit.>, who bounded the expected additive regret for β=0,∞.
We unify their results by considering all β∈[0,∞], and our β=0 result additionally allows for discrete distributions that are (0,γ,ζ)-clustered, instead of imposing that the distribution has a density.
Our proof also uses Chebyshev's inequality to provide tail bounds for extreme quantiles, which in our opinion simplifies the proof from <cit.>.
Finally, we recycle their integral to analyze expected multiplicative regret, which when β=∞ leads to a simplified proof of <cit.> on the exact worst-case expected multiplicative regret of SAA.
Our lower bound is based on a single construction that establishes the tight rate of Θ(n^-β+2/2β+2) for the entire spectrum of β∈[0,∞].
We construct distributions with low Hellinger distance between them <cit.>, which leads to simpler distributions and arguably simpler analysis compared to other lower bounds in the data-driven Newsvendor literature (e.g. <cit.>, <cit.>, <cit.>). We also emphasize that in the special case where β=0, our lower bound of Ω(1/n) requires only two candidate distributions, instead of a continuum of candidate distributions and Bayesian inference à la the van Trees inequality <cit.>. We provide a self-contained construction for β=0 using continuous distributions in <Ref>.
§.§ Further Related Work
Learning theory.
Sample complexity has roots in statistical learning theory, which typically studies classification and regression problems under restricted hypothesis classes <cit.>.
Its concepts can also be extended to general decision problems <cit.>, or even specific inventory policy classes <cit.>.
However, data-driven Newsvendor results differ by considering multiplicative regret, having a specialized but unbounded loss function (there are no assumptions on demand being bounded), and typically requiring analyses that are tighter than uniform convergence.
In data-driven Newsvendor, it is also difficult to directly convert high-probability bounds into expectation bounds via an integral <cit.>, because the regret can be unbounded, while high-probability bounds only hold for small values of ϵ (or equivalently, large values of n).
Our results further differ by considering specific restrictions on the distribution F.
Generalizations of data-driven Newsvendor.
Big-data Newsvendor is a generalization of data-driven Newsvendor where past demand samples are accompanied by contextual information, and the inventory decision can be made knowing the future context.
This model was popularized by <cit.>, and motivated by the notion of contexts from machine learning.
Meanwhile, data-driven inventory is a generalization of data-driven Newsvendor where one is re-stocking a durable good over multiple periods, that was also considered in the original paper by <cit.>.
Further variants of this model include learning censored demands <cit.>, capacitated order sizes <cit.>, lost sales <cit.>, and pricing <cit.>.
Our paper focuses on a single period without contexts, and does not aim to cover these generalizations.
§ PRELIMINARIES
In the Newsvendor problem, we make an ordering decision a, and then a random demand Z is drawn from a distribution with CDF F.
The domain for a, Z, and F is [0,∞).
The loss when we order a and demand realizes to be Z is defined as
ℓ(a,Z)=qmax{Z-a,0}+(1-q)max{a-Z,0},
for some known q∈(0,1), where we have normalized the unit costs of understocking, overstocking to be q,1-q respectively so that the critical quantile (as defined in the Introduction) is exactly q.
The expected loss of a decision a can be expressed as
L(a)=𝔼_Z∼ F[ℓ(a,Z)]=∫_0^a (1-q)F(z) dz + ∫_a^∞ q(1-F(z))dz
following standard derivations based on Riemann-Stieltjes integration by parts.
We assume throughout that distribution F has finite mean; otherwise the expected loss of any decision is infinite.
The objective is to find an ordering decision a that minimizes the loss function L(a).
It is well-known that an ordering decision a is optimal if F(a)=q.
In general there can be multiple optimal solutions, or no decision a for which F(a) equals q exactly.
Regardless, an optimal solution a^*=F^-1(q)=inf{a:F(a)≥ q} can always be defined based on the inverse CDF, which takes the smallest optimal solution if there are multiple.
We note that by right-continuity of the CDF function, we have F(a^*)≥ q, and F(a)<q for all a<a^*.
In the data-driven Newsvendor problem, the distribution F is unknown, and instead must be inferred from n demand samples Z_1,…,Z_n that are drawn IID from F.
A general algorithm for data-driven Newsvendor is a (randomized) mapping from the demand samples drawn to a decision.
We primarily consider the Sample Average Approximation (SAA) algorithm, which constructs the empirical CDF F̂(z)=1/n∑_i=1^n 𝟙(Z_i≤ z) over z≥0 based on the samples, and then makes the decision â=F̂^-1(q)=inf{a:F̂(a)≥ q}.
Similarly, we have F̂(â)≥ q, and F̂(a)<q for all a<â.
We are interested in the difference L(â)-L(a^*), which measures the loss of the SAA decision in excess of that of the optimal decision a^*. From (<ref>), we can see that
L(â)-L(a^*)
=∫_â^a^* (q(1-F(z))-(1-q)F(z)) dz, if â≤ a^*
∫_a^*^â ((1-q)F(z)-q(1-F(z))) dz, if â>a^*
=∫_â^a^* (q-F(z)) dz.
We note L(â)-L(a^*) is a random variable, depending on the random demand samples drawn.
If we want to calculate its expectation, then from the linearity of expectation we can see that
𝔼[L(â)]-L(a^*)
= 𝔼[∫_0^∞((1-q)F(z)𝟙(â > z) + q(1-F(z))𝟙(â≤ z))dz]
-∫_0^a^* (1-q)F(z) dz -∫_a^*^∞ q(1-F(z))dz
= ∫_0^∞ ((F(z) - qF(z))ℙ[â > z] + (q - qF(z))ℙ[â≤ z])dz
-∫_0^a^* (F(z)-q F(z)) dz -∫_a^*^∞ (q-q F(z))dz
= ∫_0^a^* (F(z)ℙ[â > z] + qℙ[â≤ z]-F(z)) dz + ∫_a^*^∞ (F(z)ℙ[â > z] + qℙ[â≤ z]-q)dz
= ∫_0^a^* (q-F(z))ℙ[â≤ z] dz + ∫_a^*^∞ (F(z)-q)ℙ[â > z] dz
= ∫_0^a^*(q-F(z))ℙ[F̂(z)≥ q]dz+∫_a^*^∞(F(z)-q)ℙ[F̂(z)<q]dz.
To explain the final equality that leads to expression (<ref>):
if F̂(z)≥ q, then â=inf{a:F̂(a)≥ q}≤ z from definition; otherwise, if F̂(z)<q, then it is not possible for inf{a:F̂(a)≥ q} to be as small as z because the function F̂ is monotonic and right-continuous.
Hereafter we work only with expressions (<ref>), (<ref>), and (<ref>), omitting the random variable Z and implicitly capturing the dependence on random variables Z_1,…,Z_n through the empirical CDF F̂.
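For concreteness, the SAA decision and its regret can be sketched in a few lines of Python (our own illustration; the exponential demand and the Monte Carlo evaluation of L(·) are assumptions made only for this example):

import numpy as np

rng = np.random.default_rng(0)

def saa_decision(samples, q):
    # a_hat = inf{a : F_hat(a) >= q} is the ceil(n*q)-th smallest sample,
    # since the empirical CDF jumps by 1/n at each order statistic.
    z = np.sort(samples)
    k = int(np.ceil(len(z) * q))
    return z[max(k - 1, 0)]

def newsvendor_loss(a, z, q):
    # Loss with understocking cost q and overstocking cost 1-q (normalized).
    return q * np.maximum(z - a, 0) + (1 - q) * np.maximum(a - z, 0)

q, n = 0.8, 1000
samples = rng.exponential(size=n)                # demand with CDF F(z) = 1 - e^{-z}
a_hat, a_star = saa_decision(samples, q), -np.log(1 - q)   # a* = F^{-1}(q)
test = rng.exponential(size=200_000)             # Monte Carlo estimate of L(.)
print(newsvendor_loss(a_hat, test, q).mean() - newsvendor_loss(a_star, test, q).mean())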
§ HIGH-PROBABILITY UPPER BOUNDS
We first upper-bound the additive regret L(â)-L(a^*) incurred by the SAA algorithm.
When β<∞, the regret upper bound depends on the parameters β,γ from (β,γ,ζ)-clustered distributions, and the value of n at which our bound starts holding also depends on ζ.
When β=∞, parameters γ,ζ are irrelevant but the regret upper bound depends on q, being worse when q is close to 1, and also requires the distribution to have bounded mean, where we normalize this upper bound to 1.
Fix q∈(0,1) and β∈[0,∞],γ∈(0,∞),ζ∈(0,(min{q,1-q})^1/β+1/γ].
If β<∞, then whenever the number of samples satisfies n>log(2/δ)/2(γζ)^2β+2, we have
L(â)-L(a^*)
≤1/γ(log(2/δ)/2n)^β+2/2β+2
=O((log(1/δ)/n)^β+2/2β+2)
with probability at least 1-δ, for any δ∈(0,1) and any (β,γ,ζ)-clustered distribution.
If β=∞, then whenever the number of samples satisfies n≥2log(2/δ)/(1-q)^2, we have
L(â)-L(a^*)
≤2/1-q√(log(2/δ)/2n)
=O((log(1/δ)/n)^1/2)
with probability at least 1-δ, for any δ∈(0,1) and any distribution with mean at most 1.
By the DKW inequality <cit.>, we know that
ℙ[sup_a≥0|F̂(a)-F(a)|≤√(log(2/δ)/2n)]
≥ 1-2exp(-2n(√(log(2/δ)/2n))^2)=1-δ.
Therefore, with probability at least 1-δ, we have
sup_a≥0|F̂(a)-F(a)|≤√(log(2/δ)/2n).
We will show that (<ref>) implies L(â)-L(a^*)≤1/γ(log(2/δ)/2n)^β+2/2β+2 when β∈[0,∞) (Case 1), and (<ref>) implies L(â)-L(a^*)≤2/1-q√(log(2/δ)/2n) when β=∞ (Case 2).
To begin with, we note that if â≤ a^*, then
q-F(â)
=F̂(â)-F̂(â)+q-F(â)
≤sup_a≥0|F̂(a)-F(a)|
where the inequality holds because F̂(â)≥ q (by right-continuity of F̂). Otherwise if â> a^*, then
lim_a→â^- (F(a)-q)
=lim_a→â^- (F(a)-q+F̂(a)-F̂(a))
≤sup_a≥0|F̂(a)-F(a)|
where the inequality holds because F̂(a)<q for all a<â.
Case 1: β∈[0,∞).
From the definition of (β,γ,ζ)-clustered distributions, we have
F(a^*-ζ)≤ q-(γζ)^β+1<q-√(log(2/δ)/2n)
F(a^*+ζ)≥ q+(γζ)^β+1>q+√(log(2/δ)/2n)
where the strict inequalities hold because n>log(2/δ)/2(γζ)^2β+2.
Applying (<ref>), we deduce that F̂(a^*-ζ)<q and F̂(a^*+ζ)>q.
From the definition of â=inf{a:F̂(a)≥ q}, we conclude that â≥ a^*-ζ and â≤ a^*+ζ respectively,
allowing us to apply the definition of (β,γ,ζ)-clustered distributions on â.
When â≤ a^*, we derive from (<ref>) that
L(â)-L(a^*)
≤(a^*-â)(q-F(â))
≤1/γ(q-F(â))^1/β+1(q-F(â))
=1/γ(q-F(â))^β+2/β+1,
where the second inequality applies the definition of clustered distributions.
By (<ref>) and (<ref>), we know L(â)-L(a^*)≤1/γ(log(2/δ)/2n)^β+2/2β+2.
On the other hand, when â>a^*, we derive from (<ref>) that
L(â)-L(a^*)
≤lim_a→â^-(a-a^*)(F(a)-q)
≤lim_a→â^-1/γ(F(a)-q)^1/β+1(F(a)-q)
=1/γlim_a→â^-(F(a)-q)^β+2/β+1,
where the first inequality follows from properties of the Riemann integral, and the second inequality applies the definition of clustered distributions.
This is at most 1/γ(log(2/δ)/2n)^β+2/2β+2 by (<ref>) and (<ref>).
Therefore, we conclude that
L(â)-L(a^*)≤1/γ(log(2/δ)/2n)^β+2/2β+2
holds universally for all possible values of â and a^* when β∈[0,∞).
Case 2: β=∞.
By the assumption that the mean of the distribution is no more than 1, we have
∫_0^∞(1-F(z))dz≤1.
When â≤ a^*, we derive
∫_0^∞(1-F(z))dz
≥∫_â^a^*(1-F(z))dz
≥lim_a→ a^*-(a-â)(1-F(a))
≥(a^*-â)(1-q),
where the second inequality follows from properties of the Riemann integral, and the last inequality holds because F(a)<q for all a<a^*. This implies a^*-â≤1/1-q.
Substituting into (<ref>), we have
L(â)-L(a^*)
=∫_â^a^*(q-F(z))dz
≤(a^*-â)(q-F(â))
≤1/1-q√(log(2/δ)/2n)≤2/1-q√(log(2/δ)/2n),
where the second inequality applies (<ref>) and (<ref>).
On the other hand, when â>a^*, we similarly derive
∫_0^∞(1-F(z))dz
≥∫_a^*^â(1-F(z))dz
≥(â-a^*)lim_a→â^-(1-F(a)),
where the second inequality is by properties of the Riemann integral.
Applying (<ref>), we obtain
(â-a^*)lim_a→â^-(1-F(a))≤1.
Meanwhile, we have
lim_a→â^-F(a)
=lim_a→â^-(F(a)-F̂(a)+F̂(a))
≤sup_a≥0|F̂(a)-F(a)|+lim_a→â^-F̂(a)
≤√(log(2/δ)/2n)+q,
where the second inequality follows from (<ref>) and the fact that F̂(a)<q for all a<â.
This is at most (1-q)/2+q=(1+q)/2 by the assumption that n≥2log(2/δ)/(1-q)^2.
Substituting back into (â-a^*)lim_a→â^-(1-F(a))≤1, we derive â-a^*≤1/(1-(1+q)/2)=2/(1-q).
Substituting the final derivation into (<ref>), we get
L(â)-L(a^*)
=∫_â^a^*(q-F(z))dz
≤(â-a^*)lim_a→â^-(F(a)-q)
≤2/1-q√(log(2/δ)/2n),
where the first inequality follows from the properties of the Riemann integral, and the second inequality uses (<ref>) and (<ref>).
Therefore, we conclude that
L(â)-L(a^*)≤2/1-q√(log(2/δ)/2n)
holds when β=∞.
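To make the rates in the theorem concrete, the bounds can be evaluated numerically (a sketch we add for illustration; the parameter values below are arbitrary and the function is a direct transcription of the statement above):

import numpy as np

def additive_bound(n, delta, beta, gamma, zeta, q):
    # High-probability additive regret bound of the theorem above.
    if np.isinf(beta):
        assert n >= 2 * np.log(2 / delta) / (1 - q) ** 2
        return 2 / (1 - q) * np.sqrt(np.log(2 / delta) / (2 * n))
    assert n > np.log(2 / delta) / (2 * (gamma * zeta) ** (2 * beta + 2))
    return (1 / gamma) * (np.log(2 / delta) / (2 * n)) ** ((beta + 2) / (2 * beta + 2))

for n in [10**3, 10**4, 10**5]:   # beta = 0 decays like 1/n; beta = inf like 1/sqrt(n)
    print(n, additive_bound(n, 0.05, 0, 1.0, 0.4, 0.5),
          additive_bound(n, 0.05, np.inf, 1.0, 0.4, 0.5))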
We now upper-bound the multiplicative regret L(â)-L(a^*)/L(a^*) incurred by the SAA algorithm. For multiplicative regret and β<∞, we need the further assumption that F(a^*-ζ),F(a^*+ζ) are bounded away from 0,1 respectively, to prevent the denominator L(a^*) from becoming too small.
In contrast to <Ref>, the regret upper bound for β<∞ now depends additionally on parameters ζ and τ, and the regret upper bound for β=∞ now worsens when q is close to 0 or 1 (whereas before it only worsened when q is close to 1). This worsening when q is close to 0 or 1 has been shown to be necessary for multiplicative regret <cit.>.
Fix q∈(0,1) and β∈[0,∞],γ∈(0,∞),ζ∈(0,(min{q,1-q})^1/β+1/γ),τ∈(0,min{q,1-q}-(γζ)^β+1].
If β<∞, then whenever the number of samples satisfies n>log(2/δ)/2(γζ)^2β+2, we have
L(â)-L(a^*)/L(a^*)≤1/γζτ(log(2/δ)/2n)^β+2/2β+2
=O((log(1/δ)/n)^β+2/2β+2)
with probability at least 1-δ, for any δ∈(0,1) and any (β,γ,ζ)-clustered distribution satisfying F(a^*-ζ)≥τ, F(a^*+ζ)≤1-τ.
If β=∞, then whenever the number of samples satisfies n>log(2/δ)/2(min{q,1-q})^2, we have
L(â)-L(a^*)/L(a^*)≤2/(min{q,1-q}√(2n/log(2/δ))-1)
=O((log(1/δ)/n)^1/2)
with probability at least 1-δ, for any δ∈(0,1) and any distribution (with finite mean).
The β=∞ case was studied in <cit.>, who establish that n≥9/ϵ^2log(2/δ)/2(min{q,1-q})^2 samples is sufficient to guarantee a multiplicative regret at most ϵ, for ϵ≤1.
In order to make our error bound of 2/(min{q,1-q}√(2n/log(2/δ))-1) at most ϵ, we need n≥(2+ϵ)^2/ϵ^2log(2/δ)/2(min{q,1-q})^2, which always satisfies our condition of n>log(2/δ)/2(min{q,1-q})^2.
Therefore, the β=∞ case of our <Ref> can be viewed as an improvement over <cit.>, that holds for all ϵ>0, and moreover shows that a smaller constant is sufficient for ϵ≤ 1 (because (2+ϵ)^2/ϵ^2≤9/ϵ^2).
We note however that a better dependence on min{q,1-q} was established in <cit.> for ϵ≤1.
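The constants in this comparison are easy to tabulate (a small sketch we add for illustration; the function names and the choices q=0.8, δ=0.05 are ours):

import numpy as np

def n_ours(eps, delta, q):
    # n making 2/(min{q,1-q}*sqrt(2n/log(2/delta)) - 1) at most eps
    return (2 + eps) ** 2 / eps ** 2 * np.log(2 / delta) / (2 * min(q, 1 - q) ** 2)

def n_prior(eps, delta, q):
    # the 9/eps^2 sample complexity from the prior work cited above (eps <= 1)
    return 9 / eps ** 2 * np.log(2 / delta) / (2 * min(q, 1 - q) ** 2)

for eps in [0.1, 0.5, 1.0]:
    print(eps, n_ours(eps, 0.05, 0.8), n_prior(eps, 0.05, 0.8))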
For β∈[0,∞), we derive from (<ref>) that
L(a^*) =∫_0^a^*(1-q)F(z)dz+∫_a^*^∞ q(1-F(z))dz
≥∫_a^*-ζ^a^*(1-q)F(z)dz+∫_a^*^a^*+ζ q(1-F(z))dz
≥∫_a^*-ζ^a^*(1-q)F(a^*-ζ)dz+∫_a^*^a^*+ζ q(1-F(a^*+ζ))dz
≥∫_a^*-ζ^a^*(1-q)τ dz+∫_a^*^a^*+ζ qτ dz
=ζτ,
where the last inequality follows from the assumptions that F(a^*-ζ)≥τ and F(a^*+ζ)≤1-τ.
By <Ref>, we know that with probability at least 1-δ,
L(â)-L(a^*)≤1/γ(log(2/δ)/2n)^β+2/2β+2.
Therefore, we have that with probability at least 1-δ,
L(â)-L(a^*)/L(a^*)≤1/γζτ(log(2/δ)/2n)^β+2/2β+2.
The proof for β=∞ is deferred to <Ref>, due to similarities with <cit.>.
§ EXPECTATION UPPER BOUNDS
We first upper-bound the expected additive regret 𝔼[L(â)]-L(a^*) incurred by the SAA algorithm. In contrast to <Ref>, here our regret upper bound for β<∞ depends on all three parameters β,γ,ζ and holds for all values of n, and also requires the distribution to have mean at most 1. The regret upper bound for β=∞ still only has an inverse dependence on 1-q but not q.
Fix q∈(0,1) and β∈[0,∞],γ∈(0,∞),ζ∈(0,(min{q,1-q})^1/β+1/γ].
If β<∞, then we have
𝔼[L(â)]-L(a^*) ≤2/γ(1/β+1+1/√(e))(1/2√(n))^β+2/β+1+(q+1)/(n(γζ)^β+1)=O(n^-β+2/2β+2)
for any (β,γ,ζ)-clustered distribution with mean at most 1 and any number of samples n.
If β=∞, then we have
𝔼[L(â)]-L(a^*)
≤(1/√(e)+2)1/(1-q)√(n)
=O(n^-1/2)
for any distribution with mean at most 1 and any number of samples n.
For the β=0 and β=∞ cases, respective upper bounds of (n+q/(1-q))exp[-2n(γζ)^2]+(2/(1-q)+1/(2γ))1/n <cit.> and 4/((1-q)√(n)) <cit.> were previously known,
both also under the assumption that the distribution has bounded[<cit.> assumed an upper bound of μ on the mean, instead of normalizing it to 1. They also did not normalize the unit costs of understocking and overstocking to sum to 1. The bounds we compare with here are obtained by substituting μ=1, ρ=q, and b+h=1 into their bounds.] mean. Our upper bound for β=0 requires a less restrictive condition (based on clustered distributions) than the positive density condition in <cit.>.
Our upper bound for β=∞ has a better constant in front of 1/(1-q)√(n), because 1/√(e)+2<4.
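The 1/n rate for β=0 is easy to reproduce empirically (our own illustration; the Uniform(0,1) example and the closed-form regret below are assumptions made for this sketch, not part of the proof):

import numpy as np

rng = np.random.default_rng(1)

def expected_regret_uniform(n, q, trials=4000):
    # Monte Carlo estimate of E[L(a_hat)] - L(a*) for SAA under Uniform(0,1)
    # demand, which is (0, 1, min{q,1-q})-clustered with mean 1/2 <= 1.
    z = np.sort(rng.uniform(size=(trials, n)), axis=1)
    a_hat = z[:, int(np.ceil(n * q)) - 1]     # empirical q-quantile
    # For Uniform(0,1), a* = q and L(a) - L(a*) = (a - q)^2 / 2 in closed form.
    return np.mean((a_hat - q) ** 2 / 2)

for n in [100, 400, 1600]:    # each 4x increase in n should shrink the regret ~4x
    print(n, expected_regret_uniform(n, 0.7))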
We first consider the case where β=∞, and then the case where β∈[0,∞).
Case 1: β=∞.
Let a'=inf{a:F(a)≥ q+(1-q)/(2√(n))}. We know a'≥ a^* from the definition of a^*=inf{a:F(a)≥ q}. Therefore, we derive from (<ref>) that
𝔼[L(â)]-L(a^*)
=∫_0^a^*(q-F(z))ℙ[F̂(z)≥ q]dz+∫_a^*^∞(F(z)-q)ℙ[F̂(z)<q]dz
=∫_0^a^*(q-F(z))ℙ[F̂(z)≥ q]dz+
∫_a^*^a'(F(z)-q)ℙ[F̂(z)<q]dz+
∫_a'^∞(F(z)-q)ℙ[F̂(z)<q]dz.
We note that if z<a^*, then q-F(z)>0 by definition of a^*, and we have
ℙ[F̂(z)≥ q]
=ℙ[F̂(z)-F(z)≥ q-F(z)]
≤exp(-2n(q-F(z))^2),
where the inequality follows from Hoeffding's inequality <cit.>. Otherwise if z≥ a^*, then F(z)-q≥0 by definition of a^*, and we have
ℙ[F̂(z)< q]
=ℙ[F(z)-F̂(z)> F(z)-q]
≤exp(-2n(F(z)-q)^2),
where the inequality again applies Hoeffding's inequality. So the first two terms in (<ref>) sum up to
∫_0^a^*(q-F(z))ℙ[F̂(z)≥ q]dz+∫_a^*^a'(F(z)-q)ℙ[F̂(z)<q]dz
≤∫_0^a'|q-F(z)|exp(-2n|q-F(z)|^2)dz
≤a'/2√(en),
where the last inequality holds because the function g(x)=xe^-2nx^2 is at most 1/2√(en) for all x≥0: indeed, g'(x)=e^-2nx^2(1-4nx^2) vanishes at x=1/2√(n), where g attains its maximum value e^-1/2/2√(n)=1/2√(en).
Meanwhile, we derive
∫_0^∞ (1-F(z))dz
≥∫_0^a' (1-F(z))dz
≥lim_a→ a'^- a(1-F(a))
≥ a'(1-q-(1-q)/(2√(n)))
≥a'(1-q)/2,
where the second inequality follows from properties of the Riemann integral, the third inequality holds because F(a)<q+(1-q)/(2√(n)) for all a<a', and the last inequality holds for every positive integer n. Following the assumption that the mean of the distribution is no more than 1, we apply (<ref>) to deduce that a'≤2/1-q. Substituting this into (<ref>), we have
∫_0^a^*(q-F(z))ℙ[F̂(z)≥ q]dz+
∫_a^*^a'(F(z)-q)ℙ[F̂(z)<q]dz
≤1/(1-q)√(en).
For the third term in (<ref>), we have
∫_a'^∞(F(z)-q)ℙ[F̂(z)<q]dz
=∫_a'^∞(F(z)-q)ℙ[1/n∑_i=1^n𝟙(Z_i≤ z)<q]dz
=∫_a'^∞(F(z)-q)ℙ[1/nBin(n,F(z))< q]dz
=∫_a'^inf{a:F(a)=1}(1-F(z))·(F(z)-q)ℙ[1/nBin(n,F(z))< q]/1-F(z)dz
≤sup_F∈[q+(1-q)/(2√(n)),1)(F-q)ℙ[1/nBin(n,1-F)≥ 1-q]/1-F,
where Bin(n,F(z)) is a binomial random variable with parameters n and F(z), the second equality follows from the independence of samples, the third equality follows because ℙ[1/nBin(n,F(z))< q]=0 if F(z)=1, and the inequality uses ∫_a'^∞(1-F(z))dz≤∫_0^∞(1-F(z))dz≤1 by the assumption that the mean of the distribution is at most 1.
Consider a random variable X defined as 1/nBin(n,1-F). The expected value and variance of X are given by 𝔼[X]=1-F and Var(X)=F(1-F)/n respectively. By Chebyshev's inequality <cit.>, we obtain that for all F∈[q+(1-q)/(2√(n)),1),
ℙ[1/nBin(n,1-F)≥ 1-q]
=ℙ[X≥ 1-q]
≤ℙ[|X-(1-F)|≥ F-q]
≤F(1-F)/n(F-q)^2.
Plugging it into (<ref>), we have
∫_a'^∞(F(z)-q)ℙ[F̂(z)<q]dz
≤sup_F∈[q+(1-q)/(2√(n)),1)F/n(F-q)≤2/(1-q)√(n).
Combining (<ref>) and (<ref>), we have
𝔼[L(â)]-L(a^*)≤(1/√(e)+2)1/(1-q)√(n).
Case 2: β∈[0,∞).
We decompose 𝔼[L(â)]-L(a^*) into three separate parts as follows.
By (<ref>),
𝔼[L(â)]-L(a^*)
=∫_0^a^*(q-F(z))ℙ[F̂(z)≥ q]dz+∫_a^*^∞(F(z)-q)ℙ[F̂(z)<q]dz
=∫_0^a^*-ζ(q-F(z))ℙ[F̂(z)≥ q]dz+∫_a^*-ζ^a^*(q-F(z))ℙ[F̂(z)≥ q]dz
+∫_a^*^a^*+ζ(F(z)-q)ℙ[F̂(z)<q]dz+∫_a^*+ζ^∞(F(z)-q)ℙ[F̂(z)<q]dz
≤∫_0^a^*-ζ(q-F(z))ℙ[F̂(z)≥ q]dz
+∫_a^*-ζ^a^*+ζ|q-F(z)|exp[-2n|q-F(z)|^2]dz
+∫_a^*+ζ^∞(F(z)-q)ℙ[F̂(z)<q]dz,
where the inequality is by Hoeffding's inequality.
We then analyze (<ref>), (<ref>), and (<ref>) separately.
For (<ref>), similarly to the analysis of the third term in (<ref>) for the case where β=∞, we derive
∫_0^a^*-ζ(q-F(z))ℙ[F̂(z)≥ q]dz
=∫_0^a^*-ζ(q-F(z))ℙ[1/nBin(n,F(z))≥ q]dz
=∫_0^a^*-ζ(1-F(z))·(q-F(z))ℙ[1/nBin(n,F(z))≥ q]/1-F(z)dz
≤sup_F∈[0,q-(γζ)^β+1](q-F)ℙ[1/nBin(n,F)≥ q]/1-F
≤sup_F∈[0,q-(γζ)^β+1]F/n(q-F)
≤q/n(γζ)^β+1,
where the first inequality uses ∫_0^a^*-ζ(1-F(z))dz≤∫_0^∞(1-F(z))dz≤1 and F(a^*-ζ)≤ q-(γζ)^β+1 (by definition of clustered distributions), and the second inequality is by Chebyshev's inequality.
Similarly, for (<ref>) we have
∫_a^*+ζ^∞(F(z)-q)ℙ[F̂(z)<q]dz
=∫_a^*+ζ^∞(F(z)-q)ℙ[1/nBin(n,F(z))< q]dz
=∫_a^*+ζ^inf{a:F(a)=1}(1-F(z))·(F(z)-q)ℙ[1/nBin(n,F(z))< q]/1-F(z)dz
≤sup_F∈[q+(γζ)^β+1,1)(F-q)ℙ[1/nBin(n,1-F)≥ 1-q]/1-F
≤sup_F∈[q+(γζ)^β+1,1)F/n(F-q)
≤1/n(γζ)^β+1,
where the second equality follows because ℙ[1/nBin(n,F(z))< q]=0 if F(z)=1, the first inequality holds because ∫_a^*+ζ^∞(1-F(z))dz≤∫_0^∞(1-F(z))dz≤1 and F(a^*+ζ)≥ q+(γζ)^β+1 by definition of clustered distributions, and the second inequality follows from Chebyshev's inequality.
To analyze (<ref>), we need to consider two cases. When 1/2√(n)≥(γζ)^β+1, we know that ζ≤1/γ(1/2√(n))^1/β+1. Because the function g(x)=xe^-2nx^2 is at most 1/2√(en) for all x≥0, we obtain
∫_a^*-ζ^a^*+ζ|q-F(z)|exp[-2n|q-F(z)|^2]dz
≤∫_a^*-ζ^a^*+ζ1/2√(en)dz
=ζ/√(en)≤2/γ√(e)(1/2√(n))^β+2/β+1.
On the other hand, for the case where 1/2√(n)<(γζ)^β+1, we know that 1/γ(1/2√(n))^1/β+1<ζ. Therefore, we can decompose (<ref>) into the following three terms:
∫_a^*-ζ^a^*+ζ|q-F(z)|exp[-2n|q-F(z)|^2]dz
=∫_a^*-ζ^a^*-1/γ(1/2√(n))^1/β+1|q-F(z)|exp[-2n|q-F(z)|^2]dz
+∫_a^*-1/γ(1/2√(n))^1/β+1^a^*+1/γ(1/2√(n))^1/β+1|q-F(z)|exp[-2n|q-F(z)|^2]dz
+∫_a^*+1/γ(1/2√(n))^1/β+1^a^*+ζ|q-F(z)|exp[-2n|q-F(z)|^2]dz .
When z∈[a^*-ζ,a^*-1/γ(1/2√(n))^1/β+1], we have
|q-F(z)|
≥(γ|z-a^*|)^β+1≥(γ|a^*-(a^*-1/γ(1/2√(n))^1/β+1)|)^β+1
=1/2√(n),
where the first inequality follows from definition of clustered distributions. Meanwhile, because the function g(x)=xe^-2nx^2 is monotonically decreasing on the interval [1/2√(n),∞), we obtain
∫_a^*-ζ^a^*-1/γ(1/2√(n))^1/β+1|q-F(z)|exp[-2n|q-F(z)|^2]dz
≤∫_a^*-ζ^a^*-1/γ(1/2√(n))^1/β+1(γ|z-a^*|)^β+1exp[-2n(γ|z-a^*|)^2(β+1)]dz.
Similarly, we derive that
∫_a^*+1/γ(1/2√(n))^1/β+1^a^*+ζ|q-F(z)|exp[-2n|q-F(z)|^2]dz
≤∫_a^*+1/γ(1/2√(n))^1/β+1^a^*+ζ(γ|z-a^*|)^β+1exp[-2n(γ|z-a^*|)^2(β+1)]dz.
Therefore, we can sum (<ref>) and (<ref>) to get
(∫_a^*-ζ^a^*-1/γ(1/2√(n))^1/β+1+∫_a^*+1/γ(1/2√(n))^1/β+1^a^*+ζ)|q-F(z)|exp[-2n|q-F(z)|^2]dz
≤(∫_a^*-ζ^a^*-1/γ(1/2√(n))^1/β+1+∫_a^*+1/γ(1/2√(n))^1/β+1^a^*+ζ)
(γ|z-a^*|)^β+1exp[-2n(γ|z-a^*|)^2(β+1)]dz.
To simplify the integral, we let x denote |z-a^*|, which yields
(∫_a^*-ζ^a^*-1/γ(1/2√(n))^1/β+1+∫_a^*+1/γ(1/2√(n))^1/β+1^a^*+ζ)|q-F(z)|exp[-2n|q-F(z)|^2]dz
≤2∫_1/γ(1/2√(n))^1/β+1^ζ(γ x)^β+1exp[-2n(γ x)^2β+2]dx
=2∫_1/γ(1/2√(n))^1/β+1^ζ(γ x)^2β+1/(γ x)^βexp[-2n(γ x)^2β+2]dx
≤2/(1/2√(n))^β/β+1·exp[-2n(γ x)^2β+2]/2nγ(2β+2)|_ζ^1/γ(1/2√(n))^1/β+1
≤2/γ(β+1)(1/2√(n))^β+2/β+1,
where the last inequality holds because exp[-2n(γ x)^2β+2]|_ζ^1/γ(1/2√(n))^1/β+1≤exp[-2n(γ x)^2β+2]|_ζ^0≤1.
For (<ref>), because we have g(x)=xe^-2nx^2≤1/2√(en) for all x≥0, it follows that
∫_a^*-1/γ(1/2√(n))^1/β+1^a^*+1/γ(1/2√(n))^1/β+1|q-F(z)|exp[-2n|q-F(z)|^2]dz
≤∫_a^*-1/γ(1/2√(n))^1/β+1^a^*+1/γ(1/2√(n))^1/β+11/2√(en)dz
=2/γ√(e)(1/2√(n))^β+2/β+1.
Combining the results (<ref>) and (<ref>), we know that under the case where 1/2√(n)<(γζ)^β+1,
∫_a^*-ζ^a^*+ζ|q-F(z)|exp[-2n|q-F(z)|^2]dz
≤2/γ(1/β+1+1/√(e))(1/2√(n))^β+2/β+1.
Note that this result is strictly greater than the result 2/γ√(e)(1/2√(n))^β+2/β+1 derived in (<ref>) for the case where 1/2√(n)≥(γζ)^β+1, so for any number of samples n, we have
∫_a^*-ζ^a^*+ζ|q-F(z)|exp[-2n|q-F(z)|^2]dz
≤2/γ(1/β+1+1/√(e))(1/2√(n))^β+2/β+1.
Finally, by combining the results (<ref>), (<ref>), and (<ref>), we conclude that
𝔼[L(â)]-L(a^*) ≤2/γ(1/β+1+1/√(e))(1/2√(n))^β+2/β+1+(q+1)/(n(γζ)^β+1)
when β∈[0,∞).
We now upper-bound the expected multiplicative regret incurred by the SAA algorithm. For multiplicative regret and β<∞, we need the further assumption that F(a^*-ζ),F(a^*+ζ) are bounded away from 0,1 respectively, to prevent the denominator L(a^*) from becoming too small.
Fix q∈(0,1) and β∈[0,∞], γ∈(0,∞), ζ∈(0,(min{q,1-q})^1/β+1/γ), τ∈(0,min{q,1-q}-(γζ)^β+1].
If β<∞, then we have
𝔼[L(â)]-L(a^*)/L(a^*)≤max{1/n(γζ)^β+1min{q,1-q},2/γζτ(1/β+1+1/√(e))(1/2√(n))^β+2/β+1}=O(n^-β+2/2β+2)
for any (β,γ,ζ)-clustered distribution satisfying F(a^*-ζ)≥τ, F(a^*+ζ)≤1-τ and any number of samples n.
If β=∞, then we have
sup_F:μ(F)<∞𝔼[L(â)]-L(a^*)/L(a^*) =max{sup_F∈(0,q)q-F/(1-q)Fℙ[1/nBin(n,F)≥ q],sup_F∈[q,1)F-q/q(1-F)ℙ[1/nBin(n,F) < q]}
for any number of samples n, where μ(F) denotes the mean of a distribution F, and Bin(n,F) is a binomial random variable with parameters n and F.
The β=∞ case was studied in <cit.>, who characterized the exact value of sup_F:μ(F)<∞𝔼[L(â)]-L(a^*)/L(a^*) (instead of merely providing an upper bound), showing it to equal the expression in (<ref>).
This expression is then shown to be O(n^-1/2).
We derive the same expression using a shorter proof that bypasses their machinery, although their machinery has other benefits such as deriving the minimax-optimal policy (which is not SAA).
We note that an exact analysis of the worst-case expected additive regret sup_F:μ(F)<∞(𝔼[L(â)]-L(a^*)) is also possible, even in a contextual setting <cit.>, but our simplification does not appear to work there.
For β∈[0,∞), we begin by using the same decomposition of 𝔼[L(â)]-L(a^*) as in the proof of <Ref>. By (<ref>),
𝔼[L(â)]-L(a^*)
≤∫_0^a^*-ζ(q-F(z))ℙ[F̂(z)≥ q]dz
+∫_a^*-ζ^a^*+ζ|q-F(z)|exp[-2n|q-F(z)|^2]dz
+∫_a^*+ζ^∞(F(z)-q)ℙ[F̂(z)<q]dz
≤∫_0^a^*-ζ(q-F(z))ℙ[F̂(z)≥ q]dz+2/γ(1/β+1+1/√(e))(1/2√(n))^β+2/β+1+∫_a^*+ζ^∞(F(z)-q)ℙ[F̂(z)<q]dz,
where the last inequality follows from (<ref>).
We similarly decompose L(a^*) into three terms as follows. By (<ref>),
L(a^*)
=∫_0^a^* (1-q)F(z) dz+∫_a^*^∞ q(1-F(z))dz
=∫_0^a^*-ζ (1-q)F(z) dz+(∫_a^*-ζ^a^* (1-q)F(z) dz+∫_a^*^a^*+ζ q(1-F(z))dz)+∫_a^*+ζ^∞ q(1-F(z))dz
≥∫_0^a^*-ζ (1-q)F(z) dz+τζ+∫_a^*+ζ^∞ q(1-F(z))dz,
where the last inequality applies (<ref>) given the assumption that F(a^*-ζ)≥τ, F(a^*+ζ)≤1-τ.
Therefore, we have
𝔼[L(â)]-L(a^*)/L(a^*)
≤∫_0^a^*-ζ(q-F(z))ℙ[F̂(z)≥ q]dz+2/γ(1/β+1+1/√(e))(1/2√(n))^β+2/β+1+∫_a^*+ζ^∞(F(z)-q)ℙ[F̂(z)<q]dz/∫_0^a^*-ζ (1-q)F(z) dz+τζ+∫_a^*+ζ^∞ q(1-F(z))dz
≤max{∫_0^a^*-ζ(q-F(z))ℙ[F̂(z)≥ q]dz/∫_0^a^*-ζ (1-q)F(z) dz,2/γζτ(1/β+1+1/√(e))(1/2√(n))^β+2/β+1,∫_a^*+ζ^∞(F(z)-q)ℙ[F̂(z)<q]dz/∫_a^*+ζ^∞ q(1-F(z))dz}
≤max{sup_F∈(0,q-(γζ)^β+1](q-F)ℙ[1/nBin(n,F)≥ q]/(1-q)F,
2/γζτ(1/β+1+1/√(e))(1/2√(n))^β+2/β+1,
sup_F∈[q+(γζ)^β+1,1)(F-q)ℙ[1/nBin(n,F) < q]/q(1-F)},
where the last inequality uses F(a^*-ζ)≤ q-(γζ)^β+1 and F(a^*+ζ)≥ q+(γζ)^β+1 from the definition of clustered distributions.
Next we analyze the maximum of the first and third terms in (<ref>). We derive
max{sup_F∈(0,q-(γζ)^β+1](q-F)ℙ[1/nBin(n,F)≥ q]/(1-q)F,
sup_F∈[q+(γζ)^β+1,1)(F-q)ℙ[1/nBin(n,F) < q]/q(1-F)}
≤max{sup_F∈(0,q-(γζ)^β+1]1-F/n(1-q)(q-F),sup_F∈[q+(γζ)^β+1,1)F/nq(F-q)}
≤max{1/n(1-q)(γζ)^β+1,1/nq(γζ)^β+1}
=1/n(γζ)^β+1min{q,1-q},
where the first inequality follows from Chebyshev's inequality.
Substituting this into (<ref>), we have
𝔼[L(â)]-L(a^*)/L(a^*)≤max{1/n(γζ)^β+1min{q,1-q},2/γζτ(1/β+1+1/√(e))(1/2√(n))^β+2/β+1}.
The proof for β=∞ is deferred to <Ref>, because it is simplifying an existing result from <cit.>.
§ ADDITIVE LOWER BOUND
We now lower-bound the additive regret of any (possibly randomized) data-driven algorithm for Newsvendor, showing it to be Ω(n^-β+2/2β+2) with probability at least 1/3. This implies that the expected additive regret is also Ω(n^-β+2/2β+2). The lower bound for multiplicative regret is similar, with the main challenge being to modify the distributions to satisfy F(a^*-ζ)≥τ,F(a^*+ζ)≤ 1-τ, so we defer it to <Ref>.
Fix q∈(0,1) and β∈[0,∞],γ∈(0,∞),ζ∈(0,(min{q,1-q})^1/β+1/γ].
Any learning algorithm based on n samples makes a decision with additive regret at least
1/8max{γ,1}(q(1-q)/3√(n))^β+2/β+1=Ω(n^-β+2/2β+2)
with probability at least 1/3 on some (β,γ,ζ)-clustered distribution that takes values in [0,1].
Let C=q(1-q)/3, H=1/max{γ,1}(C/√(n))^1/β+1. Consider two distributions P and Q, whose respective CDF functions F_P and F_Q are:
F_P(z) =
0, z∈(-∞, 0)
q+zC/H√(n), z∈[0,H)
1, z∈[H,∞);
F_Q(z) =
0, z∈(-∞, 0)
q+zC/H√(n)-C/√(n), z∈[0,H)
1, z∈[H,∞).
We let L_P(a) and L_Q(a) denote the respective expected loss functions under true distributions P and Q, and from the CDF functions, it can be observed that the respective optimal decisions are a_P^*=0 and a_Q^*=H. We now show that any learning algorithm based on n samples will incur an additive regret at least 1/8max{γ,1}(q(1-q)/3√(n))^β+2/β+1 with probability at least 1/3, on distribution P or Q.
Establishing validity of distributions.
First we show that both P and Q are (β,γ,ζ)-clustered distributions.
Note that the constraint ζ∈(0,(min{q,1-q})^1/β+1/γ] ensures that any z∈[a^*-ζ,a^*+ζ] with F(z)=0 or F(z)=1 would satisfy (<ref>); therefore it suffices to verify (<ref>) on z∈[0,H) for both P and Q. For distribution P, which has a^*=0, we have
|F_P(z)-q|
=zC/H√(n)
=z/H(max{γ,1} H)^β+1
=zmax{γ,1}^β+1H^β
>(γ z)^β+1
=(γ|z-0|)^β+1
for all z∈[0,H), where the second equality follows from C/√(n)=(max{γ,1} H)^β+1 and the inequality applies H>z. Therefore P is a (β,γ,ζ)-clustered distribution.
It can be verified by symmetry that Q is also a (β,γ,ζ)-clustered distribution.
In addition, it can be observed directly that
lim_z→ H^- F_P(z)=q+q(1-q)/3√(n) ≤ q+1-q/3=1-2/3 (1-q) < 1
F_Q(0)=q-q(1-q)/3√(n) ≥ q-q/3=2/3 q > 0
which ensures the monotonicity of the CDFs for P and Q.
Finally, it is easy to see that H≤1, and hence both distributions P and Q take values in [0,1].
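For concreteness, here is a minimal sketch (assuming numpy; the helper name is ours) of how one could simulate draws from P by inverting its CDF; Q is analogous, with the linear segment shifted down by C/√n:

```python
import numpy as np

def sample_P(size, q, C, H, n, rng=np.random.default_rng(0)):
    # P has point mass q at 0, a uniform density C/(H sqrt(n)) on (0, H),
    # and the remaining mass 1 - q - C/sqrt(n) at H.
    u = rng.random(size)
    out = np.full(size, H)
    out[u < q] = 0.0
    mid = (u >= q) & (u < q + C / np.sqrt(n))
    out[mid] = (u[mid] - q) * H * np.sqrt(n) / C  # invert the linear segment
    return out
```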
Upper-bounding the probabilistic distance between P and Q.
We analyze the squared Hellinger distance between distributions P and Q, denoted as H^2(P,Q). Because P and Q only differ in terms of their point masses on 0 and H, standard formulas
for Hellinger distance yield
H^2(P,Q)
=1/2((√(q)-√(q-C/√(n)))^2+(√(1-q-C/√(n))-√(1-q))^2)
=1/2(q+q-C/√(n)-2√(q(q-C/√(n)))+1-q+1-q-C/√(n)-2√((1-q)(1-q-C/√(n))))
=1/2(-2C/√(n)+2q-2q√(1-C/q√(n))+2(1-q)-2(1-q)√(1-C/(1-q)√(n)))
≤1/2(-2C/√(n)+2q(C/2q√(n)+C^2/2q^2n)+2(1-q)(C/2(1-q)√(n)+C^2/2(1-q)^2n))
=1/2(C^2/qn+C^2/(1-q)n)
=C^2/2nq(1-q),
where the inequality follows from applying 1-√(1-x)≤x/2+x^2/2, ∀ x∈[0,1].
We note that we are substituting in x=C/q√(n) and x=C/(1-q)√(n), which are at most 1 because C=q(1-q)/3.
Let P^n denote the distribution for the n samples observed by the algorithm under distribution P, and let Q^n denote the corresponding distribution under Q. Let TV(P^n,Q^n) denote the total variation distance between P^n and Q^n. By a relationship[Some sources such as <cit.> use a tighter upper bound of √(2H^2(P^n,Q^n)(1-H^2(P^n,Q^n)/2)) on TV(P^n,Q^n) (noting that their definition of H^2(P,Q) also differs, by not having the coefficient "1/2" in (<ref>)). The weaker upper bound of √(2H^2(P^n,Q^n)) as used in <cit.> will suffice for our purposes.] between the total variation distance and Hellinger distance,
we have
TV(P^n,Q^n)≤√(2H^2(P^n,Q^n)),
which is at most √(2nH^2(P,Q)) according to the additivity of the Hellinger distance.
By applying (<ref>), we obtain
TV(P^n,Q^n)
≤C/√(q(1-q))
=√(q(1-q))/3 ≤1/3.
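This chain of bounds is easy to verify numerically — a minimal check with illustrative values of q and n, assuming numpy:

```python
import numpy as np

q, n = 0.3, 500
C = q * (1 - q) / 3
h2 = 0.5 * ((np.sqrt(q) - np.sqrt(q - C / np.sqrt(n))) ** 2
            + (np.sqrt(1 - q - C / np.sqrt(n)) - np.sqrt(1 - q)) ** 2)
assert h2 <= C ** 2 / (2 * n * q * (1 - q))   # the bound derived above
assert np.sqrt(2 * n * h2) <= 1 / 3           # hence TV(P^n, Q^n) <= 1/3
```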
Lower-bounding the expected regret of any algorithm.
Fix any (randomized) algorithm for data-driven Newsvendor, and consider the sample paths of its execution on the distributions P and Q side-by-side. The sample paths can be coupled so that the algorithm makes the same decision for P and Q on an event E of measure 1-TV(P^n,Q^n)≥ 2/3, by definition of total variation distance.
Letting A_P,A_Q be the random variables for the decisions of the algorithm on distributions P,Q respectively,
we have that A_P and A_Q are identically distributed conditional on E.
Therefore, either [A_P≥H/2|E]=[A_Q≥H/2|E]≥1/2 or [A_P≤H/2|E]=[A_Q≤H/2|E]≥ 1/2.
First consider the case where [A_P≥H/2|E]=[A_Q≥H/2|E]≥1/2.
Note that if A_P≥H/2, then we can derive from (<ref>) that under the true distribution P,
L_P(A_P) - L_P(a^*_P)
≥∫_0^H/2(F_P(z)-q)dz
=∫_0^H/2 zC/H√(n) dz
=CH/8√(n)
=1/8max{γ,1}(q(1-q)/3√(n))^β+2/β+1.
Therefore, we would have
[L_P(A_P)-L_P(a^*_P)≥1/8max{γ,1}(q(1-q)/3√(n))^β+2/β+1]
≥[A_P≥H/2]
≥[A_P≥H/2|E][E]≥1/2·2/3=1/3.
Now consider the other case where [A_P≤H/2|E]=[A_Q≤H/2|E]≥ 1/2.
If A_Q≤H/2, then we can similarly derive from (<ref>) that under the true distribution Q,
L_Q(A_Q) - L_Q(a^*_Q)
≥∫_H/2^H(q-F_Q(z))dz
=∫_H/2^H(C/√(n)- zC/H√(n)) dz
=CH/8√(n)
=1/8max{γ,1}(q(1-q)/3√(n))^β+2/β+1.
The proof then finishes analogous to the first case.
§ β=∞ CASES OF MULTIPLICATIVE REGRET
§.§ High-probability Upper Bound
We prove <Ref> for β=∞.
To do so, we first analyze the case where ≤ a^*. We derive from (<ref>) that
L(a^*)
≥∫_^a^* (1-q)F(z) dz
≥(1-q)F()(a^*-)
≥(1-q)(q-sup_a≥0|(a)-F(a)|)(a^*-),
where the last inequality follows from (<ref>). By (<ref>) and the assumption that n>log(2/δ)/2(min{q,1-q})^2, we have q>√(log(2/δ)/2n)≥sup_a≥0|(a)-F(a)|.
This enables us to derive from (<ref>) that
L()-L(a^*)/L(a^*) =∫_^a^* (q-F(z)) dz/L(a^*)
≤(a^*-)(q-F())/(1-q)(q-sup_a≥0|(a)-F(a)|)(a^*-)
≤sup_a≥0|(a)-F(a)|/q(1-q)-(1-q)sup_a≥0|(a)-F(a)|,
where the second inequality applies (<ref>).
For the case where >a^*, by (<ref>) and properties of the Riemann integral, we have
L(a^*)
≥∫_a^*^ q(1-F(z)) dz
≥lim_a→^-q(1-F(a))(a-a^*)
≥ q(1-q-sup_a≥0|(a)-F(a)|)(-a^*),
where the last inequality follows from (<ref>).
By (<ref>) and the assumption that n>log(2/δ)/2(min{q,1-q})^2, we have 1-q>√(log(2/δ)/2n)≥sup_a≥0|(a)-F(a)|.
Therefore, we derive from (<ref>) that
L()-L(a^*)/L(a^*) =∫_a^*^ (F(z)-q) dz/L(a^*)
≤lim_a→^-(a-a^*)(F(a)-q)/q(1-q-sup_a≥0|(a)-F(a)|)(-a^*)
≤sup_a≥0|(a)-F(a)|/q(1-q)-qsup_a≥0|(a)-F(a)|,
where the first inequality is by properties of the Riemann integral, and the last inequality uses (<ref>).
Combining (<ref>) and (<ref>), we conclude that
L()-L(a^*)/L(a^*) ≤sup_a≥0|(a)-F(a)|/q(1-q)-max{q,1-q}sup_a≥0|(a)-F(a)|
=sup_a≥0|(a)-F(a)|/max{q,1-q}(min{q,1-q} - sup_a≥0|(a)-F(a)|)
≤2sup_a≥0|(a)-F(a)|/min{q,1-q} - sup_a≥0|(a)-F(a)|
holds for both cases. Applying (<ref>), we have that with probability at least 1-δ,
L()-L(a^*)/L(a^*)≤2√(log(2/δ)/2n)/min{q,1-q}-√(log(2/δ)/2n)
=2/min{q,1-q}√(2n/log(2/δ))-1.
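To make the final bound concrete, a small helper of our own (assuming numpy) that evaluates it and checks the sample-size condition:

```python
import numpy as np

def high_prob_mult_regret_bound(n, q, delta):
    # 2 / (min(q, 1-q) sqrt(2n / log(2/delta)) - 1), valid when
    # n > log(2/delta) / (2 min(q, 1-q)^2).
    m = min(q, 1 - q)
    assert n > np.log(2 / delta) / (2 * m ** 2)
    return 2 / (m * np.sqrt(2 * n / np.log(2 / delta)) - 1)

print(high_prob_mult_regret_bound(10_000, 0.3, 0.05))  # ~0.095
```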
§.§ Expectation Upper Bound
We see from (<ref>) and (<ref>) that
[L()]-L(a^*) =∫_0^a^*(q-F(z))[(z)≥ q]dz+∫_a^*^∞(F(z)-q)[(z)<q]dz
L(a^*) =∫_0^a^* (1-q)F(z) dz+∫_a^*^∞ q(1-F(z))dz.
Hence for any distribution with finite mean,
[L()]-L(a^*)/L(a^*) =∫_0^a^*(q-F(z))[(z)≥ q]dz+∫_a^*^∞(F(z)-q)[(z)<q]dz/∫_0^a^* (1-q)F(z) dz+∫_a^*^∞ q(1-F(z))dz
≤max{∫_0^a^*(q-F(z))[(z)≥ q]dz/∫_0^a^* (1-q)F(z) dz,∫_a^*^∞(F(z)-q)[(z)<q]dz/∫_a^*^∞ q(1-F(z))dz}
=max{sup_F∈(0,q)q-F/(1-q)F[1/n(n,F)≥ q],sup_F∈[q,1)F-q/q(1-F)[1/n(n,F) < q]}.
This completes the upper bound on sup_F:μ(F)<∞[L()]-L(a^*)/L(a^*).
Next we show that this bound is tight. By symmetry, we assume the maximum is achieved at some F∈(0,q). Consider a Bernoulli distribution which takes the value 0 with probability F. Then we know that a^*=1, and the CDF of this distribution is
F(z)=
0, z<0
F, z∈[0,1)
1, z≥1.
So for this Bernoulli distribution, we derive from (<ref>) and (<ref>) that
[L()]-L(a^*)
=∫_0^1 (q-F)[(z)≥ q]dz
=(q-F)[1/n(n,F)≥ q]
L(a^*)
=∫_0^1(1-q)F dz
=(1-q)F.
This implies
[L()]-L(a^*)/L(a^*)=q-F/(1-q)F[1/n(n,F)≥ q],
which shows that (<ref>) is tight and that the supremum in sup_F:μ(F)<∞[L()]-L(a^*)/L(a^*) can be achieved by Bernoulli distributions.
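The tightness claim can be probed numerically. Below is a minimal sketch (assuming scipy; the helper name is ours) that evaluates the exact Bernoulli expression over a grid of F values:

```python
import numpy as np
from scipy.stats import binom

def bernoulli_ratio(n, q, F):
    # (q - F) P[Bin(n,F)/n >= q] / ((1 - q) F): the exact expected
    # multiplicative regret on the Bernoulli instance above (F < q).
    tail = binom.sf(int(np.ceil(n * q)) - 1, n, F)
    return (q - F) * tail / ((1 - q) * F)

n, q = 100, 0.5
print(max(bernoulli_ratio(n, q, F) for F in np.linspace(0.01, q - 0.01, 99)))
```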
§ MULTIPLICATIVE LOWER BOUND
We now lower-bound the multiplicative regret of any data-driven algorithm, showing it to be Ω(n^-β+2/2β+2) with probability at least 1/3, which implies also a lower bound of Ω(n^-β+2/2β+2) on the expected multiplicative regret.
Fix q∈(0,1) and β∈[0,∞],γ∈(0,∞),ζ∈(0,(min{q,1-q})^1/β+1/γ),τ∈[0,min{q,1-q}-(γζ)^β+1].
Any learning algorithm based on n samples makes a decision with multiplicative regret at least
1/16γζτ+8q(1-q)((q-τ)(1-q-τ)/3√(n))^β+2/β+1
=Ω(n^-β+2/2β+2)
with probability at least 1/3 on some (β,γ,ζ)-clustered distribution satisfying F(a^*-ζ)≥τ and F(a^*+ζ)≤1-τ.
Let C=(q-τ)(1-q-τ)/3, H=1/γ(C/√(n))^1/β+1.
Consider two distributions P and Q, whose respective CDF functions F_P and F_Q are:
F_P(z) =
0, z∈(-∞, 0)
τ, z∈[0,2ζ)
q+C(z-2ζ)/H√(n), z∈[2ζ,2ζ+H)
1-τ, z∈[2ζ+H,4ζ+H)
1, z∈[4ζ+H,∞);
F_Q(z) =
0, z∈(-∞, 0)
τ, z∈[0,2ζ)
q+C(z-2ζ-H)/H√(n), z∈[2ζ,2ζ+H)
1-τ, z∈[2ζ+H,4ζ+H)
1, z∈[4ζ+H,∞).
We let L_P(a) and L_Q(a) denote the respective expected loss functions under true distributions P and Q, and from the CDF functions, it can be observed that the respective optimal decisions are a_P^*=2ζ and a_Q^*=2ζ+H. We now show that any learning algorithm with n samples will incur a multiplicative regret at least 1/16γζτ+8q(1-q)((q-τ)(1-q-τ)/3√(n))^β+2/β+1 with probability at least 1/3, on distribution P or Q.
Establishing validity of distributions.
First we show that both P and Q are (β,γ,ζ)-clustered distributions.
For distribution P, which has a^*=2ζ, it suffices to verify (<ref>) on z∈[ζ,3ζ]. We split the interval into segments [ζ,2ζ) and [2ζ,3ζ].
When z is in the first segment, F_P(z)=τ, so
|F_P(z)-q|=q-τ≥(γζ)^β+1≥(γ |z-2ζ|)^β+1,
where the first inequality follows from τ∈(0,min{q,1-q}-(γζ)^β+1], verifying (<ref>).
When z is in the second segment, for the case where ζ<H, it suffices to verify (<ref>) on z∈[2ζ,2ζ+H). We have
|F_P(z)-q|
=C(z-2ζ)/H√(n)
=γ^β+1H^β(z-2ζ)
>(γ|z-2ζ|)^β+1,
where the second equality applies C/√(n)=(γ H)^β+1 and the inequality follows from H>z-2ζ, verifying (<ref>).
On the other hand, for the case where ζ≥ H, it remains to verify (<ref>) on z∈[2ζ+H,3ζ]. We have F_P(z)=1-τ, so
|F_P(z)-q|=1-τ-q
≥(γζ)^β+1≥(γ|z-2ζ|)^β+1,
where the first inequality follows from τ∈(0,min{q,1-q}-(γζ)^β+1], again verifying (<ref>).
Therefore P is a (β,γ,ζ)-clustered distribution.
It can be verified by symmetry that Q is also a (β,γ,ζ)-clustered distribution.
In addition, because C=(q-τ)(1-q-τ)/3, we obtain using the fact τ<q<1-τ that
lim_z→ (2ζ+H)^-F_P(z) =q+C/√(n)=q+(q-τ)(1-q-τ)/3√(n)<q+1-q-τ/3<1-τ≤1
F_Q(2ζ) =q-C/√(n)=q-(q-τ)(1-q-τ)/3√(n)>q-q-τ/3>τ≥0
which ensures the monotonicity of the CDFs for P and Q.
Finally, we have
F_P(ζ)=τ≤ F_Q(ζ+H)
F_P(3ζ)≤1-τ=F_Q(3ζ+H),
which ensures F(a^*-ζ)≥τ and F(a^*+ζ)≤1-τ for both P and Q.
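These verifications can also be checked numerically. A minimal sketch with illustrative parameter values of our own choosing, assuming numpy:

```python
import numpy as np

q, beta, gamma, zeta, tau, n = 0.5, 1.0, 1.0, 0.05, 0.1, 400
C = (q - tau) * (1 - q - tau) / 3
H = (C / np.sqrt(n)) ** (1 / (beta + 1)) / gamma

def F_P(z):
    # CDF of the lower-bound instance P defined above.
    if z < 0: return 0.0
    if z < 2 * zeta: return tau
    if z < 2 * zeta + H: return q + C * (z - 2 * zeta) / (H * np.sqrt(n))
    if z < 4 * zeta + H: return 1 - tau
    return 1.0

# Clustering around a* = 2 zeta: |F_P(z) - q| >= (gamma |z - a*|)^(beta+1).
for z in np.linspace(zeta, 3 * zeta, 1001):
    assert abs(F_P(z) - q) >= (gamma * abs(z - 2 * zeta)) ** (beta + 1) - 1e-12
assert F_P(zeta) >= tau and F_P(3 * zeta) <= 1 - tau
```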
Upper-bounding the probabilistic distance between P and Q.
We analyze the squared Hellinger distance between distributions P and Q. Because P and Q only differ in terms of their point masses on 2ζ and 2ζ+H, standard formulas for Hellinger distance yield
H^2(P,Q)
=1/2((√(q-τ)-√(q-τ-C/√(n)))^2+(√(1-q-τ-C/√(n))-√(1-q-τ))^2)
=1/2(-2C/√(n)+2(q-τ)-2(q-τ)√(1-C/(q-τ)√(n))+2(1-q-τ)-2(1-q-τ)√(1-C/(1-q-τ)√(n)))
≤1/2(-2C/√(n)+2(q-τ)(C/2(q-τ)√(n)+C^2/2(q-τ)^2n)+2(1-q-τ)(C/2(1-q-τ)√(n)+C^2/2(1-q-τ)^2n))
=1/2(C^2/(q-τ)n+C^2/(1-q-τ)n),
where the inequality follows from 1-√(1-x)≤x/2+x^2/2, ∀ x∈[0,1].
We note that we are substituting in x=C/(q-τ)√(n) and x=C/(1-q-τ)√(n), which are at most 1 because C=(q-τ)(1-q-τ)/3.
As in the analysis in the proof of <Ref>, we have
TV(P^n,Q^n)
≤√(2H^2(P^n,Q^n))
≤√(2nH^2(P,Q))
≤ C√(1/q-τ+1/1-q-τ)
=√((q-τ)(1-q-τ)(1-2τ))/3
≤1/3.
Lower-bounding the expected regret of any algorithm.
Fix any (randomized) algorithm for data-driven Newsvendor, and consider the sample paths of its execution on the distributions P and Q side-by-side. The sample paths can be coupled so that the algorithm makes the same decision for P and Q on an event E of measure 1-TV(P^n,Q^n)≥ 2/3, by definition of total variation distance.
Letting A_P,A_Q be the random variables for the decisions of the algorithm on distributions P,Q respectively,
we have that A_P and A_Q are identically distributed conditional on E.
Therefore, either [A_P≥2ζ+H/2|E]=[A_Q≥2ζ+H/2|E]≥1/2 or [A_P≤2ζ+H/2|E]=[A_Q≤2ζ+H/2|E]≥ 1/2.
First consider the case where [A_P≥2ζ+H/2|E]=[A_Q≥2ζ+H/2|E]≥1/2. By (<ref>), we have
L_P(a_P^*)
=∫_0^2ζ (1-q)τ dz+∫_2ζ^2ζ+H q(1-q-C(z-2ζ)/H√(n))dz+∫_2ζ+H^4ζ+H qτ dz
=2ζτ+q(1-q)H-qCH/2√(n)
<2ζτ+q(1-q)/γ,
where the inequality follows from H=1/γ(C/√(n))^1/β+1≤1/γ and qCH/2√(n)>0.
Note that if A_P≥2ζ+H/2, then we can derive from (<ref>) that under the true distribution P,
L_P(A_P) - L_P(a^*_P)
≥∫_2ζ^2ζ+H/2(F_P(z)-q)dz
=∫_2ζ^2ζ+H/2C(z-2ζ)/H√(n)dz
=CH/8√(n)
=1/8γ((q-τ)(1-q-τ)/3√(n))^β+2/β+1.
Therefore, we would have
L_P(A_P)-L_P(a^*_P)/L_P(a^*_P)
>1/16γζτ+8q(1-q)((q-τ)(1-q-τ)/3√(n))^β+2/β+1
with probability at least
[A_P≥2ζ+H/2]
≥[A_P≥2ζ+H/2|E][E]
≥1/2·2/3
=1/3.
Now consider the other case where [A_P≤2ζ+H/2|E]=[A_Q≤2ζ+H/2|E]≥ 1/2.
By (<ref>), we have
L_Q(a_Q^*)
=∫_0^2ζ(1-q)τ dz+∫_2ζ^2ζ+H(1-q)(q+C(z-2ζ-H)/H√(n))dz+∫_2ζ+H^4ζ+H qτ dz
=2ζτ+q(1-q)H-(1-q)CH/2√(n)
<2ζτ+q(1-q)/γ,
where the inequality follows from H≤1/γ and (1-q)CH/2√(n)>0.
If A_Q≤2ζ+H/2, then we can derive from (<ref>) that under the true distribution Q,
L_Q(A_Q) - L_Q(a^*_Q)
≥∫_2ζ+H/2^2ζ+H(q-F_Q(z))dz
=∫_2ζ+H/2^2ζ+HC(2ζ+H-z)/H√(n) dz
=CH/8√(n)
=1/8γ((q-τ)(1-q-τ)/3√(n))^β+2/β+1.
The proof then finishes analogous to the first case.
§ CONTINUOUS LOWER BOUND FOR β=0
We provide a self-contained lower bound of Ω(1/n) using two continuous distributions, which contrasts other lower bounds (see <cit.>) that use a continuum of continuous distributions.
Our two distributions are obtained by modifying those from <Ref> (which had point masses on 0 and H) to be continuous in the case where β=0.
Fix q∈(0,1) and γ∈(0,∞).
Any learning algorithm based on n samples makes a decision with additive regret at least
q^2(1-q)^2/72max{γ,1}n=Ω(1/n)
with probability at least 1/3 on some continuous distribution with density at least γ over an interval in [0,1].
Let C=q(1-q)/3, H=C/max{γ,1}√(n).
Fix η∈(0,min{(min{q,1-q}-C/√(n))/γ, q/3}].
Consider two distributions P and Q, whose respective CDF functions F_P and F_Q are:
F_P(z) =
0, z∈(-∞, 0)
q/ηz, z∈[0,η)
q+C/H√(n)(z-η), z∈[η,H+η)
q+C/√(n)+1-q-C/√(n)/η(z-H-η), z∈[H+η,H+2η)
1, z∈[H+2η,∞);
F_Q(z) =
0, z∈(-∞, 0)
q-C/√(n)/ηz, z∈[0,η)
q-C/√(n)+C/H√(n)(z-η), z∈[η,H+η)
q+1-q/η(z-H-η), z∈[H+η,H+2η)
1, z∈[H+2η,∞).
From the CDF functions, it can be observed that F_P(z), F_Q(z) are continuous, and that the respective optimal decisions are a_P^*=η and a_Q^*=H+η. We now show that any learning algorithm based on n samples will incur an additive regret at least q^2(1-q)^2/72max{γ,1}n with probability at least 1/3, on distribution P or Q.
It is easy to see that P and Q are continuous distributions with a positive density over the interval [0,H+2η], where H+2η≤ C+2q/3 ≤ 1. To see that this density is at least γ, we need to check that all of the slopes
q/η, C/H√(n), 1-q-C/√(n)/η, q-C/√(n)/η, 1-q/η
are at least γ. We first derive that C/H√(n)=max{γ,1}≥γ. We next derive that
q/η≥q-C/√(n)/η≥q-C/√(n)/(min{q,1-q}-C/√(n)/γ)≥γ.
Similarly, we derive that
1-q/η≥1-q-C/√(n)/η≥1-q-C/√(n)/(min{q,1-q}-C/√(n)/γ)≥γ
which completes the verification that P and Q have density at least γ over an interval in [0,1].
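A minimal numerical check of the slope conditions, with illustrative parameter values of our own choosing (assuming numpy):

```python
import numpy as np

q, gamma, n = 0.4, 0.5, 250
C = q * (1 - q) / 3
H = C / (max(gamma, 1) * np.sqrt(n))
eta = min((min(q, 1 - q) - C / np.sqrt(n)) / gamma, q / 3)

slopes = [q / eta,
          C / (H * np.sqrt(n)),               # = max(gamma, 1)
          (1 - q - C / np.sqrt(n)) / eta,
          (q - C / np.sqrt(n)) / eta,
          (1 - q) / eta]
assert min(slopes) >= gamma and H + 2 * eta <= 1
```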
We next analyze the squared Hellinger distance between P and Q. Because the PDF's of P and Q only differ on [0,η) and [H+η,H+2η), standard formulas for Hellinger distance yield
H^2(P,Q)
=1/2∫_0^η(√(q/η)-√(q-C/√(n)/η))^2dz+1/2∫_H+η^H+2η(√(1-q/η)-√(1-q-C/√(n)/η))^2dz
=1/2(q+q-C/√(n)-2√(q(q-C/√(n)))+1-q+1-q-C/√(n)-2√((1-q)(1-q-C/√(n)))).
Note that this is the same as (<ref>). Therefore, following the analysis in the proof of <Ref>, we conclude that TV(P^n,Q^n)≤1/3.
Fix any (randomized) algorithm for data-driven Newsvendor, and consider the sample paths of its execution on the distributions P and Q side-by-side. The sample paths can be coupled so that the algorithm makes the same decision for P and Q on an event E of measure 1-TV(P^n,Q^n)≥ 2/3, by definition of total variation distance.
Letting A_P,A_Q be the random variables for the decisions of the algorithm on distributions P,Q respectively,
we have that A_P and A_Q are identically distributed conditional on E.
Therefore, either [A_P≥H/2+η|E]=[A_Q≥H/2+η|E]≥1/2 or [A_P≤H/2+η|E]=[A_Q≤H/2+η|E]≥ 1/2.
First consider the case where [A_P≥H/2+η|E]=[A_Q≥H/2+η|E]≥1/2.
Note that if A_P≥H/2+η, then we can derive from (<ref>) that under the true distribution P,
L_P(A_P) - L_P(a^*_P)
≥∫_η^H/2+ηC/H√(n)(z-η) dz
=C^2/8max{γ,1}n
=q^2(1-q)^2/72max{γ,1}n.
Therefore, we would have
[L_P(A_P)-L_P(a^*_P)≥q^2(1-q)^2/72max{γ,1}n]
≥[A_P≥H/2+η]
≥[A_P≥H/2+η|E][E]≥1/2·2/3=1/3.
Now consider the other case where [A_P≤H/2+η|E]=[A_Q≤H/2+η|E]≥ 1/2.
If A_Q≤H/2+η, then we can similarly derive from (<ref>) that under the true distribution Q,
L_Q(A_Q) - L_Q(a^*_Q)
≥∫_H/2+η^H+η(C/√(n)- C/H√(n)(z-η)) dz
=C^2/8max{γ,1}n
=q^2(1-q)^2/72max{γ,1}n.
The proof then finishes analogous to the first case.
Creating a Microstructure Latent Space with Rich Material Information for Multiphase Alloy Design

Xudong Ma, Yuqi Zhang, Chenchong Wang, Ming Wang, Mingxin Huang, Wei Xu
§ ABSTRACT
The intricate microstructure serves as the cornerstone for the
composition/processing-structure-property (CPSP) connection in multiphase alloys. Traditional alloy design methods often overlook microstructural details, which diminishes the reliability and effectiveness of the outcomes. This study introduces an improved alloy design algorithm that integrates authentic microstructural information to establish precise CPSP relationships. The approach utilizes a deep-learning framework based on a variational autoencoder to map real microstructural data to a latent space, enabling the prediction of composition, processing steps, and material properties from the latent space vector. By integrating this deep learning model with a specific sampling strategy in the latent space, a novel, microstructure-centered algorithm for multiphase alloy design is developed. This algorithm is demonstrated through the design of a unified dual-phase steel, and the results are assessed at three performance levels. Moreover, an exploration into the latent vector space of the model highlights its seamless interpolation ability and its rich material information content. Notably, the current configuration of the latent space is particularly advantageous for alloy design, offering an exhaustive representation of microstructure, composition, processing, and property variations essential for multiphase alloys.
Keywords: Microstructure; Latent space; Alloy design; Variational autoencoder.
Graphical abstract: (figure omitted)
§ INTRODUCTION
In the realm of alloy design, the paramount goal is to engineer novel materials suitable for targeted applications. Central to this is the elucidation of composition/processing–structure–property (CPSP) relationships. These relationships, traditionally derived from experimental data, such as the Hall–Petch relationship linking grain size to yield strength <cit.>, or the influence of finely dispersed second phases on strength augmentation <cit.>, underscore the critical role of microstructure. The complexity and heterogeneity of multiphase alloys, however, necessitate a refined understanding of these microstructural features. In such cases, conventional CPSP relationships often fall short, as they may not adequately reflect the nuanced and often nonlinear interactions within the microstructure of these alloys.
In contrast to conventional methods, machine learning (ML) techniques can be used to model complex and nonlinear relationships, which is fundamental for their applications in alloy design <cit.>. Previous studies <cit.> in this area have demonstrated that ML models that use the composition and process parameters of alloys as model inputs, can predict mechanical properties. These models were then combined with heuristic optimization algorithms to search for optimal solutions and develop novel alloys. Although these ML-based methods are fast and relatively simple <cit.>, they overlook the key role of the microstructure. To address this limitation, the incorporation of microstructural information is crucial. Some microstructural metrics, such as the volume fraction of a certain phase <cit.>, can be used as additional input to the ML model to improve the prediction accuracy of the mechanical properties. However, relying solely on simple metrics might be insufficient for alloy systems with complex microstructures.
Complex microstructures can often be represented as images using scanning electron microscopy. Hence, the use of deep learning methods to process microstructural images and extract key features is important for alloy design. Generative models, such as the generative adversarial network (GAN) <cit.> and variational autoencoder (VAE) <cit.>, offer new solutions for alloy design <cit.>. These models typically perform unsupervised learning on training images to output low-dimensional representations and link them to the compositions, processes, or properties relevant to alloy design. For instance, Cao et al. <cit.> applied a conditional GAN to extract the microstructural features of Ti–6Al–4V and established a relationship between the process and microstructure. However, their model required process parameters and random vectors as inputs to generate the microstructure, making it impossible to achieve an inverse processing design based on the microstructure. In addition, GAN models are associated with challenges during training, including pattern collapse <cit.>. Therefore, building a generative model with a stable training process and determining suitable input and output features are critical aspects of alloy design. Kusampudi et al. <cit.> applied the VAE to extract descriptors from the microstructure of synthetic dual-phase (DP) steels, built relationships between the descriptors and properties, and used Bayesian optimization to determine the best combination of descriptors. Similarly, Kim et al. <cit.> employed the VAE and Gaussian process regression to determine the optimal microstructure. These methods use microstructural images as inputs to avoid subjectivity when selecting the microstructural features. However, the relationship between the composition/process and the microstructure is missing in these approaches, as they were developed using synthetic microstructures. Consequently, designed microstructures with ideal properties may not be realized experimentally.
A unified DP (UniDP) steel is a type of unified steel <cit.> that aims to meet different performance requirements with a single-alloy composition and different process parameters. The use of this type of steel can significantly simplify the systematic issues encountered in automotive steel production and recycling. The microstructures of UniDP steels are mainly ferrite and martensite, and an appropriate balance between them is crucial for the success of the UniDP steel design. However, the complexity and diversity of microstructural morphology pose challenges in this regard. Hence, a microstructure-centered UniDP steel design is required.
In this study, we proposed a novel deep-learning algorithm specifically tailored for the design of multiphase alloys, emphasizing the pivotal role of microstructural characteristics in the alloy design process. Through VAE, we transformed microstructural images of dual-phase steels into a compact representation within a latent space, capturing the complex microstructural features of alloys. This latent space is then correlated with the alloy's composition, processing parameters, and mechanical properties to establish complete CPSP relationships. To emphasize the usefulness of the algorithm in the field of the design of multiphase alloy, we proposed a microstructure-centered design method for UniDP steels based on the principles of physical metallurgy (PM) and a specific sampling strategy within the latent space. Distinct from previous methods, it leverages experimental microstructures and complete CPSP connections, accelerating the design of UniDP steels. In addition, a visualization of the latent space demonstrated that integrating authentic microstructural details with precise CPSP linkages results in a continuously interpolated and information-rich mapping space. This space provides a robust foundation for the effective design and discovery of novel multiphase alloys, emphasizing the pivotal role of microstructural features in advancing the frontier of alloy design.
§ METHODOLOGY
§.§ Microstructure-centered alloy design framework
Our framework for alloy design is fundamentally rooted in the microstructure and reveals three distinct phases, as shown in Fig. <ref>. First, we obtained microstructural images from prior studies <cit.>. These images were subjected to binarization and data augmentation to enhance their quality and variability, providing the basis for accurate modeling. The core of our framework is a deep-learning architecture designed to establish robust CPSP connections. This architecture comprises a VAE network <cit.> with an encoder and a decoder supplemented by dual multilayer perceptrons (MLPs). Together they work to predict the composition, processing parameters, and properties of the alloy directly from an initial microstructural image. The final phase involved applying the CPSP relationships derived from our modeling for alloy design. We explored the latent vector space created by our deep learning model to identify promising alloy candidates. Our selection process was guided by the foundational principles of PM, which ensured that our choices were grounded in scientific reasoning. To validate our approach, experiments were conducted to test the properties of the designed UniDP steels.
In the initial phase, the creation of a dataset was crucial. Given the high costs associated with alloy production and testing, creating a dataset that accurately mirrors the structural and performance characteristics of DP steels is challenging. To overcome this limitation, we collected data on DP steels from highly cited papers <cit.>, including the chemical composition, heat treatment parameters, scanning electron microscope (SEM) images, and mechanical properties. We collected and preprocessed 22 samples (Supplementary Table <ref> and Supplementary Fig. <ref>). The heat treatment processes were divided into three distinct groups for ease of prediction. The microstructural images were converted into a binary format (with black representing ferrite and white representing martensite) based on the martensite volume fraction (MVF) reported in literature. Then, these images were normalized, cropped, and expanded to facilitate the incorporation of microstructural data into the model training phase and prevent model overfitting. Details of image preprocessing are shown in Section 2.2.1.
The second phase involved the development of a deep learning model. To overcome the lack of generalization in microstructural feature selection across various alloy systems, we devised a VAE-centric deep learning model (VAE–DLM) to establish a robust and comprehensive CPSP linkage. The key innovation of the model is the ability of the VAE to autonomously extract authentic microstructural features through iterative optimization, forming a latent space <cit.>. Meanwhile, the two MLPs, namely the MCP and MP models, bolster the representational capacity of the VAE by imposing regularization constraints derived from the composition, process, and property information of the material. The VAE–DLM was then trained using the DP steel dataset constructed in the initial phase.
In the final stage, the design process leveraged the CPSP relationships established in the second phase by probing the latent vector space of the VAE–DLM. This exploration yielded an array of latent vectors, each of which could be utilized by the VAE–DLM to predict the corresponding compositions, processes, properties, and microstructural images. The fundamental principles of PM provided guidance for the identification of potential UniDP steel candidates. Experiments were conducted to validate the design outcomes and confirm the feasibility of the microstructure-centered alloy design approach.
§.§ Details of the design framework
§.§.§ Preprocessing of microstructural images of literature data
The raw image data were composed of microstructural images of 22 steels with different chemical compositions. Initially, we cropped the images to eliminate extraneous parts (e.g., markers) and binarized them based on the MVF reported in literature. Then, each image was divided into four equally sized sub-images and all the data were partitioned into four groups, with each group containing one sub-image from each original image. To mitigate the overfitting problem, we applied an offline data augmentation technique: we randomly sampled each sub-image eight times at a pixel size of 224 × 224. Ultimately, we obtained 176 sub-images for each group, totaling 704 sub-images.
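A minimal sketch of this preprocessing step follows, assuming numpy. The convention that brighter pixels correspond to martensite, and the quantile-threshold rule for matching the reported MVF, are our assumptions; the helper name is illustrative:

```python
import numpy as np

def binarize_and_crop(image, mvf, rng, crop=224, n_crops=8):
    # Threshold so the white (martensite) area fraction matches the
    # reported MVF, then draw random 224x224 sub-images for augmentation.
    thresh = np.quantile(image, 1.0 - mvf)   # assumes brighter = martensite
    binary = (image >= thresh).astype(np.uint8)
    h, w = binary.shape
    return [binary[i:i + crop, j:j + crop]
            for i, j in zip(rng.integers(0, h - crop + 1, n_crops),
                            rng.integers(0, w - crop + 1, n_crops))]
```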
§.§.§ Ultimate tensile strength and uniform elongation criteria of UniDP
In general, the commercial performance standards for DP780, DP980, and DP1180 require ultimate tensile strength (UTS) values exceeding 780, 980, and 1180 MPa, respectively, along with corresponding total elongation (TEL) values exceeding 12%, 8%, and 5% <cit.>. However, the tensile plate specimen size used in the commercial performance standard (No. 5, JIS Z 2241 standard) differed from the plate specimen size used in this study, which is A25 (25 mm in length and 6 mm in width). Additionally, we used UTS and uniform elongation (UE) for evaluation during the UniDP steel design process, whereas the commercial performance standard provides no information about UE. Therefore, we needed to perform a standard conversion to obtain the TEL criteria under A25 conditions. We then collected complete data for commercial DP780, DP980, and DP1180, adjusted their TEL values to equivalent values under A25 conditions, and calculated the ratios of the adjusted TEL values to the corresponding UE values. We averaged these ratios for the three steel grades. Finally, we divided the TEL criteria under A25 conditions by this average to obtain the final UE criteria.
The TEL criteria for DP780, DP980, and DP1180 were 11.90%, 7.93%, and 4.96%, respectively, under A25 conditions (calculated using equations from the literature <cit.>). The ratios of the adjusted TEL values (computed using the same formulas <cit.>) to the corresponding UE values for the collected commercial DP780, DP980, and DP1180 were 1.63, 1.91, and 1.42, respectively, resulting in an average ratio of 1.65. Finally, the UE criteria for DP780, DP980, and DP1180 were 7.21%, 4.81%, and 3.01%, respectively.
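This conversion arithmetic can be reproduced in a few lines (using the values reported above; small rounding differences from the quoted criteria are expected because the listed ratios are themselves rounded):

```python
tel_criteria_a25 = {"DP780": 11.90, "DP980": 7.93, "DP1180": 4.96}  # TEL, %
avg_ratio = (1.63 + 1.91 + 1.42) / 3  # mean of adjusted-TEL / UE ratios
ue_criteria = {grade: tel / avg_ratio for grade, tel in tel_criteria_a25.items()}
print({g: round(v, 2) for g, v in ue_criteria.items()})
# ~ {'DP780': 7.2, 'DP980': 4.8, 'DP1180': 3.0}
```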
§.§.§ Evaluation metrics for the prediction
We used two metrics to assess the predictive ability of the model: squared correlation coefficient (R^2) and mean absolute error (MAE). These metrics can be calculated as follows:
R^2 = ( n∑_n^i=1 f (x_i ) y_i- ∑_n^i=1 f (x_i ) ∑_n^i=1 y_i)^2/ ( n∑_n^i=1 f (x_i )^2- ( ∑_n^i=1 f (x_i ) )^2 ) ( n∑_n^i=1 y_i^2- ( ∑_n^i=1 y_i )^2 )
MAE = 1/n∑_n^i=1 | f (x_i ) - y_i |
where n is the number of samples and f (x_i ) and y_i represent the predicted and experimental values of the i_th sample, respectively.
Given the limited data, we employed 4-fold cross-validation to evaluate the predictive performance of the model, which entailed using one group for testing and the remainder for training; this process was repeated four times, each with a different test group.
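For reference, a direct implementation of the two metrics as defined above (assuming numpy; the helper name is ours):

```python
import numpy as np

def r2_mae(pred, true):
    # Squared correlation coefficient and mean absolute error, following
    # the two equations above.
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    n = len(true)
    num = (n * (pred * true).sum() - pred.sum() * true.sum()) ** 2
    den = ((n * (pred ** 2).sum() - pred.sum() ** 2)
           * (n * (true ** 2).sum() - true.sum() ** 2))
    return num / den, np.abs(pred - true).mean()
```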
§.§ Experimental validation
§.§.§ Experimental validation of designed steel
The alloy with the designed chemical composition was smelted and forged into a steel ingot weighing approximately 50 kg, hot-rolled into slabs with a thickness of 3 mm, and then cooled in a furnace. Following the removal of the oxide layer by pickling, the slabs were cold-rolled to produce sheets with a thickness of 1.4 mm. The sheets were then heated to 900 ^∘ C for a duration of 5 min and quenched. Intercritical annealing regimes with varying temperatures and durations were subsequently employed to achieve the desired properties and microstructures (see Supplementary Table <ref>). Finally, the specimens were subjected to SEM characterization and performance testing using a JSM-7800F field-emission scanning electron microscope and a tensile testing machine, respectively. The tensile specimens, fabricated in alignment with the rolling direction of the plates, were prepared in accordance with the ASTM E8 standard with a gauge length of 25 mm and a width of 6 mm, termed A25.
§.§.§ Preprocessing of microstructural images from experimental validation
Each heat treatment process yielded 4 SEM images, totaling 24. We selected four experimental images, obtained at an annealing temperature of 815 ^∘ C and an annealing time of 3 min, as the validation results for DP1180. In addition, experimental images obtained at annealing temperatures of 765 ^∘ C and 715 ^∘ C, and an annealing time of 13 min, were chosen as the validation results for DP980 and DP780, respectively.
To facilitate comparison with the design results, the microstructural images of the experimental steels required preprocessing. Initially, the experimental images were binarized using the ImageJ software <cit.>. Based on the average scale of all the images in the dataset, the experimental images with a resolution of 1024×768 pixels were resized to 833×720 pixels to ensure the scale of experimental images was close to that of literature-collected images. Subsequently, all the experimental images were sequentially sampled with a sampling size of 224×224 pixels, resulting in 108 sub-images (36 sub-images each for DP780, DP980, and DP1180). Upon feeding these images into the VAE model, the corresponding latent vectors and generated images were computed.
§ RESULTS AND DISCUSSION
§.§ Construction of CPSP relationship based on VAE–DLM
The fundamental principle of the deep learning model applied to build the CPSP relationship, termed VAE–DLM (Fig. <ref>a), is to condense microstructural images into a low-dimensional representation in the latent space, which is then used for information fusion and alloy design. Specifically, the model integrates three key elements: a VAE and two MLP models.
The VAE, which is a generative model renowned for its latent representation learning, comprises an encoder and a decoder. The encoder employs a modified ResNet-18 <cit.> architecture, tailored to process grayscale images through a single-channel first layer. It maps the input data into a latent space, generating a mean and standard deviation that define a probability distribution. The decoder, which is a combination of transposed convolution, batch normalization, activation, and convolution layers, reconstructs the original input by sampling from the probability distribution of the encoder. The transposed convolutional layers of the decoder utilize a kernel size of 3×3, a stride of 2×2, and the same padding to preserve the spatial dimensions of the input. Both the MCP and MP models were composed of fully connected, batch normalized, activation and dropout layers. Each hidden layer has 512 neurons. Although the inputs of both models are vectors in the latent space, their outputs were distinctly different. The MCP model predicts the parameters related to the material composition and processing: carbon (C), manganese (Mn), intercritical annealing temperature, annealing time, and process type. In contrast, the MP model focuses on predicting the material properties: the UTS, UE, and MVF.
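To make the architecture concrete, a minimal PyTorch sketch follows. It reflects the components described above (single-channel ResNet-18 encoder, transposed-convolution decoder, and two MLP heads with 512-neuron hidden layers); the latent dimension, channel counts, dropout rate, and the split of the MCP output into four regression targets plus three process-type logits are our assumptions, not values from the paper:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class VAEDLM(nn.Module):
    """Sketch of the VAE-DLM: encoder + reparameterized latent + decoder,
    with MCP (composition/process) and MP (property) heads."""
    def __init__(self, latent_dim=64):  # latent_dim is our assumption
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.conv1 = nn.Conv2d(1, 64, 7, stride=2, padding=3, bias=False)
        backbone.fc = nn.Identity()           # outputs a 512-dim feature
        self.encoder = backbone
        self.to_mu = nn.Linear(512, latent_dim)
        self.to_logvar = nn.Linear(512, latent_dim)
        self.decoder_fc = nn.Linear(latent_dim, 256 * 7 * 7)
        up = [nn.Sequential(nn.ConvTranspose2d(c, c // 2, 3, 2, 1, output_padding=1),
                            nn.BatchNorm2d(c // 2), nn.ReLU())
              for c in (256, 128, 64, 32, 16)]  # 7x7 -> 224x224
        self.decoder = nn.Sequential(*up, nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())

        def head(out_dim):
            return nn.Sequential(nn.Linear(latent_dim, 512), nn.BatchNorm1d(512),
                                 nn.ReLU(), nn.Dropout(0.2), nn.Linear(512, out_dim))
        self.mcp = head(4 + 3)  # C, Mn, temperature, time + 3 process-type logits
        self.mp = head(3)       # UTS, UE, MVF

    def forward(self, x):                      # x: (B, 1, 224, 224)
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        recon = self.decoder(self.decoder_fc(z).view(-1, 256, 7, 7))
        return recon, mu, logvar, self.mcp(z), self.mp(z)
```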
The predictions of VAE–DLM are shown in Fig. <ref>b-d. R^2 for most of the test set outputs surpassed 80%, suggesting that the VAE–DLM had robust generalization capabilities and a high predictive accuracy. Considering that the VAE–DLM will be subsequently used for the alloy design of DP steels and that predicting the mechanical properties is a crucial step in performance-oriented alloy design, we further examined the prediction effects of UTS and UE, as shown in Fig. <ref>c, d. For the training set, nearly all the results aligned precisely on the line with a slope of 1 and showed small error bars. For the test set, most of the points were near the diagonal, although a few points in the high-UE region deviated. The MAEs for the UTS and UE were only 33.9 MPa and 1.35%, respectively, suggesting that the models adequately learned the relationship between the microstructure and performance.
In addition, we also compared our methodology with two different series of ML models: the first series of ML models takes the composition and process as inputs <cit.>, whereas the second takes composition, process, and MVF <cit.>. The comparative analysis demonstrated that our approach, which uses microstructural images as input, outperforms these models in accurately predicting the material properties (Supplementary Figs. <ref> and <ref>). This highlights the importance of incorporating actual microstructural data and establishing complete CPSP relationships to enhance the prediction accuracy of the mechanical properties.
§.§ UniDP steel design based on probability distribution sampling
The CPSP relationship, as clarified by the VAE–DLM, offers novel perspectives on the alloy composition and process design of UniDP steels. By sampling the latent space, we can derive the composition, process parameters, and properties of the new DP alloys. In the context of the VAE model, the feature vector 𝑍 in the latent space is contingent upon two important variables: the mean (μ) and the standard deviation (σ):
Z = μ + σ×ε
where ε adheres to a standard normal distribution during model training, denoted by ε ∼ 𝒩(0, 1). The 704 sub-images corresponded to the 704 𝑍 vectors. To investigate the impact of varying sampling ranges on the sampling results (refer to Supplementary Fig. <ref>), the standard deviation of ε was incrementally increased from 1 to 15. To ensure the validity of the results, each range was independently sampled 14,080 times, equating to 20 samples for each 𝑍 vector. The findings indicate that a higher standard deviation yields more design outcomes that satisfy the DP980 performance criteria (UE and UTS) but also leads to an increase in the unreasonable design results (i.e., negative values of the composition, process, or properties, or predicted martensite volume fractions exceeding 100%). Considering that a standard deviation of 10 for ε preserves an accuracy of over 90% (reasonable sample proportion) and covers a wide range of properties (as depicted in Fig. <ref>a and Supplementary Fig. <ref>), we chose this sampling result for this study and excluded any new unreasonable alloys.
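A minimal sketch of this sampling step (assuming numpy; the helper name is ours):

```python
import numpy as np

def sample_latents(mu, sigma, scale=10.0, per_vec=20, rng=None):
    # mu, sigma: (704, d) arrays of encoder outputs for the dataset images.
    # Draw Z = mu + sigma * eps with eps ~ N(0, scale^2), per_vec draws each.
    rng = rng or np.random.default_rng(0)
    eps = rng.normal(0.0, scale, size=(per_vec,) + mu.shape)
    return (mu[None] + sigma[None] * eps).reshape(-1, mu.shape[1])
```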
In Fig. <ref>a, the design alloys that meet the performance requirements of DP780, DP980, and DP1180, hereinafter referred to as the initially screened alloys, are denoted by blue boxes. These performance criteria require UTS values exceeding 780, 980, and 1180 MPa, and corresponding UE values surpassing 7.21%, 4.81%, and 3.01%, respectively. Figure <ref>b shows the compositional distribution of the initially screened alloys. The compositions of these alloys are primarily concentrated around C = 0.09 wt.% and Mn = 2.0 wt.%, suggesting this composition as a reference for the composition of our UniDP steel. It is widely recognized that DP steels typically belong to the C–Mn–silicon (Si) system (referring to the GB/T 20564.2-2017 standard). Hence, based on the distribution characteristics of Si in the training dataset collected from literature, we adopted a mode value of Si = 0.42 wt.%. For the heat treatment, we opted for quenching and intercritical annealing. This decision was informed by its frequent occurrence within the dataset and relative cost-effectiveness. The initially screened alloys (called secondary-screened alloys) that did not meet the above criteria for composition and process type were excluded. The initially screened alloys that meet the criteria (called candidate alloys) are used to determine the annealing temperature range for UniDP steels (refer to Supplementary Table <ref>). The annealing time was not designed because of the limited diversity within the training dataset.
Figure <ref>c presents the experimental mechanical properties of the designed UniDP steels. All the designed UniDP steels had the same chemical composition with C = 0.09 wt.%, Mn = 2.0 wt.%, and Si = 0.42 wt.%. Quenching and intercritical annealing were performed. The properties of DP780 and DP980 were attained at an annealing time of 13 min and annealing temperatures of 715 ^∘ C and 765 ^∘ C, respectively. The properties of DP1180 were attained at an annealing temperature of 815 ^∘ C and annealing times of 3, 6, 9, and 13 min. An alteration in the strength level corresponds to microstructural transformation, particularly in the MVF (refer to Supplementary Figs. <ref> and <ref>). Figure <ref>d shows a cost comparison of the elemental constituents of the experimental steels and commercial steels. Because of the reduced Mn content, the designed UniDP steels exhibited lower elemental costs than commercial steels. Simultaneously, the singular heat treatment process flow of the experimental steels can help reduce the production expenses of enterprises. This highlights the significant potential inherent in current methodologies for designing new high-quality alloys.
§.§ Comparison between candidate and experimental alloys
To assess the rationality of our alloy design process, we compared the latent vectors, generated microstructural images, and mechanical properties of the experimental alloys and candidate alloys. The experimental images were preprocessed (refer to Section 2.3.2) and fed into the encoder. The Manhattan distances were then calculated between the latent vectors of the experimental alloys and those of the candidate/secondary-screened alloys. We used the mean value, denoted by μ, of the encoder output as the latent vector for alloys. As depicted in Fig. <ref>a, compared with the secondary-screened alloys, the average distances between the vectors of the candidate alloys and those of the experimental alloys were generally smaller. This suggests that the experimental alloys were more similar to the candidate alloys. Subsequently, we compared the generated images and properties of the candidate and experimental alloys with the smallest vector distances. As shown in Fig. <ref>b–d, the generated images of both sets of alloys exhibit numerous morphological similarities and comparable properties, with the average relative errors for the UTS and UE being 5% and 19%, respectively. The experimental UniDP alloys resembled the candidate alloys closely, thereby providing a robust reference for UniDP steel design. This study integrated the actual microstructure, composition, and processing into the design, thereby effectively excluding unreasonable regions within the composition and processing space.
§.§ Exploratory data analysis of latent space
The latent vector space is formed by the probability distributions associated with all the images in the dataset. This space provided a qualitative framework for investigating the relationship between the attributes of alloys. The logic behind its construction directly influences the design results of the compositions, processes, and microstructures. Hence, it is necessary to visualize the latent space to reveal hidden correlations between attributes. In the following, we describe our methodologies and findings from the visualization.
First, we utilized the mean, denoted by μ, of the encoder output as the sampling result to mitigate randomness. The unsupervised t-SNE method <cit.> is then used to project the sampling vectors of the dataset images into a 2D space for visualization. Figure <ref>a shows the distribution of the microstructures following the t-SNE dimensionality reduction, with distinct morphologies being clearly delineated. From top left to bottom right, the microstructural images depict a progression. Initially, they present a combination of fibrous martensite and ferrite, followed by a mixture of blocky martensite and ferrite, and ultimately, they exhibit a structure composed mainly of martensite. These distinctly separated clusters suggest a robust correlation between the points in the t-SNE space and the microstructural images. This correlation was established within the constraints of the MP and MCP models, which encompass composition, process, and property information.
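A minimal sketch of this visualization step, assuming scikit-learn and matplotlib; the helper name and plotting details are ours:

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def tsne_plot(mu, color, label):
    # mu: (N, d) array of encoder means; color: a per-image quantity such
    # as Mn content, annealing temperature, UTS, or UE.
    emb = TSNE(n_components=2, random_state=0).fit_transform(mu)
    sc = plt.scatter(emb[:, 0], emb[:, 1], c=color, s=8)
    plt.colorbar(sc, label=label)
    plt.show()
```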
The t-SNE space illustrates the distribution of the Mn content, intercritical annealing temperature, and mechanical properties. As shown in Fig. <ref>b, c, most Mn-content and annealing-temperature intervals showed a significant enrichment. The potential link between the two factors clearly demonstrates the characteristics of the dataset. However, the link between a single factor and the microstructure is more ambiguous. This could be due to the interplay of other factors, such as the annealing time and C content, highlighting the intricate relationships between the composition/process and microstructure and the challenges associated with adjusting the microstructure of DP steels using the conventional trial-and-error method. The UTS and UE distributions of the DP steels are shown in Fig. <ref>d, e. The clusters demonstrated a distinct pattern, with the UTS and UE progressively increasing and decreasing, respectively, from the upper left to the lower right. This pattern aligns with the previously mentioned microstructural variations and is in accordance with material science principles, indicating that the model could effectively comprehend the microstructural and property characteristics of DP steels.
§.§ Continuous latent space
A continuous interpolation of the latent vector space is crucial to material design frameworks based on the VAE–DLM. This is primarily because of its ability to generate a wide array of plausible new data through interpolation or out-of-domain sampling within a certain range. Hence, it is essential to investigate the continuity of latent vector spaces. We selected a sample image and fed it into the encoder to obtain the mean (denoted by μ) and standard deviation (denoted by σ). Subsequently, we defined 𝑍 = μ + τσ and gradually increased the value of τ. Finally, the latent vector 𝑍 was fed into the decoder, MP, and MCP models to attain the generated image, UTS, and MVF, respectively.
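A minimal sketch of this traversal, written against the VAE–DLM sketch given earlier (so the attribute names are our assumptions):

```python
import numpy as np
import torch

def latent_traversal(model, image, taus=np.linspace(0.0, 10.0, 6)):
    # Move along Z = mu + tau * sigma and decode / predict at each step.
    model.eval()
    with torch.no_grad():
        h = model.encoder(image)              # image: (1, 1, 224, 224)
        mu, logvar = model.to_mu(h), model.to_logvar(h)
        sigma = torch.exp(0.5 * logvar)
        out = []
        for tau in taus:
            z = mu + float(tau) * sigma
            recon = model.decoder(model.decoder_fc(z).view(-1, 256, 7, 7))
            out.append((recon, model.mp(z), model.mcp(z)))
    return out
```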
Figure <ref> shows the alterations in the generated images, UTS, and MVF. The martensite phase (represented in white) within the generated image progressively expands toward the interior of the ferrite phase (depicted in black), transitioning from a fibrous to a blocky structure, and eventually coalescing into a structure predominantly composed of martensite. The corresponding MVF and UTS values increased incrementally. However, the growth rate of the MVF decelerated, while the growth rate of the UTS remained relatively constant, which might be attributed to an increase in the elemental content. Consequently, in the alloy design process, the design results corresponding to larger τ values might not satisfy the actual requirements when considering the issue of elemental cost. This further illustrates the significance of an appropriate sampling range for the successful design of UniDP steel. In conclusion, the latent vector space is well constructed, continuously interpolated, and can comprehensively consider the DP steel composition, process, properties, and microstructural information, thereby contributing to the UniDP steel design.
§ CONCLUSION
The design of alloys, particularly for multiphase alloys, presents formidable challenges in establishing accurate and complete CPSP links. To address this issue, we introduced a specialized deep learning model, namely the VAE–DLM, specifically for multiphase alloy systems and developed a novel alloy design framework centered on the microstructure. The framework merges advanced deep learning techniques with PM knowledge to create precise and robust CPSP connections from limited datasets. Moreover, this framework facilitates the rapid development of multiphase alloys using specific sampling strategies and PM knowledge.
The effectiveness of this design approach was demonstrated by the development of a new UniDP steel. We compared the experimental results of the UniDP steel with its design results in terms of latent vectors, generated microstructures, and performance, justifying the design results. Moreover, through visualization analysis, we found that the latent space generated by the VAE–DLM is both continuous and richly informative, which can aid in comprehending the intricate relationships between the material parameters and in providing robust guidance for the design of multiphase alloys.
This study represents a breakthrough in multiphase alloy design, stressing the critical incorporation of actual microstructural details into the CPSP framework to achieve high-fidelity correlations. In the future, we plan to extend our dataset and refine deep learning model to address more complex compositions of multiphase alloys, including quenching and partitioning steels. Moreover, ongoing research into the interpretative aspects of the latent space will improve the credibility of the framework in the design of multiphase alloys.
§.§ CRediT authorship contribution statement
Xudong Ma: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Validation, Writing – original draft, Writing – review and editing. Yuqi Zhang: Conceptualization, Formal analysis, Investigation, Methodology, Software, Writing – review & editing. Chenchong Wang: Conceptualization, Formal analysis, Project administration, Supervision, Validation, Visualization. Ming Wang: Formal analysis, Resources, Supervision, Writing – review & editing. Mingxin Huang: Formal analysis, Funding acquisition, Project administration, Visualization. Wei Xu: Formal analysis, Funding acquisition, Project administration, Supervision.
§.§ Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
§.§ Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
§.§ Code availability
The codes are available from the corresponding author upon reasonable request.
§.§ Acknowledgements
The research was supported by the National Key Research and Development Program of China (No. 2022YFB3707501), the National Natural Science Foundation of China (No. U22A20106 and No. 52304392). M.X. Huang acknowledged the support from Mainland-Hong Kong Joint Funding Scheme, Platform (MHP/064/20).
§.§ Appendix. Supplementary materials
See supplementary materials for more details.
10
RN282
E.O. Hall,
The Deformation and Ageing of Mild Steel: III Discussion of Results,
Proc. Phys. Soc. London, Sect. B 64 (1951) 747. 10.1088/0370-1301/64/9/303.
RN406
N.J. Petch,
The cleavage strength of polycrystals,
J. Iron Steel Inst. 174 (1953) 25-28.
RN288
Z. LU, S. JIANG, J. HE, J. ZHOU, W. SONG, Y. WU, H. WANG, X. LIU,
Second phase strengthening in advanced metal materials,
Acta Metall. Sin. 52 (2016) 1183-1198.
RN310
X. Geng, F. Wang, H.H. Wu, S. Wang, G. Wu, J. Gao, H. Zhao, C. Zhang, X. Mao,
Data‐driven and artificial intelligence accelerated steel material research and intelligent manufacturing technology,
MGE Advances 1 (2023) e10. 10.1002/mgea.10.
RN344
S. Han, C. Wang, Y. Zhang, W. Xu, H. Di,
Employing deep learning in non-parametric inverse visualization of elastic–plastic mechanisms in dual-phase steels,
MGE Advances 2 (2024) e29. https://doi.org/10.1002/mgea.29.
RN247
Q. Wei, B. Cao, H. Yuan, Y. Chen, K. You, S. Yu, T. Yang, Z. Dong, T.-Y. Zhang,
Divide and conquer: Machine learning accelerated design of lead-free solder alloys with high strength and high ductility,
npj Comput. Mater. 9 (2023) 201. 10.1038/s41524-023-01150-0.
RN248
L. Jiang, Z. Zhang, H. Hu, X. He, H. Fu, J. Xie,
A rapid and effective method for alloy materials design via sample data transfer machine learning,
npj Comput. Mater. 9 (2023) 26. 10.1038/s41524-023-00979-9.
RN249
H. Fu, H. Zhang, C. Wang, W. Yong, J. Xie,
Recent progress in the machine learning-assisted rational design of alloys,
Int. J. Miner. Metall. Mater. 29 (2022) 635-644. 10.1007/s12613-022-2458-8.
RN175
A. Molkeri, D. Khatamsaz, R. Couperthwaite, J. James, R. Arróyave, D. Allaire, A. Srivastava,
On the importance of microstructure information in materials design: PSP vs PP,
Acta Mater. 223 (2022) 117471. https://doi.org/10.1016/j.actamat.2021.117471.
RN173
C. Shen, C. Wang, X. Wei, Y. Li, S. van der Zwaag, W. Xu,
Physical metallurgy-guided machine learning and artificial intelligent design of ultrahigh-strength stainless steel,
Acta Mater. 179 (2019) 201-214. https://doi.org/10.1016/j.actamat.2019.08.033.
RN304
I.J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio,
Generative adversarial nets,
Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, Montreal, Canada, 2014, pp. 2672–2680.
RN302
D.P. Kingma, M. Welling,
Auto-Encoding Variational Bayes,
International Conference on Learning Representations, 2013.
RN187
Z. Li, W.T. Nash, S.P. O'Brien, Y. Qiu, R.K. Gupta, N. Birbilis,
cardiGAN: A generative adversarial network model for design and discovery of multi principal element alloys,
J. Mater. Sci. Technol. 125 (2022) 81-96. https://doi.org/10.1016/j.jmst.2022.03.008.
RN188
Z. Pei, K.A. Rozman, Ö.N. Doğan, Y. Wen, N. Gao, E.A. Holm, J.A. Hawk, D.E. Alman, M.C. Gao,
Machine-Learning Microstructure for Inverse Material Design,
Supplemental Materials for
Creating a Microstructure Latent Space with Rich Material Information for Multiphase Alloy Design
Xudong Ma^a, Yuqi Zhang^a, Chenchong Wang^a, *, Ming Wang^b, *, Mingxin Huang^b, *, Wei Xu^a, *
^a State Key Laboratory of Rolling and Automation, Northeastern University, Shenyang, Liaoning 110819, China
^b Department of Mechanical Engineering, The University of Hong Kong, Hong Kong, China
* Corresponding Author
This supplementary material includes:
Supplementary Figure <ref>. Details of data on dual-phase (DP) steels gathered from literature.
Supplementary Figure <ref>. Mean absolute error for two series of machine learning models on ultimate tensile strength (UTS) and uniform elongation (UE).
Supplementary Figure <ref>. Comparison of mechanical property prediction capabilities among existing methods, and the CPP and CPMP methods.
Supplementary Figure <ref>. Trends in the number of alloys meeting DP980 performance requirements and the percentage of reasonable samples (accuracy), with an increasing standard deviation of the normal distribution ε.
Supplementary Figure <ref>. Tensile curves of alloys under varying intercritical annealing process parameters.
Supplementary Figure <ref>. SEM micrographs of samples of DP steels inter-critically annealed at different annealing temperatures and times.
Supplementary Table <ref>. Output distribution in the current dataset.
Supplementary Table <ref>. Composition and intercritical annealing process details for various designed DP steels.
Supplementary Table <ref>. Composition and intercritical annealing process details for the experimental alloys.
Supplementary Note 1. Training strategy.
Supplementary Reference
Supplementary Note 1. Training strategy.
The VAE–DLM has a total loss that can be expressed as follows:
L = L_KL + L_CE + 300 ( L_CPP + L_CP )
where L_KL represents the Kullback–Leibler divergence, and L_CE denotes the cross-entropy loss. L_CPP encompasses the regression losses of C, Mn, annealing temperature, annealing time, and the classification loss of the process category, whereas L_CP includes the regression losses of the UTS, UE, and MVF. The model was optimized over 3500 epochs, using a batch size of 64 within the PyTorch framework [6]. To achieve an optimal balance between prediction accuracy and image reconstruction quality, we employed a step learning rate decay strategy with an initial learning rate of 0.001, coupled with the Adam optimization algorithm [7]. The decay strategy reduced the learning rate to 0.9 times its current value every 50 epochs.
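A minimal PyTorch sketch of this training schedule is given below for concreteness; the model class, the data loader, and the per-term loss outputs are placeholders, and only the hyperparameters quoted above (3500 epochs, batch size 64, Adam at 0.001, decay to 0.9 times the current rate every 50 epochs, weight 300 on L_CPP + L_CP) are taken from the note.

import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR

model = VAE_DLM()                        # placeholder for the VAE-DLM architecture
optimizer = Adam(model.parameters(), lr=0.001)
scheduler = StepLR(optimizer, step_size=50, gamma=0.9)   # 0.9x decay every 50 epochs

for epoch in range(3500):
    for batch in train_loader:           # placeholder DataLoader with batch_size=64
        optimizer.zero_grad()
        l_kl, l_ce, l_cpp, l_cp = model(batch)     # assumed per-term loss outputs
        loss = l_kl + l_ce + 300 * (l_cpp + l_cp)  # L = L_KL + L_CE + 300(L_CPP + L_CP)
        loss.backward()
        optimizer.step()
    scheduler.step()                     # advance the step decay once per epoch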
Supplementary Reference
1. J. Platt, Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods, Advances in Large Margin Classifiers 10 (1999) 61-74.
2. L. Breiman, Random Forests, Mach. Learn. 45 (2001) 5-32. 10.1023/A:1010933404324.
3. G.E. Hinton, Connectionist learning procedures, Artif. Intell. 40 (1989) 185-234. https://doi.org/10.1016/0004-3702(89)90049-0.
4. J.H. Friedman, Greedy Function Approximation: A Gradient Boosting Machine, Ann. Stat. 29 (2001) 1189-1232. 10.2307/2699986.
5. T. Chen, C. Guestrin, XGBoost: A Scalable Tree Boosting System, Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, California, USA, 2016, pp. 785-794.
6. A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Köpf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, S. Chintala, PyTorch: An Imperative Style, High-Performance Deep Learning Library, 2019.
7. D. Kingma, J. Ba, Adam: A Method for Stochastic Optimization, International Conference on Learning Representations, 2014.
Inherited non-invertible duality symmetries in quiver SCFTs
Riccardo Argurio, Andrés Collinucci, Salvo Mancani, Shani Meynet, Louan Mol, Valdo Tatitscheff
arXiv:2409.03694 [hep-th]
§ INTRODUCTION
In recent years, symmetries in the context of Quantum Field Theory (QFT) have received a new paradigmatic formulation as topological defects <cit.>. One of the most noticeable consequences of this paradigm is the existence of non-invertible symmetries. Such symmetries had already been described in the context of d=2 Rational Conformal Field Theories (RCFTs) <cit.> and the d=3 Topological Quantum Field Theories (TQFTs) related to them <cit.>. Instead of having a group like structure, non-invertible symmetries enjoy a ring-like one, a × b = ∑_i c_i, and therefore not all elements admit an inverse. Topological operators enjoying such fusion rules, despite not being invertible symmetries, can still be used to study RG-flows and Ward identities of QFTs, putting constraints on the dynamics of the system <cit.>.
In d=4, a particularly helpful playground to study these generalized symmetries is 𝒩=4 super Yang-Mills (SYM), which does in fact admit non-invertible symmetries. More precisely, 𝒩=4 SYM enjoys a duality group, SL(2,ℤ), that acts on the complexified gauge coupling τ_SYM via modular transformations. This is the so-called Montonen-Olive duality <cit.>. Remarkably, this SL(2,ℤ) action admits special values of τ_SYM, namely i and e^2π i/3, that are left invariant under a discrete subgroup: ℤ_2 and ℤ_3 respectively. At those fixed points of the conformal manifold, duality transformations become non-invertible symmetries of the theory <cit.>. The non-invertibility stems from the fact that the symmetry is actually the composition of a duality transformation with a topological manipulation that reverses the effect of the self-duality transformation on the gauge group, i.e. it relates the two different global variants <cit.> of the gauge algebra by gauging a 1-form symmetry. The literature on the subject is vast, for a sample see <cit.>.
A richer set of theories, constrained enough to be reliably studied, is given by the so-called class 𝒮 theories of <cit.>.
The data describing these theories is encoded in Riemann surfaces with marked points, and they admit, like 𝒩=4 SYM, a duality group given by the Mapping Class Group (MCG) of the surface. As one might expect, these theories also admit non-invertible symmetries precisely when the duality group preserves the couplings of the theory while altering the global structure of the gauge group, see for instance <cit.> for a study of some class S theories.
Symmetries, including non-invertible ones, are particularly interesting if they are preserved along an RG flow, since they can then constrain, or predict, some properties of the IR theory at the end of the flow. The simplest RG flows are those triggered by mass terms. In 𝒩=2 (and 𝒩=4) theories, the latter usually partially break supersymmetry to 𝒩=1. If the 𝒩=2 theory enjoys non-invertible symmetries, one can turn on mass terms that preserve them, and the IR theory is then expected to enjoy the same non-invertible symmetries. The analysis depends on whether the IR theory is gapped, or an SCFT. A class of gapped RG flows was considered in <cit.>. The case of the flows from some specific 𝒩=2 class S theories to 𝒩=1 SCFTs was also briefly considered in the same reference, see also <cit.>.
One of the aims of this paper is to systematically discuss duality symmetries, and the mass deformations preserving them, in two broad classes of =2 SCFTs, namely the A_n-1 and the D_n quiver gauge theories. Such class S theories have appeared in string/M-theory constructions: they can be alternatively seen to arise from D3-branes at ℂ^2/Γ×ℂ singularities in type IIB <cit.>, from D4-branes suspended between NS5-branes in type IIA <cit.>, and from M5-branes wrapping complex surfaces in M-theory <cit.>. All such descriptions are related to each other by string dualities.
Starting from the A_n-1 quiver gauge theory, in <ref> we describe the duality group in terms of the mapping class group of a complex torus with n unordered marked points, revisiting the analysis of <cit.>. This is best understood from the M-theory uplift. Using this result, it is possible to classify point configurations that are invariant under the duality group, leading to non-invertible symmetry defects <cit.>. We then generalize this construction to the D_n quiver gauge theories, which has been mostly discussed in the type IIA setting <cit.>. The M-theory uplift of this theory consists of M5-branes inserted as marked points on a quotiented torus, i.e. a pillowcase. The duality group is the mapping class group of this object, which we determine. We then proceed to study the presence of non-invertible defects in these models as well, again by finding which elements of the MCG fix the modular parameter and the marked points. Finally, despite not having a Riemann surface describing the E_n quiver theories,[See <cit.> for an attempt towards this goal.] what we learned from the other cases allows us to make an educated guess of the structure for their duality groups.
In <ref>, we turn our attention to mass deformations of A_n-1 and D_n quiver gauge theories, studying the action of the duality group on them. In both cases, an important distinction is made whether an overall mass parameter, the “global mass", is zero or not. In the former case, only permutations of points act on the masses, while in the latter, SL(2,ℤ) transformations act on them as well. This overall mass parameter plays an important role also in determining the moduli space of the SCFT that is supposed to exist in the IR of such RG flows, as we discuss in <ref>. Indeed, the moduli space of the starting theory is a three-fold algebraic variety given by the direct product of a Du Val type singular surface times ℂ. After the mass deformation, the flow brings the theory to another supersymmetric theory, whose moduli space is either a compound Du Val three-fold or a Du Val two-fold. The former (locally) is a non-trivial fibration of ℂ^2/Γ over ℂ, while the latter is just ℂ^2/Γ. A vanishing global mass triggers a deformation leading to three-dimensional moduli space, while a non-vanishing one leads to a two-dimensional one.
In <ref>, we finally turn our attention to mass deformations that preserve duality defects. We prove that such a deformation always exists for both the A_n-1 and D_n theories. We find all the non-invertible defect-preserving mass deformations for those theories, discussing some selected examples. Our main result is thus a characterization of 𝒩=1 SCFTs which enjoy non-invertible duality symmetries inherited from their 𝒩=2 parent SCFTs.
§ 𝒩=2 QUIVER SCFTS, DUALITIES AND SYMMETRIES
In this section, we first review the brane/geometric constructions that at low energy lead to affine 4d 𝒩=2 ADE-type quiver SCFTs, and then discuss the form of their duality groups. These theories are well known in the context of the AdS/CFT correspondence, where they are realized as the worldvolume theory of D3-branes probing Du Val surfaces, i.e. orbifolds of the form ℂ^2/Γ with Γ a finite subgroup of SU(2). Moreover, at least the A_n-1 and D_n quivers also admit a class 𝒮 realization, which is crucial for understanding their duality groups, as we will review shortly.
In the framework of theories of class 𝒮, 4d 𝒩=2 theories are engineered by wrapping M5-branes on a genus g Riemann surface with n marked points, to which we will also refer as punctures, with a partial topological twist <cit.>. Let Σ_g,n be the underlying smooth surface.[A smooth surface is a smooth manifold of real dimension two. We only consider surfaces of finite type. Smooth surfaces of finite type are entirely determined by their genus g and number of punctures n. A Riemann surface is a smooth surface endowed with a complex structure. In general there are many inequivalent complex structures with which a fixed smooth punctured surface can be endowed; more precisely, the real dimension of the Teichmüller space 𝒯(Σ_g,n) is 6g-6+2n. Given a Riemann surface in 𝒯(Σ_g,n), we interpret the punctures as marked points.] When the construction leads to an SCFT, the conformal manifold of the latter is the Teichmüller space 𝒯(Σ_g,n). By definition, 𝒯(Σ_g,n) is the space of all complex structures with which Σ_g,n can be endowed, up to diffeomorphims of Σ_g,n homotopic to the identity. The duality group of the theory is then embodied as the mapping class group MCG(Σ_g,n), which is the group of orientation-preserving diffeomorphims of Σ_g,n, modulo diffeomorphisms connected to the identity.[This can be described as the group of “large" diffeomorphisms modulo “small" ones, borrowing the usual gauge theory nomenclature for transformations that cannot or can, respectively, be deformed to the identity.] Indeed, the MCG does not change the physical properties of the configuration, but it acts non-trivially on 𝒯(Σ_g,n), relating different points of the conformal manifold <cit.>. We refer to <cit.> for a general introduction to class 𝒮 theories.
After computing the duality groups of A_n-1 and D_n quivers, and proposing a description for E_6,7,8 quivers, we discuss the interplay between dualities and 1-form symmetries, and compute the locus in the conformal manifold at which non-invertible symmetries are realized.
§.§ A_n-1 quivers from class 𝒮
We first consider 4d 𝒩=2 quivers gauge theories shaped like affine A_n-1 Dynkin diagrams, which we will refer to as A_n-1 theories for convenience. These consist of n SU(k) gauge factors with bifundamental hypermultiplets, as shown in <ref>. We denote the gauge factors as SU(k)_1, …, SU(k)_n, and for each i=1,…,n modulo n there is an adjoint field ϕ_i and a pair of chiral multiplets X_i,i+1, X_i+1,i in the representation ( (1)_i, (1)_i+1), ( (1)_i+1, (1)_i) of SU(k)_i×SU(k)_i+1. The superpotential is the minimal one compatible with 𝒩=2 supersymmetry, that is:
W_𝒩=2 = ∑_i=1^nϕ_i ( X_i,i+1X_i+1,i - X_i,i-1X_i-1,i) .
These theories can be realized in type IIA string theory as the worldvolume theory of a stack of k D4-branes suspended between n NS5-branes along a circle: these are the elliptic models of <cit.>. More precisely, one considers type IIA string theory in ℝ^1,3×ℝ^2_4,5× S^1_6×ℝ^3_7,8,9, with n NS5-branes extending along ℝ^1,3×ℝ^2_4,5 and k D4-branes along ℝ^1,3× S^1_6, all at the same point in ℝ^3_7,8,9, as shown in <ref>.
A cartoon of this brane setup in ℝ^2_4,5× S^1_6 is shown on the right hand side of <Ref>.
A dual description in type IIB string theory of this configuration is obtained as the worldvolume theory of a stack of k D3-branes transverse to ℂ^2/ℤ_n×ℂ. The two descriptions are related by T-duality along x^6 and decompactification.
Such theories are superconformal at any value of the gauge couplings. The inverse gauge coupling squared of a given node is proportional to the distance between the corresponding NS5-branes along x^6 <cit.>. More precisely, if the circle x^6 has length 2π R_6 and g_s denotes the string coupling, then:
1/g_i^2 = (x_i+1^6-x_i^6)/(8π g_s R_6) .
We are now interested in the uplift of such configurations to M-theory, where the relationship between the elliptic brane model and the class 𝒮 construction is manifest.
Let 2π R_10 be the length of the M-theory circle S^1_10. As emphasized in <cit.>, the metric on S^1_6× S^1_10 is not necessarily the product metric: the shift x^6→ x^6+2π R_6 can in general be accompanied by a shift x^10→ x^10+θ R_10, where θ is some angle. Let:
τ = θ/(2π) + i/(8π g_s) ,
so that the torus metric is the natural flat metric on the elliptic curve E_τ with modulus τ in the complex upper-half plane ℍ.
In the uplift to M-theory both D4 and NS5-branes become M5-branes: the former correspond to M5-branes wrapping the elliptic curve E_τ, whereas the latter are interpreted as boundary conditions for the worldvolume theory on the stack of k M5-branes at marked points on E_τ. Thus, this set-up can be reformulated in the class 𝒮 framework, where the Riemann surface is the elliptic curve E_τ with n marked points, with underlying smooth surface Σ_1,n. Let p_1,…p_n ∈ E_τ denote the positions of the marked points as in <ref>.
The mapping class group of Σ_1,n can be obtained from the one of the closed torus 𝕋^2 via the Birman exact sequence <cit.>, and it can be described explicitly as follows. We consider the universal cover of the curve E_τ together with the lifts of the marked points, as depicted on the right of <ref>. The elliptic curve E_τ can be presented as E_τ=ℂ/Λ, where Λ⊂ℂ is the lattice ℤ+τℤ. Let us fix a starting notation for the lifts of p_1,…p_n ∈ E_τ: p_1:=p_1^0,0,…,p_n:=p_n^0,0 denote the lifts in the fundamental parallelogram {0,1,τ,1+τ}, while those in the parallelogram {k+lτ,k+1+lτ,k+(l+1)τ,k+1+(l+1)τ} are denoted p_1^k,l,…,p_n^k,l. When the configuration of marked points on E_τ is generic, there is a way to label them that is suitable for the physical interpretation of the setup, which is such that Im(p_1)<Im(p_2)<…<Im(p_n). We will explain in which sense it is suitable for physics shortly.
The generators of the mapping class group are of three types:
* Mapping classes of the torus. The modular group SL(2,ℤ) acts as a change of basis for the lattice Λ. The action of the standard generators T and S is the following: T:(1,τ)→ (1,τ+1), whereas S encodes the combined operation (1,τ)→ (τ,-1)≃ (1,-1/τ). Because of the rescaling by 1/τ, the generator S acts non-trivially on the marked points: if p∈ℂ is a lift of a puncture then S p = p/τ. The action of S is depicted in <ref>.
* Deck transformations. They are defined as changing the choice of lifts of the marked points,[Here we use the standard expression “deck transformations" from the theory of coverings in a loose sense, as actual deck transformations would act on all marked points together. However, this terminology makes manifest the relation between the fundamental group of the Riemann surface and its braiding action on a given marked point.] and are generated by t_i^(1):p_i→ p_i+1 and t_i^(τ):p_i→ p_i+τ for i=1,…,n. The red arrows in <ref> depict the action of t_1^(τ). In other words after acting with a deck transformation the lifts denoted p_i, i=1,…,n, are not necessarily in the fundamental parallelogram {0,1,τ,1+τ} anymore.
* Permutations of the punctures. We describe these transformations in the universal cover of the torus; the generators are denoted s_i, i=1,…,n, where s_i for any i=1,…,n-1 exchanges p_i and p_i+1, whereas s_n exchanges p_n with p_1+τ = p_1^0,1. This specificity in the definition of s_n echoes in <ref> below.
Denoting p⃗ = (τ ; p_1, …, p_n), the generators of the mapping class group act as
S : p⃗ ⟼( -1/τ; p_1/τ, p_2/τ, …, p_n/τ) ,
T : p⃗ ⟼( τ + 1; p_1, p_2, …, p_n ) ,
t_i^(1) : p⃗ ⟼( τ; p_1, p_2, …, p_i + 1, …, p_n ) ,
t_i^(τ) : p⃗ ⟼( τ; p_1, p_2, …, p_i + τ, …, p_n ) ,
s_i : p⃗ ⟼( τ; p_1, p_2, …, p_i-1, p_i+1, p_i, p_i+2,…, p_n ) ,
s_n : p⃗ ⟼( τ; p_n - τ , p_2, …, p_1 +τ) ,
This transposes to the physical theory as follows. After lifting to M-theory, the complexified gauge couplings of each node of the quiver are recovered as differences in position of neighbouring marked points on E_τ <cit.>. Let
τ_i = p_i+1-p_i , i ≠ n
τ_n = τ + p_1 - p_n
where the seemingly special definition of τ_n follows from the fact that ∑τ_i = τ. The “physical" labelling of the punctures described above ensures that 4πIm(τ_i) = g_i^-2 >0 for all i, where g_i is the gauge coupling of the i-th gauge group of the quiver gauge theory.
Given <ref> and denoting τ⃗=(τ;τ_1,…,τ_n), one can recast the action of the generators of the MCG as
S : τ⃗→( -1/τ; τ_1/τ, τ_2/τ, …, τ_n/τ - 1 - 1/τ) ,
T : τ⃗→( τ + 1; τ_1, τ_2, …, τ_n + 1 ) ,
t_i^(1) : τ⃗→( τ; τ_1, τ_2, …, τ_i-1 + 1, τ_i - 1, …τ_n ) ,
t_i^(τ) : τ⃗→( τ; τ_1, τ_2, …, τ_i-1 + τ, τ_i - τ, …, τ_n ) ,
s_i : τ⃗→( τ; τ_1, τ_2, …, τ_i-1 + τ_i, -τ_i, τ_i+1 + τ_i, …, τ_n ) .
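As a quick consistency check, these transformations preserve the constraint ∑_i τ_i = τ: under S one finds
∑_i τ'_i = (∑_i τ_i)/τ - 1 - 1/τ = -1/τ = τ' ,
while under s_i the sum changes by τ_i - 2τ_i + τ_i = 0, and t_i^(1), t_i^(τ) shift τ_i-1 and τ_i by opposite amounts.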
All s_i together generate the affine Weyl group of type A_n-1, as is usual for brane configurations on a circle <cit.>. Together with the transformations t_i^(τ), they generate the group of automorphisms of the A_n-1 root system, that is, the co-central extension of the affine Weyl group of type A_n-1 by the group of outer automorphisms of the affine Lie algebra of type A_n-1. We refer to <cit.> for more details on affine Lie algebras and Weyl groups, here we simply discuss how the correspondence is achieved. Let us consider the vertical band of fundamental parallelograms containing the vertices {0,1,τ,1+τ}. The differences p_i^0,a-p_j^0,b where i,j=1,…,n and a,b∈ℤ, define the affine root lattice of type A_n-1. The standard positive simple roots are the τ_i defined in <ref>, where τ_n = τ_0 is the affine simple root and the shift by τ embodies the single imaginary root of the affine root system. The whole group of automorphisms of the lattice is generated by the s_i and t_i^(τ), for i=1,…,n.[Let us note that there are relations between the generators of the MCG, for example t_i^1=S^-1(t_i^τ)^-1S. Moreover, the outer automorphism ω of the A_n-1 algebra can be expressed as ω = t_1^(τ)s_1s_2… s_n-1.]
The description of the MCG in terms of the automorphisms of an affine root system will be used in <ref> to argue the structure of the duality group for theories for which an explicit class 𝒮 construction is not known.
§.§ D_n quivers from class 𝒮
We now turn to the description of the duality group of D_n quiver gauge theories. An example of such a quiver is shown on the left of <ref>. The gauge group of the theory is a product of (n-3) copies of SU(2k) and 4 copies of SU(k), there is an adjoint field ϕ_i for each gauge factor and there are bifundamental hypermultiplets X_i,j as to make the corresponding affine quiver of type D_n, and the superpotential required by 𝒩=2 supersymmetry
W_𝒩=2 = ∑_i=0,1ϕ_i X_i,2 X_2,i + ∑_j=n-1,nϕ_j X_j,n-2 X_n-2,j + ϕ_2 ( X_2,0 X_0,2 + X_2,1 X_1,2 + X_23 X_32)
+ ϕ_n-2( X_n-2,n-1 X_n-1,n-2 + X_n-2,n X_n,n-2 - X_n-2,n-3X_n-3,n-2)
+ ∑_l=3^n-3ϕ_l ( X_l,l+1X_l+1,l - X_l,l-1X_l-1,l) ,
where we refer to <ref> for the index conventions of the fields.
Let us discuss the type IIA brane setups that realize these theories, following <cit.> (see also <cit.>). The relevant configurations of branes are described in <ref>: 2k D4-branes suspended between n NS5-branes along a segment with endpoints ONS5^--planes, these last ones being orientifold-like 5-plane magnetically charged under the Neveu–Schwarz B_2 (and not under the Ramond–Ramond field C_2).[ONS5^--planes are obtained from type IIA O4^--planes by uplift to M-theory and compactification back to type IIA string theory on a transverse orbifolded circle S^1/ℤ_2 <cit.>.]
D4-branes end either on the NS5-brane closest to the ONS5^- plane or on its image, as shown in <ref>. In particular, the states corresponding to D4-branes stretching between this NS5-brane and its image are projected out by the orientifold <cit.>. This explains how one obtains the characteristic ends of the D_n quiver. Equivalently, one can bring an NS5-brane atop each ONS5^--plane; the worldvolume theory on this composite object is a 6d O(2)=ℤ_2⋉SO(2) gauge theory. D4-branes ending on such a composite object carry a charge ±1 for the O(2) gauge theory.[Another way to see this is discussed in <cit.>.] The superconformal configurations are those in which the stack of 2k D4-branes splits in two sub-stacks of k D4-branes at each half ONS5^--plane; equivalently half the D4-branes have charge + and the other half, charge -. This configuration realizes the D_n quiver theory as the worldvolume theory on the D4-branes.
Just as in the A_n-1 case, this quiver gauge theory can be obtained in Type IIB as the worldvolume theory of a stack of k D3-branes transverse to ℂ^2/Γ_D_n×ℂ, with Γ_D_n the corresponding finite subgroup of SU(2).
The uplift to M-theory of D4 and NS5-branes happens exactly as in the A_n-1 case, while the ONS5^- become OM5-planes inducing the involution I_C_3ℐ_5, where ℐ_5 acts by reversing the coordinates transverse to the OM5-plane and I_C_3 reverses the sign of the M-theory 3-form. The uplift yields M-theory on
ℝ^1,3×ℝ^2 ×(E_τ×ℝ^3)/ℤ_2 ,
where we have combined the x_6 direction and the M-theory circle S^1_10 into an elliptic curve E_τ as before, and with M5-branes either wrapping the torus (D4), becoming marked points (NS5) or OM5-planes located at the four fixed points on E_τ (ONS5^-) <cit.>.
As in the previous section, this construction allows to study the duality group of the theory in term of the MCG of the quotiented torus. To this end we now turn to the description of the quotient geometry, which will give us insight on how to construct the MCG.
§.§.§ Quotient surface and its universal cover
The action z↦-z on the elliptic curve ℂ/Λ, where Λ = ℤ+τℤ, has four fixed points in the standard fundamental parallelogram {0,1,τ,1+τ}:
ζ_A = 0, ζ_B = 1/2, ζ_C = τ/2 and ζ_D = (1+τ)/2 .
The quotient space 𝕋^2/_2 is topologically a sphere with four ℤ_2-orbifold points ζ_A,ζ_B,ζ_C and ζ_D, often dubbed pillowcase and depicted in <ref>. Let us denote a,b,c and d the homotopy classes of small loops around ζ_A,ζ_B,ζ_C and ζ_D respectively. Since these are ℤ_2-orbifold points, one has a^2=b^2=c^2=d^2=1. The (orbifold) fundamental group of the pillowcase is given by:
π_1 (𝕋^2/ℤ_2) = ⟨ a,b,c,d | a^2=b^2=c^2=d^2=1, ba=dc.⟩ .
This can be obtained as follows.
In the double cover 𝕋^2 of the space 𝕋^2/ℤ_2, we can define the reflections with respect to ζ_A, ζ_B, ζ_C and ζ_D respectively, as depicted in <ref>, which act as:
R_A : p↦ -p ,
R_B : p↦ 1-p ,
R_C : p↦τ-p ,
R_D : p↦ (1+τ)-p .
The fundamental group π_1(𝕋^2/ℤ_2) is generated by {R_A,R_B,R_C,R_D}, and one has R_B ∘ R_A (p) = R_D ∘ R_C (p) = p + 1 = t^(1)(p) and R_C ∘ R_A (p) = R_D ∘ R_B (p) = p + τ = t^(τ)(p).
Correspondingly, π_1(𝕋^2) embeds in π_1(𝕋^2/ℤ_2) as a subgroup of order 2.
§.§.§ The D_n-ality group
As in the A_n-1 case, we obtain the generators of the duality group in the D_n case by considering the lifts of the marked points on E_τ/ℤ_2, in the universal cover . Note that n marked points on E_τ/ℤ_2, none of which sits at an orbifold point, correspond to 2n marked points on E_τ consisting of n symmetric pairs with respect to the center of the fundamental cell.
One can choose one half of the fundamental cell, for example the bottom one as in <ref>, and label the lifts of the marked points sitting in it p_1,…,p_n.[We assume the configuration of marked points to be generic.] The shifts of these lifts by elements of the lattice are as before labeled p_i^k,l = p_i+k+lτ, with k,l∈ℤ. Last, for all i,k,l we let q_i^k,l=-p_i^k,l be the image of p_i^k,l reflected about the origin. There is a labeling adapted to physics, which is such that Im(p_1)<…<Im(p_n). This setup is shown in <ref>.
As before, the generators of the duality group are of three types:
* Modular group. The modular group SL(2,ℤ) acts as in the A_n-1 case:
𝒮 :(τ;p_1,…,p_n)↦(-1/τ;p_1/τ,…,p_n/τ) ,
𝒯 :(τ;p_1,…,p_n)↦(τ+1;p_1,…,p_n) .
* Deck transformations. The group of deck transformations of →𝕋^2/_2 is generated by {R_A,R_B,R_C,R_D}, hence each marked point is transformed as:
R_A,i : (τ;p_1,…,p_n)↦(τ;p_1,…,-p_i,…,p_n) ,
R_B,i : (τ;p_1,…,p_n)↦(τ;p_1,…,1-p_i,…,p_n) ,
R_C,i : (τ;p_1,…,p_n)↦(τ;p_1,…,τ-p_i,…,p_n) ,
R_D,i : (τ;p_1,…,p_n)↦(τ;p_1,…,(1+τ)-p_i,…,p_n) ,
with the same relations as the ones that {R_A,R_B,R_C,R_D} satisfy.
* Permutation of the punctures. Punctures can be permuted, with generators:
s_i : (τ;p_1,…, p_i, p_i+1,…,p_n)↦(τ;p_1,…,p_i+1,p_i,…,p_n)
for i=1,…,n-1. One could add an additional generator s_n which would exchange p_n and q_n^-1,-1, however s_n=R_D,n which we have already taken into account.
Similarly to what we did in the A_n-1 case, we now turn to the “physical" picture in which we discuss complexified gauge couplings instead of punctures, defined as follows:
τ_0 = p_2+p_1 ,
τ_i = p_i+1-p_i , i=1, … , n-1
τ_n =τ-p_n-p_n-1 .
Again, the physical labelling ensures that each of the τ_i has positive imaginary part. The couplings satisfy the following relation:
τ_0 + τ_1 + τ_n-1 + τ_n + 2 ∑_i=2^n-2τ_i = τ.
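This relation follows directly from <ref> by telescoping: indeed
τ_0 + τ_1 = 2p_2 , 2∑_i=2^n-2τ_i = 2(p_n-1-p_2) , τ_n-1 + τ_n = τ - 2p_n-1 ,
so that the three contributions sum to τ.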
Letting n_0=n_1=n_n-1=n_n=1 and n_i=2 for 2≤ i≤ n-2, this relation rewrites as
∑_i=0^n n_iτ_i = τ .
These weights are nothing but the Dynkin labels of the affine D_n Dynkin graph, or equivalently, the ranks of the nodes in the McKay graph corresponding to dihedral groups.
Using the action in the marked point basis and <ref> and denoting as before τ⃗=(τ;τ_0,…,τ_n), one finds that the action of the duality group in the coupling basis reads
S : τ⃗⟼(-1/τ;τ_0/τ,…,τ_n-1/τ,τ_n/τ-1-1/τ) ,
T : τ⃗⟼ (τ+1;τ_0,…,τ_n-1,τ_n+1) ,
and the action of a deck transformation acts as
R_I,i : τ⃗⟼
(τ; 2ζ_I+τ_1,τ_0-2ζ_I,…,τ_n) (i=1),
(τ; 2ζ_I-τ_1,2ζ_I-τ_0,τ_0+τ_1+τ_2-2ζ_I,…,τ_n) (i=2),
(τ; τ_0,…,2ζ_I-∑_k=0^i-1n_kτ_k+τ_i-1,-2ζ_I-∑_k=0^in_kτ_k+τ_i,…,τ_n) (3≤ i≤ n-2),
(τ; τ_0,…,2ζ_I+τ_n-2+τ_n-1+τ_n-τ, τ-τ_n-2ζ_I, τ-τ_n-1-2ζ_I) (i=n-1),
(τ; τ_0,…,2ζ_I+τ_n-τ, -2ζ_I+τ_n-1+τ) (i=n),
for I=A,B,C,D. The structure of these transformations can be written more economically by using the partial sums
P(i)=∑_k=0^in_kτ_k = 2 p_i+1 ,
in terms of which one obtains
R_I,i : τ⃗⟼
(τ;2ζ_I+P(1)-τ_0,-2ζ_I+P(1)-τ_1,τ_2,…,τ_n) (i=1),
(τ;2ζ_I-P(1)+τ_0,2ζ_I-P(1)+τ_1,-2ζ_I+P(2)-τ_2,…,τ_n) (i=2),
(τ;τ_0,…,2ζ_I-P(i-1)+τ_i-1,-2ζ_I+P(i)-τ_i,…,τ_n) (3≤ i≤ n-2),
(τ; τ_0,…,2ζ_I-P(n-2)+τ_n-2,-2ζ_I+P(n)-τ_n,-2ζ_I+P(n)-τ_n-1) (i=n-1),
(τ; τ_0,…,2ζ_I-P(n)+τ_n,-2ζ_I+P(n)+τ_n-1) (i=n) .
Finally, the permutations act on the couplings as
s_i : τ⃗⟼
(τ; τ_0 , - τ_1 , τ_2 + τ_1 , τ_3 , … , τ_n) (i=1),
(τ;τ_0 + τ_2 , τ_1 + τ_2 , - τ_2 , τ_3 + τ_2 , τ_4 , … , τ_n) (i=2),
(τ; τ_0, … , τ_i-1 + τ_i , - τ_i , τ_i+1 + τ_i , τ_i+2 , …τ_n) (3≤ i≤ n-2),
(τ; τ_0, … , τ_n-4 , τ_n-3 + τ_n-2 , - τ_n-2 , τ_n-1 + τ_n-2 , τ_n + τ_n-2 ) (i=n-2),
(τ; τ_0, … , τ_n-3 , τ_n-2 + τ_n-1 , - τ_n-1 , τ_n ) (i=n-1) .
The choice in <ref> makes manifest the relation between the couplings and the root lattice of the affine D_n algebra and indeed the above transformations generate the automorphisms of the affine root system.
This can be checked explicitly by writing the deck transformations associated with π_1(𝕋^2) in terms of the R_I generators. The set of t^(τ)_i, s_i, R_A and R_D can be matched with the generators of the automorphism group of the affine D_n algebra, comprising the Weyl group, see for example <cit.>.
§.§ Global variants and dualities
We now address higher form symmetries and duality symmetries that can arise in these quiver theories, discussing in particular how the mapping class group of the Riemann surface used to construct these theories plays a crucial role in both of these aspects.
Let us start by discussing the A_n-1 case. Recall that this theory can be obtained via a class 𝒮 construction, wrapping M5-branes on a torus with n punctures. If all the punctures are regular, one has a ℤ_k 1-form symmetry[More in general, the worldvolume theory of k M5-branes wrapping a Riemann surface Σ_g,n of genus g, with n regular punctures, has a ℤ_k^g 1-form symmetry.] <cit.>. This picture provides a clear understanding of how the duality group acts on the 1-form symmetry, as follows.
Indeed, the generators of the 1-form symmetry correspond to non-trivial homology 1-cycles of the Riemann surface <cit.>. For example, if the Riemann surface is an elliptic curve with underlying smooth surface the torus, the two generators of H_1(T^2,ℤ) correspond to the symmetry operators capturing the 1-form “electric" and “magnetic" symmetries, which are mutually dual. Correspondingly, the usual A and B cycles of the torus form a symplectic basis of H_1(T^2,ℤ) with respect to the intersection pairing. Because of this, the mapping class group of the surface can act non-trivially on global variants: any transformation which shuffles the generators of H_1(T^2,ℤ), will, in general, also change the global variant of the theory one considers.
The simplest example of this is 𝒩=4 𝔰𝔲(k) SYM, obtained by compactifying the 6d 𝒩=(2,0) SCFT of type A_k-1 on an elliptic curve without punctures. Every global variant of the theory has a non-trivial 1-form symmetry <cit.> (see <cit.> for the class S perspective). For example, when the gauge group is SU(k) the 1-form symmetry is purely electric ℤ_k^(1), and the charged objects are the Wilson loops. On top of that, 𝒩=4 SYM enjoys Montonen-Olive duality, exchanging “electric" and “magnetic" degrees of freedom, most notably Wilson and 't Hooft loops. These two facts are captured in the class 𝒮 realization of the theory: the 1-form symmetry descends from the reduction of the 2-form symmetry of the 6d SCFT on one of the cycles of the torus, and Montonen-Olive duality is embodied as modular transformations of the elliptic curve, i.e. the MCG of the underlying torus. In particular, an S-duality transformation exchanges the two cycles of the torus, and maps for example 𝒩=4 SU(k) SYM at gauge coupling τ to 𝒩=4 PSU(k)_0 SYM at gauge coupling -1/τ, with the notation of <cit.>. This is one version of Langlands duality <cit.>.
It turns out that the presence of regular punctures on the Riemann surface of the class 𝒮 construction plays no role as far as the 1-form symmetry is concerned. This follows from the fact that the additional 1-cycle introduced by each puncture has trivial intersection pairing with any other 1-cycle <cit.>. This means that only the SL(2,ℤ) subgroup of the mapping class group of Σ_1,n, more precisely, its S transformation, can shuffle global variants of the theory, while the other mapping classes have no effect on the 1-form symmetry of the theory, despite inducing genuine dualities.
The D_n case deserves more attention. Indeed, the Riemann surface in this case is an orbifold, for which the notions of intersection pairing and integer 1-cycles are subtle. Even before establishing what is the 1-form symmetry, we can ask what are the dualities that can change the global structure of these theories. By analogy with the A_n-1 case, we assume that the change in global structure takes place only for the transformations for which τ→ -1/τ. The only element of the mapping class group that acts on τ in this way is S. We thus conclude that also in the D_n case only the SL(2,ℤ) subgroup of the mapping class group can change global variants.
In order to establish what is the 1-form symmetry of these theories, let us again start from the A_n-1 case. From field theory, it is clear that in the SU(k)^n global variant, the ℤ_k 1-form symmetry acts on the k non-trivial Wilson lines, which are obtained by identifying the Wilson lines of each SU(k) group due to the presence of dynamical bifundamental matter fields. Performing the S operation, the Wilson lines become k non-trivial 't Hooft lines, meaning that the new global variant is SU(k)^n/ℤ_k, which is alternatively obtained by gauging the 1-form symmetry. For the D_n quiver with SU gauge groups, the bifundamentals are such that again there are only k non-trivial independent Wilson lines, so that the 1-form symmetry is still ℤ_k.[More generally, any balanced 𝒩=2 quiver theory without flavor nodes has a ℤ_k 1-form symmetry, where SU(k) is the smallest gauge group in the quiver.] As argued above, the action of S on the global structure of the gauge group is then exactly as in the A_n-1 case, i.e. the same as gauging the ℤ_k^(1).
§.§.§ Duality symmetries and orbits of marked points
We now describe how dualities of the field theory for A_n-1 and D_n quiver theories can enhance to symmetries at specific points of the conformal manifold. This is because, as we have just seen, some dualities change the global structure of the gauge group, but the latter change can be undone by gauging the 1-form symmetry. Hence if the duality leaves the coupling invariant, the combination of duality and gauging becomes a symmetry of a specific theory. Since a gauging is involved, these duality symmetries are in general non-invertible <cit.>.
For specific values of the τ_i, there might exist a non-trivial subgroup of the MCG leaving the field theory unchanged. The simplest example is the S transformation of 𝒩=4 𝔰𝔲(k) at τ=i, which leaves the local dynamics invariant, but changes the global variant. One can then recover the original theory by gauging the 1-form symmetry.
In the quiver theories of our interest, the same kind of symmetry transformation can be constructed. One starts by considering mapping classes which fix the configuration of marked points. When such operations act non-trivially on the global structure of the theory, one can compensate them by gauging (a subgroup of) the 1-form symmetry. As in the 𝒩=4 case, these two combined operations will, in general, lead to non-invertible symmetries.
The part of the MCG that acts non-trivially on the global variants of the theory is the modular one, moreover only the finite subgroups of SL(2,ℤ) that stabilize the coupling τ can lead to (non-invertible) symmetry defects. These subgroups are cyclic of order 2, 3, 4 or 6 and are generated by S^2, S^3T, S and ST respectively.[Another common choice for an element of order 3 is ST^-1; however, ST^-1 fixes exp(π i/3) rather than exp(2π i/3).]
The transformation S^2 leaves τ unchanged, and therefore can be a symmetry for any choice of τ, depending on the location of the punctures. Meanwhile, S, S^3T and ST can be symmetries only for fixed values of τ: τ=i for S and τ=exp(2 π i / 3) for both ST and S^3T. These transformations will be actual symmetries of the theory, depending on the position of the punctures, as we will discuss shortly.
Before proceeding, a comment is due concerning the terminology. The term `duality' refers in general to the type of relations between theories that are the object of the present paper. However, more specifically, `duality' often refers to the particular action S on the theory space that fixes τ=i. Now, as just emphasized above, such action is actually of order 4, since while S^2 sends any τ to itself, it acts as charge conjugation on the spectrum. As we will see, it permutes the punctures in the cases of our interest. Similarly, `triality' usually refers to both ST and S^3T because they send any τ to itself after acting three times, but actually their action on the spectrum is respectively of order 6 and 3. Though we will not use `tetrality' instead of duality, since there is no distinction to be made, in the case of triality we will use the term `hexality' when it is important to stress that we are referring to the action which is of order 6 on the spectrum.
Non-invertible defects of 𝒩=2 A_n-1 quiver SCFTs.
Let us consider the standard fundamental cell ℱ in ℂ for the torus E_τ=i which is invariant under the action of S, i.e. the parallelogram {0,1,i,1+i}. A point p∈ℱ is mapped to p/i, which amounts in a clockwise π/2 rotation with respect to the origin. The point is now out of the fundamental cell ℱ, as in <ref>. We can then use the deck transformation t to bring it back to ℱ:
p ⟶p/i+i = ( p-(1+i)/2 )/i+(1+i)/2 .
The way in which the right-hand side is written makes explicit that t∘ S acts as a (-π/2)-rotation about the center (1+i)/2 of ℱ.
The points in ℱ split in orbits under the action of t∘ S, where t here denotes the combination of the deck transformations t for all the marked points. Generic orbits consist of four points; an example is the orbit {2,3,5,6} shown in <ref>, in which the points are permuted by t∘ S as 2→ 5 → 6 → 3 → 2, which in standard cycle notation reads (2 5 6 3). Apart from the generic orbits, there is one orbit of size two depicted by the purple rhombi in <ref>–the points 1 and 4 form an orbit of size two denoted (1 4)–and two orbits of size one, depicted as red squares. This way to represent points with non-trivial stabilisers is standard in the theory of wallpaper groups; in the present case, configurations of marked points in E_i invariant under t∘ S have as group of symmetries the wallpaper group denoted p4 (in crystallographic notation).
With the labeling on the left of <ref>, the position of the marked points 3, 5 and 6 is determined by the one of 2 and the requirement that the configuration is invariant under t∘ S:
p_3 = i p_2 + 1 , p_5 = -i p_2 + i , p_6 = -p_2 + i + 1 .
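One can check the claimed orbit structure directly: writing the combined operation as the rotation r(p) = -ip + i about the center of ℱ, one finds
r(p_2) = -ip_2+i = p_5 , r(p_5) = -p_2+1+i = p_6 , r(p_6) = ip_2+1 = p_3 , r(p_3) = p_2 ,
in agreement with the cycle (2 5 6 3).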
The combined operation t ∘ S permutes the punctures according to their orbits under the (-π/2)-rotation. One can recover the original configuration by a suitable permutation σ; for example in <Ref> one has σ = (2 3 6 5)(1 4), or in terms of s_i it reads σ = s_3 s_2 s_1 s_4 s_3 s_2 s_3 s_5. Therefore, the combination 𝒟 = σ∘ t^(i)∘ S maps the original theory to itself up to a discrete gauging of the 1-form symmetry acting on the global variants of the theory. In this way one constructs non-invertible duality defects of 𝒩=2 A_n-1 quiver SCFTs akin to those of <cit.>, for each configuration of punctures invariant under t∘ S. Such configurations of punctures necessarily split in orbits of size four, two and one. Discrete gauging of the one-form symmetry ensures that this duality enhances to a non-invertible symmetry of the field theory.
One can repeat the reasoning replacing S by
ST = ([ 0 -1; 1 1 ]) or S^3T = ([ 0 1; -1 -1 ]) ,
which are respectively of order 6 and 3 in SL(2,ℤ). Both ST and S^3T fix τ=exp(2iπ/3), and they act on the marked points as
ST : p⟼ pexp(-iπ/3) and S^3T : p⟼ pexp(2iπ/3) ,
that is, as a rotations by -π/3 or 2π/3, respectively. As before, one can compose ST and S^3T with appropriate deck transformations, to ensure that points in the standard fundamental cell of E_τ–the parallelogram {0,1,τ,τ+1}–are mapped to points of the same fundamental cell. One finds that the generic orbits for ST are of order 6, and that there is one non-generic orbit of size 3, one of size 2 and one of size 1. This is depicted on the left of <ref>, where as before purple rhombi depict the points whose stabilizers are of order 2, whereas blue triangles and green hexagons correspond to those whose stabilisers are of order 3 and 6, respectively. The corresponding wallpaper group is p6. The generic orbit of size six shown on the left of <ref> is permuted by ST as the cycle (1 3 2 6 4 5).
Conversely, generic orbits for S^3T are of size three, and there are three non-generic orbits of size one and with stabilizer of order 3, depicted as blue triangles on the right of <ref>. The corresponding wallpaper group is p3. The configuration of points shown there splits in two regular orbits: (1 4 2)(3 6 5).
Again combining the action of ST (resp. S^3T) composed with a deck transformation, with an appropriate permutation and a discrete gauging of the one-form symmetry, one ends up with a comprehensive description of hexality (resp. triality) non-invertible defects for 𝒩=2 A_n-1 quiver SCFTs.
Non-invertible defects of 𝒩=2 D_n quiver SCFTs.
In the case of 𝒩=2 D_n quiver SCFTs, we can readily apply the same method to determine non-invertible defects. Let us discuss the specific example of the 𝒩=2 D_4 quiver SCFT at τ=i, whose conformal manifold is described by configurations of four punctures in the lower half of the fundamental cell E_i, i.e. Im(p_i) ≤ 1/2.
Let p_1,p_2,p_3 and p_4 denote the position of the marked points 1,2,3 and 4 respectively, on the left of <ref>. We gather them in the tuple (p_1,p_2,p_3,p_4). Recall that the position of the i'-th image is determined by the position of the i-th marked point:
p_i' := q_i^-1,-1 = -p_i+1+i .
Under t∘ S one has
(p_1,p_2,p_3,p_4) ⟶ (-ip_1+i,-ip_2+i,-ip_3+i,-ip_4+i) = (p_3,p_4',p_1',p_2) .
We now apply R_D,2 and R_D,3 in order to have all marked points in the desired region of the fundamental cell
R_D,2 R_D,3(p_3,p_4',p_1',p_2) = (p_3,p_4,p_1,p_2) .
Lastly, via a combination of the permutations s_i, one can restore the starting configuration of punctures. In the example of <Ref>, one can take:
s_2s_1s_3s_2: (p_3,p_4,p_1,p_2) ⟶ (p_1,p_2,p_3,p_4) .
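Reading the composition from right to left, the tuple is tracked step by step as
(p_3,p_4,p_1,p_2) s_2⟶ (p_3,p_1,p_4,p_2) s_3⟶ (p_3,p_1,p_2,p_4) s_1⟶ (p_1,p_3,p_2,p_4) s_2⟶ (p_1,p_2,p_3,p_4) .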
All in all, the non-invertible duality defect is obtained as the operation s_2s_1s_3s_2 ∘ R_D,2∘ R_D,3∘ t ∘ S combined with an appropriate discrete gauging of the one-form symmetry.
This procedure generalizes to all 𝒩=2 D_n quiver SCFTs and duality defects, with corresponding wallpaper group p4, or triality defects, with corresponding wallpaper group p6. Note that by construction, S^2 is a symmetry of any D_n configuration of punctures, which implies in particular that triality defects are necessarily of order 6 and not 3.
§.§ Duality group of E_n quiver SCFTs
We have seen that the duality group of 𝒩=2 A_n-1 and D_n quiver SCFTs contains the group of automorphisms of the corresponding affine root system: each coupling constant is naturally associated to a positive simple root of the corresponding affine Lie algebra. The global coupling τ is defined as[Here τ_i are the coupling constants of the single nodes and n_i are the ranks of the nodes in the McKay graph.]
τ = ∑_i n_i τ_i
and corresponds to the imaginary root δ of the affine root system.
The full duality group is generated by the automorphism group of the affine root system together with a copy of SL(2,ℤ) acting on τ as:
S : (τ,τ_0,…,τ_n) ↦(-1/τ,τ_0/τ,…,τ_n-1/τ,τ_n/τ-1-1/τ) ,
T : (τ,τ_0,…,τ_n) ↦ (τ+1,τ_0,…,τ_n-1,τ_n+1) .
Such modular transformations of the parameter τ were argued to exist from the underlying class S construction.
It is natural to conjecture that this analysis extends to the E_6,7,8 quiver gauge theories. These can be constructed as worldvolume theories of D3-branes transverse to ℂ^2/Γ_E_n×ℂ singularities, where Γ_E_n is the corresponding finite subgroup of SU(2).
More precisely, we conjecture that the duality group of these theories is the semi-direct product of SL(2,ℤ) with the affine Weyl group, further centrally coextended by the automorphisms of the affine Lie algebra, acting on τ as in <ref>. From this one can in principle derive the action of the duality group on the global variants of the theory, and hence construct non-invertible defects for suitable configurations of couplings.
Unlike in previous cases, there is no known class 𝒮 realization of these theories, at least to our knowledge, so we lack direct methods to test our arguments. This analysis might actually pave the way for discovering explicit class 𝒮 realizations, or generalizations thereof, of E_n quiver theories.
§ DUALITY GROUP OF THE MASS DEFORMED THEORY
In the previous section, we have constructed the duality group of 𝒩=2 A_n-1 and D_n quiver gauge theories from class 𝒮 arguments. We devote this section to the above quiver theories mass deformed to 𝒩=1, exploiting the class 𝒮 setup in a way close in spirit to <cit.>.
§.§ Duality group of mass deformed A_n-1
In the following, we revisit the duality group of 𝒩=1 theories obtained as mass deformations of 𝒩=2 A_n-1 quiver gauge theories with gauge group SU(k)^n, along the lines of <cit.>. We will consider mass deformations of the form
Δ W = ∑_i=1^nm_i/2ϕ_i^2 ,
which lead to 𝒩=1 SCFTs <cit.>, whose duality groups are induced by the original 𝒩=2 theory.
The mass deformed theory is specified by the masses (m_1, m_2, …, m_n). As in <ref>, the strategy to construct the duality group consists in uplifting the associated type IIA elliptic model to M-theory, where the theory can be fully described geometrically.
Recall from <ref> that the starting type IIA setup consists of k D4-branes on a circle of radius R_6 intersecting n NS5-branes along the transverse direction. The low energy theory on the D4-branes is a 4d 𝒩=2 A_n-1 quiver gauge theory, where the VEVs of the complex adjoint scalars in 𝒩=2 vector multiplets parameterize the position of the D4-branes along (x^4,x^5). Let:
u := x^4 + i x^5 ,
v := x^7 + i x^8 .
In this setup, the mass deformation we are interested in can be induced by tilting the NS5-branes relatively to each other in the complex (u,v)-plane. More precisely, if two adjacent NS5-branes are not parallel, then any displacement of the center of mass of a D4-brane stretched between them changes the minimal length of the D4-segment, so the D4-branes are no longer free to move along the NS5-branes. From the point of view of the field theory, this means that some flat direction has been lifted.
This lifting can be achieved, in first approximation, by adding mass terms to the adjoint scalars <cit.>.
One can see that, when the relative angle between two adjacent NS5-branes is small, the mass is directly proportional to the angle <cit.>. In this limit, to which we will refer as the limit of small masses, one can identify the small masses with the ones in <ref> <cit.>.
We now proceed with the uplift to M-theory, where we introduce the elliptic curve E_τ, parameterized by the complex coordinate w = x^10 + i x^6:
w ∼ w + q + τℓ , q, ℓ∈ℤ ,
as outlined in <ref>. The k D4-branes become M5-branes wrapping E_τ, whereas the NS5-branes lift to marked points. However, since each NS5-brane corresponds to a specific complex line in the (u,v)-plane, we can assign to each of them the point [u_i:v_i] ∈ℂℙ^1 corresponding to that line.[For example, the original 𝒩=2 case in which all NS5-branes extend along u is given by [u_i:v_i]=[1:0] ∀ i.] Thus, the M-theory uplift is encoded in the data of (p_i,[u_i:v_i])∈ E_τ×ℂℙ^1.
In the limit of small masses, the lines of homogeneous coordinates [u_i:v_i] are all close to [1:0], i.e. they are all in the complex chart ℂ≃{u≠ 0}⊂ℂℙ^1 and one can always rewrite [u_i:v_i]=[1:z_i] with z_i∈ℂ, where now the small mass limit can be formally expressed as z_i → 0 ∀ i. As a consequence of supersymmetry, since the superpotential needs to be holomorphic in the fields, the mass parameters m_i must be holomorphic in z_i and vanish when all branes are parallel, i.e. when z_i approaches z_i+1 <cit.>. This implies:
m_i = z_i+1 - z_i , z_i ∼ z_i+n .
Tilting the n NS5-branes in the type IIA setup yields only n-1 relative angles, which is at odds with the n mass parameters one can define in field theory. Equivalently, given the definition in <ref>, the masses satisfy ∑ m_i = 0, since the sum telescopes to z_n+1 - z_1, which vanishes by the periodicity z_i ∼ z_i+n. This apparent paradox can be tackled similarly to what is done in <cit.> by considering a non-trivial ℂℙ^1-fibration over the elliptic curve E_τ. Indeed, as we will see shortly, if we consider the z_i as sections of a non-trivial fibration over E_τ, we can recover the missing mass deformation in terms of a “global mass", m = ∑ m_i, which vanishes precisely when the fibration trivializes.
Let us denote R ("Right") and U ("Up") the topologically non-trivial cycles of E_τ corresponding to w→ w+1 and w→ w+τ, respectively. Saying that ℂℙ^1 is fibered non-trivially over E_τ means that there can be non-trivial (projective) monodromies along R and U, in the form of matrices in GL(2,ℂ) acting projectively on the fiber coordinate:
z → (a z + b)/(c z + d) .
In order for the fibration to preserve 𝒩=1 supersymmetry, the monodromy along either cycle of the torus must preserve the holomorphic 3-form Ω = du ∧ dv ∧ dw, and this implies that the monodromies actually live in SL(2,ℂ):
M = ([ a b; c d ]) , a d - b c = 1 .
In field theory the mass deformation can be continuously turned off, which implies that the fibration must be topologically trivial. It is then entirely described by the monodromies M_R and M_U associated to the cycles R and U of E_τ, which provide a representation of the fundamental group of E_τ. Since π_1(E_τ) is abelian, the matrices M_R and M_U must commute, as we would have intuitively expected.
In the limit of small masses, one can approximate the projective bundle with an affine fibration over E_τ. From field theory one expects that the monodromies in SL(2,ℂ) act as shifts z_i → z_i + b in this limit. Note that when z is small:
(a z + b)/(c z + d) ≃ ((a d - b c)/d^2) z + b/d + 𝒪(z^2) = z/d^2 + b/d + 𝒪(z^2) ,
thus it must be the case that d=1. The affine shift is non-trivial when b ≠ 0, which we now assume. The monodromies M ∈SL(2,ℂ) must therefore be of the form:
M = ([ a b; (a-1)/b 1 ]) .
One can check that two such generic matrices commute if and only if a=1, thus we define:
M_R = ([ 1 b_R; 0 1 ]) , M_U = ([ 1 b_U; 0 1 ]) ,
where b_R and b_U are complex numbers characterizing the fibration.[Our solution is slightly different from the one in <cit.>, but it makes more explicit the relation between M_R,U and the shifts z→ z+constant.] The monodromies induce the following transformations:
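The commutation requirement stated above can be verified explicitly: for two matrices M, M' of the previous form with parameters (a,b) and (a',b'), the (1,2) entry of M M' - M' M equals (a-1)b' - (a'-1)b, which vanishes for generic independent b, b' only when a=a'=1, as claimed.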
(τ, b_R, b_U, p_i, z_i) R⟶ (τ, b_R, b_U, p_i + 1, z_i + b_R) ,
(τ, b_R, b_U, p_i, z_i) U⟶ (τ, b_R, b_U, p_i + τ, z_i + b_U) .
Allowing the affine fibration over E_τ to be non-trivial introduces the freedom to change the fiber coordinate z by (p,z)→ (p, z + λ p), such that the shifts of the monodromies in <ref> are preserved. This “gauge" symmetry acts[This action is recovered by asking the action of M_R,U on the parameter to match before and after the gauge fixing.] on the parameters of the M-theory setup as:
(τ, b_R, b_U, p_i, z_i) f⟶ (τ, b_R + λ, b_U + τλ, p_i, z_i + λ p_i) .
One can fix the gauge by imposing M_R to be trivial, i.e. b_R=-λ and we can define
b_U - b_R τ = m
to be the global mass.
To conclude, in M-theory the setup is fully specified by the tuple
( τ, 0, m, p_i, z_i) ,
where m and the z_i can be traded for n masses m_i:
m_i = z_i+1 - z_i , i = 1, … , n-1 ,
m_n = m + z_1 - z_n ,
m = ∑_i=1^n m_i .
The above equations show how a non-trivial fibration allows for a configuration of parallel (tilted) NS5-branes with a single non-vanishing mass term, m_n = m. This can also be understood as a consequence of <ref>, where we defined m_n = z_1-z_n. When we have a non-trivial fibration, the difference between z_1 and z_n is computed across the fundamental cell of the torus, thus we should consider not z_1, but M_U(z_1)=z_1+m, and hence the definition in <ref>.
We see that the mass deformed theory is now fully specified by the vector m⃗=( m; m_1, … , m_n). However, this set of variables depends on the gauge fixing and is not preserved by S, which exchanges the R and U cycles and consequently b_R and b_U. Explicitly:
(τ, 0, m, p_i, z_i) S⟶(-1/τ, m, 0, p_i/τ, z_i ) .
Therefore the action of S needs to be followed by another f-gauge fixing with λ = - m, leading to:
(τ, 0, m, p_i, z_i) f ∘ S⟶(-1/τ, 0, m/τ, p_i/τ, z_i - m p_i/τ) .
From now on we assume that S is always post-composed with a suitable f-gauge fixing.
We are now set to describe the duality group of these 𝒩=1 theories. By construction, the action of T and t_i^(1) is trivial on the z_i, whereas they act on p_i and τ as in <ref>. The generator t_i^(τ) moves the punctures along U, thus:
(τ, 0, m, p_i, z_i) t_i^(τ)⟶ (τ, 0, m, p_i + τ, z_i + m) .
Last, permutations s_i exchange z_i and z_i+1 as well as p_i and p_i+1.
With the masses defined as in <ref>, we can write the action of the generators { S, T, t_i^(1), t_i^(τ) , s_i } of the duality group on the masses as
S : m⃗ → ( m/τ ; m_1 - m τ_1/τ, … , m_i - m τ_i/τ , … , m_n - m τ_n/τ + m/τ) ,
t_i^(τ) : m⃗ → ( m; m_1 , … , m_i-1 + m , m_i - m, m_i+1, … , m_n) ,
s_i : m⃗ → ( m; m_1, … , m_i-1 + m_i , - m_i, m_i+1 + m_i , … , m_n) ,
while the remaining generators T and t_i^(1) act trivially on m⃗.
We conclude this section by remarking that the duality group of the mass-deformed theory can be presented similarly to the duality group of the original 𝒩=2 theory. The masses behave as roots of the affine A_n-1 algebra, with the global mass playing the role of the imaginary root. Therefore, the duality group is the extension of the automorphism group of the A_n-1 algebra by the modular group SL(2,ℤ), acting as in <ref>, while acting as well on the couplings τ_i as in <ref>.
§.§ Duality group of mass deformed D_n
We now address the mass deformations of the D_n theory by a superpotential of the form <ref>, with the masses defining the deformation gathered in a vector m⃗=(m; m_0,…,m_n). As reviewed in <ref>, this theory admits a type IIA construction in terms of D4s suspended between NS5s, in the presence of orientifold ONS5^--planes, <ref>. In terms of branes, mass deformations are obtained as in <ref> by tilting the NS5 branes in the ℂ^2_u,v plane. The lift to M-theory then fully unveils the duality group of the deformed theory.
Each NS5-brane corresponds to a complex line in the ℂ^2_u,v plane, and hence to a point [u_i : v_i] ∈ℂℙ^1. In the limit of small angles, one can assume that the slope of these lines is close to [1:0], thus one can set [u_i : v_i]=[1 : z_i] where z_i ∈ℂ. The masses can then be expressed in terms of the z_i, as in <ref>. The main difference with respect to A_n-1 theories comes from the orientifold projection: since it maps x^7,8 to -x^7,8, tilting an NS5 brane by a complex number z amounts to tilting its image with respect to the ONS5^--planes by -z, at least when the ℂ^2_u,v plane is trivially fibered over the x^6-segment; this is depicted in <ref> (which builds on the previous <ref>) for the brane closest to the leftmost ONS5^--plane, and its image.
Similarly to what was done above in A_n-1 theories, the masses are expressed in terms of the z_i as:
m_0 = z_2-(-z_1) ,
m_i = z_i+1-z_i , i=1, … , n-1 ,
m_n = +(-z_n)-z_n-1 ,
where we kept the - signs coming from the orientifold projection since they will be relevant in the following. This definition is consistent with the requirement of holomorphy in the z_i's and vanishing of all masses when the branes are parallel.
The definition in <ref> leads to a vanishing total mass:
m=m_0+m_1+m_n-1+m_n+2 ∑_i=2^n-2 m_i=0 ,
hence as in A_n-1 theories it naively seems that there is a missing mass parameter in the brane setup, as compared to the adjoint masses appearing in field theory. Here again, the mismatch is resolved by considering slightly more general brane setups in which ℂℙ^1_u,v is allowed to fiber non-trivially over the M-theory pillowcase E_τ/ℤ_2.
Such a fibration is specified by a representation of the fundamental group π_1(E_τ/ℤ_2) into GL(2,ℂ). Because of the way the orientifolds act on the coordinates x^7,8, in order to preserve 𝒩=1 supersymmetry the generators R_A, R_B, R_C and R_D must correspond to matrices of determinant -1. Moreover, in the limit of small z_i, one expects the monodromies corresponding to R_l, l=A,B,C,D, to act as z_i→ -z_i + b_l, where the b_l are i-independent complex numbers. Recalling that the R_l are involutions, the form of such elementary monodromies is constrained to be:
M_l = ([ a_l b_l; (1-a_l^2)/b_l -a_l ]) , b_l ≠ 0 , l = A, B, C, D ,
and the other relations eventually yield:
M_l = ([ -1 b_l; 0 1 ]) , b_l ≠ 0 , l = A, B, C, D
b_B - b_A = b_D - b_C .
As in <ref>, allowing non-trivial fibrations over E_τ/ℤ_2 introduces an additional freedom in the choice of fiber coordinate. One can do the redefinition z→ z+f(p) where p∈ E_τ/ℤ_2 and with f a holomorphic function, however only affine functions f(p)=λ p +κ preserve the form of the monodromies. Such a coordinate change induces the following transformation:
(τ, b_A,b_B,b_C,b_D, p_i, z_i) f⟶ (τ, b_A , b_B + λ ,b_C + τλ ,b_D + (1+τ)λ, p_i, z_i + λ p_i) .
With the physical interpretation in mind, we can fix λ in such a way that:
b_B - b_A + λ = b_D - b_C + λ = 0 .
This makes clear that, up to redefinition of the fiber coordinate, the fibration depends only on the two parameters b_A and b_E = b_C + (b_A - b_B) τ. With the above notation, the fibration is defined by the data:
(τ, b_A , b_E , p_i , z_i ) ,
on which the monodromies act as follows:
(τ, b_A, b_E, p_i , z_i ) R_A⟶ (τ, b_A, b_E, -p_i, - z_i + b_A ) ,
(τ, b_A, b_E, p_i , z_i ) R_B⟶ (τ, b_A, b_E, - p_i + 1 , - z_i + b_A ) ,
(τ, b_A, b_E, p_i , z_i ) R_C⟶ (τ, b_A, b_E, - p_i + τ , - z_i + b_E ) ,
(τ, b_A, b_E, p_i , z_i ) R_D⟶ (τ, b_A, b_E, - p_i + τ + 1 , - z_i + b_E ) .
In <ref>, we stressed that the tilting of a brane and its image with respect to an orientifold plane are not independent. In M-theory, this amounts to saying that given the tilting z_i of a brane, the tilting of its image with respect to a fixed point of the involution ℤ_2 is encoded in the image of z_i by the monodromy M_l around that fixed point. In general, the fibration is not trivial, and:
M_l(z_i) = - z_i + b_l .
This leads to the following generalization of <ref>:
m_0 = z_2 - M_A/B(z_1) = z_2 + z_1 - b_A ,
m_i = z_i+1-z_i , i=1, … , n-1
m_n = M_C/D(z_n)-z_n-1 = - z_n -z_n-1 + b_E .
<Ref> corresponds to a trivial fibration, for which b_A=b_E=0.
The corresponding global mass reads
m = ∑_i n_i m_i = b_E - b_A
with the n_i the Dynkin labels of affine D_n, as in <ref>. Note that as expected, the global mass m vanishes when the fibration is trivial.
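This identity can be checked symbolically; a minimal sympy sketch for n=4 (the variable names are ours, and the same check passes for larger n):

import sympy as sp

n = 4
z = sp.symbols(f'z1:{n+1}')
bA, bE = sp.symbols('b_A b_E')

m0 = z[1] + z[0] - bA                       # m_0 = z_2 + z_1 - b_A
mi = [z[i] - z[i-1] for i in range(1, n)]   # m_i = z_{i+1} - z_i
mn = -z[n-1] - z[n-2] + bE                  # m_n = -z_n - z_{n-1} + b_E

labels = [1, 1] + [2]*(n - 3) + [1, 1]      # Dynkin labels of affine D_n
masses = [m0] + mi + [mn]
print(sp.simplify(sum(l*m for l, m in zip(labels, masses))))  # -> b_E - b_A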
We have thus shown that the theories obtained by deforming 𝒩=2 D_n quiver SCFTs by 𝒩=1 preserving masses are fully determined by the set of gauge couplings τ_i satisfying τ=∑_i n_i τ_i, and the set of adjoint masses m_i with m=∑_i n_i m_i. Though the set of couplings and the set of masses play very similar roles in the geometric description of the theories we are interested in, there is an important difference in the way the mapping class group acts on them. Its action on the couplings is given in <ref>, whereas the one on the masses is described as follows.
First of all one can note that the action of T is trivial, whereas S[Composed with a redefinition of the fiber coordinate for the same reason as in <ref>.] acts as
m⃗S⟶( m/τ ; m_0 - m τ_0/τ, … , m_i - m τ_i/τ , … , m_n - m τ_n/τ + m/τ) .
Deck transformations act on the masses as in <ref> with the m_i in place of the τ_i, and with the shifts of the z_i due to the non-trivial fibration taken into account. For example, R_I,i maps m⃗ to:
(m;m_1+δ_C|D,Im,m_0+δ_C|D,Im,m_2,…,m_n) (i=1),
(m;-m_1+δ_C|D,Im,-m_0+δ_C|D,Im,P(2)-m_2-δ_C|D,Im,…,m_n) (i=2),
(m;m_0,…,-P(i-2)-m_i-1+δ_C|D,Im,P(i-1)+m_i-δ_C|D,Im,…,m_n) (3≤ i≤ n-2),
(m; m_0,…,-P(n-2)+m_n-2+δ_C|D,Im,δ_A|B,Im-m_n,δ_A|B,Im-m_n-1) (i=n-1),
(m; m_0,…,m_n+δ_A|B,Im,m_n-1+δ_A|B,Im) (i=n) ,
where I=A,B,C,D, the notation C|D in δ_C|D,I means either C or D, and:
P(i)=∑_k=0^in_k m_k .
Finally the transposition s_i maps the mass vector m⃗ to:
(m; m_0 , - m_1 , m_2 + m_1 , m_3 , … , m_n) (i=1),
(m;m_0 + m_2 , m_1 + m_2 , - m_2 , m_3 + m_2 , m_4 , … , m_n) (i=2),
(m; m_0, … , m_i-1 + m_i , - m_i , m_i+1 + m_i , m_i+2 , … m_n) (3≤ i≤ n-3),
(m; m_0, … , m_n-4 , m_n-3 + m_n-2 , - m_n-2 , m_n-1 + m_n-2 , m_n + m_n-2 ) (i=n-2),
(m; m_0, … , m_n-3 , m_n-2 + m_n-1 , - m_n-1 , m_n ) (i=n-1).
This concludes our analysis of the duality group's action on the masses that define the deformation of D_n quivers.
§ MODULI SPACE OF THE MASS DEFORMED THEORY
We have seen in the previous section how the respective MCGs of the A_n-1 and D_n 𝒩=2 SCFTs act on the mass parameters that one can turn on. Such relevant mass deformations break supersymmetry to 𝒩=1 and trigger an RG flow. In the present section, we ask in all generality what the moduli space of the theory resulting from this RG flow is. As we will see, such moduli spaces describe geometries which are often non-trivial fibrations of the geometry described by the 𝒩=2 moduli space.
§.§ Moduli space of mass deformed A_n-1
We are interested in A_n-1 quiver theories deformed by 𝒩=1 preserving masses, that is:
Δ W = ∑_i=1^n m_i/2ϕ_i^2 ,
where the ϕ_i denote the adjoint scalars in the 𝒩=2 vector superfields.
On general grounds, one expects that the deformed theories flow to interacting 𝒩=1 SCFTs <cit.>. One of the simplest examples is the conifold field theory, which is the mass deformation of the 𝒩=2 A_1 quiver gauge theory with mass parameters (m_1,m_2)=(m,-m) <cit.>. Other SCFTs of interest can be obtained from other choices of masses; for example, the Pilch–Warner (PW) point <cit.> is also obtained from the 𝒩=2 A_1 quiver gauge theory, though with the choice of deformation parameters (m_1,m_2)=(m,m). The moduli space of the former is given by the locus xy=zw in ℂ^4, while that of the latter is the two-fold ℂ^2/ℤ_2. The general description of the moduli space of 𝒩=1 deformations of 𝒩=2 quiver gauge theories that we are going to present will in some cases allow us to argue directly that these theories flow to interacting 𝒩=1 SCFTs.
We first consider general mass deformations of the A_n-1 quiver gauge theory, with gauge group U(1)^n.[In the general case where the gauge group is SU(k)^n for some k, the moduli space is generically the k-th symmetric product of the abelian one, hence the customary simplification when discussing the moduli space.] The deformed superpotential reads:
W_𝒩=1 = ∑_i=1^nϕ_i ( X_i,i+1X_i+1,i - X_i,i-1X_i-1,i) + ∑_i=1^nm_i/2ϕ_i^2 ,
where i is understood modulo n. Let
x = ∏_i=1^n X_i,i+1 , y = ∏_i=1^n X_i,i-1 ,
w_i = X_i,i-1X_i-1,i ∀ i ,
u_i = ϕ_i ∀ i ,
be the elementary gauge invariant operators. They are constrained by the F-term equations, which read
X_i,i+1X_i+1,i - X_i,i-1X_i-1,i + m_i ϕ_i = 0 ,
X_i+1,i ϕ_i - ϕ_i+1 X_i+1,i = 0 ,
for all i=1,…,n. These lead to the relations:
x y = ∏_k=1^n w_k ,
u_i = u ,
w_i+1 - w_i = - m_i u ,
again for all i=1,…,n.
The w_i's can be written recursively as
w_i = w_1 - ( ∑_k=1^i-1 m_k) u = w_1 - t_i u ,
and, since by definition w_n+1 = w_1, we have the constraint
( ∑_i=1^n m_i ) u = m u = 0 ,
where m denotes the “global mass”.
Therefore, denoting w_1 = w, the moduli space of the deformed theory is defined by the equations
x y = ∏_k=1^n( w - t_k u)
m u = 0 .
Note that the second equation imposes either u=0 or m=0.
If m≠ 0, then u=0 and the moduli space is defined by
w_i = w ∀ i , x y = w^n ,
i.e. it is the 2-fold ℂ^2/ℤ_n. This generalizes the case of the PW fixed point. If rather m=0, the moduli space is a 3-fold determined by the partial sums t_k. This is analogous to the case for the conifold theory.
From this analysis, we see that the moduli space of the deformed theory is either a two- or a three-fold singularity. In particular, the former is a Du Val singularity of type A_n-1, while the latter is a compound Du Val, again of type A_n-1, i.e. xy=w^n + u g(w,u),[In general, a compound Du Val three-fold is given by the equation f_Du Val(x,y,w) + u g(w,u)=0 in ℂ^4.] for some polynomial g(w,u). The two-fold case is less explored in the literature; results showing that the mass deformation we are interested in leads to an interacting SCFT exist only for specific examples <cit.>. On the other hand, in the three-fold case, it has been proven that the deformations under consideration always lead to interacting SCFTs <cit.>.
Finally, let us give a different perspective on the IR moduli space in <ref>. In <cit.>, a graphical tool called “bug calculus” is exploited in order to deform the algebraic curves of ADE singularities with FI terms b_i, associated to each node of the extended Dynkin diagram. The singularity is deformed by a versal deformation that depends on the FI parameters b_i. In the case of A_n-1 quiver, one finds
x y = ∏_k=1^n[ w - ( ∑_j=1^k+1 b_j ) ] ,
with the condition b_1+…+b_n = 0, which closely resembles <ref>.
The F-term equations, after mass deformation, have the same form as the gauge invariants constructed in <cit.>, provided the correspondence b_i ↔ m_i ϕ_i. A formal correspondence can be established if we deform the 𝒩=2 superpotential with complex FI terms,
W_𝒩=1 = ∑_i=1^nϕ_i ( X_i,i+1X_i+1,i - X_i,i-1X_i-1,i) + ∑_i=1^n b_i ϕ_i .
After applying the “bug calculus” procedure, we can now trade the b_i for m_i ϕ_i to get
b_i ↦ m_i ϕ_i ,
∑_i=1^n b_i = 0 ↦∑_i=1^n m_i ϕ_i = 0 .
As discussed above, the F-terms require ϕ_i = u, thereby yielding the condition m u=0.
This approach will be used in the next section to get the moduli space of deformed D_n quiver theories.
§.§ Moduli space of mass deformed D_n
The analysis of the previous section can be repeated for the D_n theory.[The same can also be done for E_n-quivers, the resulting moduli space is either ℂ^2/Γ_E_n, for non-vanishing global mass, or a compound Du Val of type E.] Let us start by considering the superpotential for the D_n theory with the addition of masses for the adjoint fields
W_𝒩=1 = ∑_i=0,1ϕ_i X_i,2 X_2,i + ∑_j=n-1,nϕ_j X_j,n-2 X_n-2,j + ϕ_2 ( X_2,0 X_0,2 + X_2,1 X_1,2 + X_23 X_32)
+ ϕ_n-2( X_n-2,n-1 X_n-1,n-2 + X_n-2,n X_n,n-2 - X_n-2,n-3X_n-3,n-2)
+ ∑_l=3^n-3ϕ_l ( X_l,l+1X_l+1,l - X_l,l-1X_l-1,l) +∑_i=0^nm_i/2ϕ_i^2 ,
where we refer to <ref> for the index conventions of the fields.
As before, we start by considering a theory with abelian gauge factors for the external nodes and U(2) for the internal ones. Analogously to the previous section, the moduli space of the non-abelian theory can be recovered as the k-th symmetric product of this moduli space.
The F-terms for the chiral fields are
ϕ_2 X_2,i = - X_2,iϕ_i , i = 0, 1 ,
X_i,2ϕ_2 = - ϕ_i X_i,2 , i = 0, 1 ,
ϕ_n-2 X_n-2,j = - X_n-2,jϕ_j , j = n-1, n ,
X_j,n-2ϕ_n-2 = - ϕ_j X_j,n-2 , j = n-1, n ,
ϕ_l X_l,l+1 = X_l,l+1ϕ_l+1 , l = 2, … , n-2 ,
X_l+1,lϕ_l = ϕ_l+1 X_l+1,l , l = 2, … , n-2 ,
while for the adjoint fields we have
X_i,2 X_2,i = - m_i ϕ_i , i = 0, 1 ,
X_j,n-2 X_n-2,j = - m_j ϕ_j , j = n-1, n ,
X_2,0 X_0,2 + X_2,1 X_1,2 + X_23X_32 = - m_2 ϕ_2 ,
X_n-2,n-1 X_n-1,n-2 + X_n-2,n X_n,n-2 - X_n-2,n-3X_n-3,n-2 = - m_n-2ϕ_n-2 ,
X_l,l+1X_l+1,l - X_l,l-1X_l-1,l = - m_l ϕ_l , l = 3, … , n-3 .
One can solve the F-terms or employ the graphical computational technique described in <cit.>. The detailed computation is given in <ref>, whereas here we limit ourselves to a summary of the salient points.
First of all, as a consequence of <ref>, we have that ϕ_0,1,n-1,n = u for all external nodes and ϕ_i=2,…,n-2=diag(-u,-u), after using the gauge freedom to diagonalize the fields of the internal nodes. Second, from <ref>, denoting w_i,j=tr(X_i,jX_j,i)=w_j,i, we have the following constraints
w_0,2 = - m_0 u , w_1,2 = - m_1 u , w_n-1,n-2 = - m_n-1 u , w_n,n-2 = - m_n u ,
w_2,3 = (2 m_2 + m_0 + m_1)u ,
w_n-2,n-3 = - (2 m_n-2 + m_n-1 + m_n)u ,
w_l,l+1 - w_l,l-1 = 2 m_l u , l = 3, … , n-3 .
Finally, by taking the sum of <ref>, one then gets the following condition
u ( m_0 + m_1 + ∑_l=2^n-2 2 m_l + m_n-1 + m_n ) = u m = 0 ,
while the other gauge invariants constructed out of the chiral fields lead to the algebraic curve describing the moduli space, see <cit.>.
As in the A_n-1 case, we see that there are two possibilities: either the global mass does not vanish, leading to a 2-fold moduli space, or it does and the moduli space is a 3-fold. In the former case, the moduli space is just the Du Val singularity corresponding to D_n, while in the latter it is a compound Du Val described by the following equation in ℂ^4
x^2 + y^2 w + β y u^n = w^-1[∏_k=1^n(w + u^2 t_k^2 ) - ∏_k=1^n u^2 t_k^2] ,
where the t_k are given by
t_1 = 1/2( m_0 - m_1 ) ,
t_2 = 1/2( m_n - m_n-1) ,
t_3 = 1/2( m_0 + m_1 ) ,
t_4 = 1/2( m_0 + m_1 ) - m_2 , …, t_n = 1/2( m_0 + m_1 ) - ∑_l=2^n-2 m_l ,
and
β = - 2 ∏_k=1^n t_k .
Contrary to the previous case, less is known about the existence of local CY metrics on these spaces and thus only the field theory analysis is accessible to study the conformality of the deformed theory. While we leave a detailed analysis to future work, assuming that the deformed theory flows to an interacting SCFT we can still analyse the duality group inherited by the deformed theory from the parent one. In particular, we will show that requiring to preserve some duality symmetries of the starting theory constrains the moduli space of the deformed one.[One could investigate whether the deformations of Du Val singularities that preserve non-invertible duality defects can be characterized geometrically. Exploring the interplay with deformations that maintain toricity in the A_n-1 case could lead to deep insights, given the extensive techniques available for studying toric affine Calabi–Yau threefolds and their corresponding 𝒩=1 gauge theories.]
§ MASS DEFORMATIONS PRESERVING NON-INVERTIBLE SYMMETRIES
In <ref> we have characterized the locus in the conformal manifolds of 𝒩=2 A_n and D_n quiver SCFTs at which these theories admit non-invertible duality defects: it is the locus which corresponds to symmetric configurations of punctures on the M-theory torus. The study of relevant deformations preserving 𝒩=1 supersymmetry and such non-invertible defects has recently been pioneered in <cit.>. Our goal here is to provide a general method to characterize which mass deformations of 𝒩=2 A_n and D_n quivers preserve non-invertible duality, triality or hexality defects. In particular, we derive the dimension of the space of mass deformations which preserve non-invertible defects. In some cases, such deformations lead to known 𝒩=1 SCFTs with moduli spaces Calabi–Yau threefolds, generalizing the deformation of the 𝒩=2 A_1 quiver to the conifold SCFT.
The supercharges Q of 4d 𝒩=4 theories transform non-trivially under the SL(2,ℤ) duality group <cit.>. More precisely, they transform as Q →exp(-i β) Q. At self-dual values of the coupling τ, it turns out that β = π / q, where q is the order of the stabilizer of τ in SL(2,ℤ). In 𝒩=1 superspace, a mass deformation takes the form
Δ S_W = ∫d^2 θ ∑_i m_i/2Φ_i^2 .
Such a deformation preserves the duality symmetry if and only if the transformation of the measure d^2 θ→exp(-2i β)d^2 θ can be reabsorbed by a transformation of the chiral fields Φ_i. This can be done using the R-symmetry <cit.>, if the masses (m_1,…,m_n) defining the deformation satisfy
(m_1',…,m_n') = e^iα (m_1,…,m_n) ,
where (m_1',…,m_n') is the image of (m_1,…,m_n) under the duality transformation at hand.
We have seen that dualities sometimes act non-trivially on the deformation masses, which makes the analysis more involved. This happens when the global mass does not vanish; therefore, in what follows we distinguish the cases with vanishing global mass from the ones where the global mass is non-zero.
§.§ Vanishing global mass
When the global mass m vanishes, the action of SL(2,ℤ) and deck transformations on the z_i's is trivial. This means that, while the punctures p_i are rotated to p_i', cf. <ref>, the image p_i' of each p_i is still associated with the original z_i. Consider a configuration of n punctures invariant under a duality transformation of order q, as discussed in <ref>, schematically 𝒟 = σ∘ t ∘ S/T is a composition of permutation, deck and SL(2,ℤ) operations. 𝒟 acts trivially on p⃗ by construction, while only σ acts on z⃗
𝒟((p_1,z_1),(p_2,z_2),…,(p_n,z_n)) = ((p_1,z_σ(1)),(p_2,z_σ(2)),…,(p_n,z_σ(n))) .
Therefore, for the masses to preserve the duality symmetry, they must solve the eigenvalue problem
𝒟 m⃗ = e^i α m⃗ ,
where 𝒟 acts on the masses only through the subgroup of permutations in the whole duality group. The n `decorated' punctures (p_i,z_i) split into orbits under 𝒟 of size a divisor of the order of 𝒟. For example, under 𝒟 = σ∘ t ∘ S at τ=i, the punctures always split into O_4 orbits of size 4, O_2≤ 1 orbits of size 2 and O_1≤ 2 orbits of size 1,[Note that some configurations require that one or more gauge groups have infinite coupling even if punctures are distinct, for example when O_2=O_1=1.] with a total number of distinct orbits O_tot = O_1 + O_2 + O_4.
The total number of solutions to <ref> can be explicitly given in terms of the number of orbits. Let q be the order of 𝒟, thus 𝒟^q=1 implies that the phase e^i α is a q^th root of unity. Then, the total number of independent mass deformations satisfying <ref> is given by
* If e^iα≠ 1, there is one deformation for each orbit of size k such that e^ikα=1. Hence the total number of independent deformations is ∑_k O_k.
* If e^iα=1, then the number of independent deformations is O_tot-1.
We refer the reader to <ref> for a complete and detailed proof of this result.
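This counting rule is straightforward to automate; a minimal sketch (the function name and interface are ours):

def n_deformations(orbit_sizes, phase_order):
    # phase_order is the order of e^{i alpha} as a root of unity;
    # phase_order = 1 corresponds to e^{i alpha} = 1.
    if phase_order == 1:
        return len(orbit_sizes) - 1                     # O_tot - 1
    # One deformation per orbit whose size k satisfies e^{i k alpha} = 1.
    return sum(o % phase_order == 0 for o in orbit_sizes)

# A_5 example below: one orbit of size four and one of size two at tau = i.
print(n_deformations([4, 2], 2))  # 2 deformations with e^{i alpha} = -1
print(n_deformations([4, 2], 4))  # 1 deformation each for e^{i alpha} = +/- i
print(n_deformations([4, 2], 1))  # 1 deformation with e^{i alpha} = 1

This reproduces the count of five solutions found in the A_5 example below.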
The space of solutions represents all of the possible relevant deformations of the 𝒩=2 theory that preserve the non-invertible duality symmetry, and distinct solutions flow in principle to distinct 𝒩=1 SCFTs. The moduli space of the IR theories is then specified by each mass deformation, as discussed in <ref>.
We dedicate the rest of this section to explicit examples.
Mass deformed A_3 quiver theory
We consider the configuration shown in <ref>. At τ=i, 𝒟 = σ∘ t ∘ S defines a non-invertible duality defect, where t=∏_i t^τ_i and σ= s_2 s_1 s_3. If one then turns on mass deformations in such a way that the global mass vanishes, 𝒟 acts on the masses as
(m_1',m_2',m_3',m_4') =(z_2'-z_1',z_3'-z_2',z_4'-z_3',z_1'-z_4')
=(z_4-z_2,z_1-z_4,z_3-z_1,z_2-z_3)
=(m_2+m_3,m_4,m_1+m_2,-m_2) .
The condition on the masses to preserve the non-invertible duality defect reads
(m_1',m_2',m_3',m_4')=e^i α(m_1,m_2,m_3,m_4) ,
with solutions:
(m_1, …, m_4) = (1,0,-1,0) m_1 for α=π ,
(m_1, …, m_4) = (-i-1/2,1,-i-1/2,i ) m_2 for α=π/2 ,
(m_1, …, m_4) = (i-1/2,1,i-1/2,-i ) m_2 for α=-π/2 .
In other words, inside the (complex) 3-dimensional space of mass deformations for the 𝒩=2 A_3 SCFT, the subspace of deformations preserving non-invertible duality defects is one-dimensional. More precisely, it is the union of three lines.
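These solutions can be cross-checked numerically by diagonalizing the matrix implementing the action on the masses (a sketch; the matrix is read off from the transformation above):

import numpy as np

# (m_1', m_2', m_3', m_4') = (m_2 + m_3, m_4, m_1 + m_2, -m_2)
D = np.array([[0, 1, 1, 0],
              [0, 0, 0, 1],
              [1, 1, 0, 0],
              [0, -1, 0, 0]], dtype=complex)

vals, vecs = np.linalg.eig(D)
dynkin = np.ones(4)  # global mass m = sum_i m_i for A_3
for lam, v in zip(vals, vecs.T):
    v = v / v[np.argmax(np.abs(v))]          # normalize for readability
    print(np.round(lam, 3), np.round(v, 3), "m =", np.round(dynkin @ v, 3))
# Eigenvalues -1 and +/- i reproduce the three lines above; the remaining
# eigenvector, with eigenvalue 1, carries non-zero global mass and is excluded.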
As discussed in <ref>, these three mass eigenvectors determine the moduli space of the IR SCFT at the end of the RG flow. For α=π we have
xy = w^2 ( w - m_1 u )^2 ,
which is the equation of the toric L^2,2,2 singularity. In contrast, for α=±π/2 the moduli space is defined by
xy = w^4 - ( m_2 u/2)^4 .
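Both equations follow from <ref> with t_k = ∑_j<k m_j; as a cross-check, the following is a minimal sympy sketch, where for the second solution we allow the coordinate shift w → w + c u that removes the w^3 term:

import sympy as sp

w, u, m1, m2, c = sp.symbols('w u m_1 m_2 c')

def moduli_eq(masses):
    # x y = prod_k (w - t_k u) with t_k = m_1 + ... + m_{k-1}.
    ts = [sum(masses[:k]) for k in range(len(masses))]
    return sp.expand(sp.Mul(*[w - t*u for t in ts]))

print(sp.factor(moduli_eq([m1, 0, -m1, 0])))        # -> w**2*(w - m_1*u)**2

poly = moduli_eq([m2*(-sp.I - 1)/2, m2, m2*(-sp.I - 1)/2, sp.I*m2])
poly = poly.subs(w, w + c*u)
shift = sp.solve(sp.expand(poly).coeff(w, 3), c)[0]
print(sp.expand(poly.subs(c, shift)))               # -> w**4 - m_2**4*u**4/16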
Mass deformed A_5 quiver theory
We consider the S-self dual configuration of 6 punctures on the M-theory torus E_i shown in <ref>. Note that the six punctures split into a generic orbit of size four (2 5 6 3), and a non-generic one of size two (1 4). The combination 𝒟=σ∘ t ∘ S, where
t=∏_i=1^6 t_i^(τ) and σ = s_3 s_2 s_1 s_4 s_3 s_2 s_3 s_5 ,
acts on the masses as
(m_1,m_2,m_3,m_4,m_5,m_6) → (-m_3, m_3 + m_4 + m_5, m_6, m_1, m_2 + m_3 + m_4, -m_4) .
Solving the eigenvalue equation 𝒟m⃗ = e^iαm⃗ leads to the following five solutions
[ α=π (1,0,1,-1,0,-1) m_1 xy = w^4 [ w^2 - (u m_1)^2]; α=π (0,1,0,0,-1,0 ) m_2 xy = w^3 ( w - u m_2)^3; α=π/2 (1,i-1,-i,-i,i-1,1 ) m_1 xy = w^2 [ w^4 - (u m_1)^4]; α=-π/2 (1,-i-1,i,i,-i-1,1 ) m_1 xy = w^2 [ w^4 - (u m_1)^4]; α=0 (1,0,-1,1,0,-1) m_1 xy = w^2 ( w - u m_1)^4 . ]
In accordance with the general analysis, we find that there are five independent mass deformations, of which only two can be turned on simultaneously, i.e. the ones associated to α=π. The moduli spaces of the second and last solutions are toric singularities usually referred to as L^3,3,3 and L^2,4,2, respectively.
We give more examples and details in <ref>.
Mass deformed D_4 quiver theory
The same analysis applies to the D_n case. For instance, let us consider D_4, which requires 8 marked points to be placed on the torus 𝕋^2, and organize them in two orbits of size four. In <ref>, we showed that this configuration leads to a theory with a duality defect 𝒟 = R_D,4∘ R_D,2∘σ∘ t^(i)∘ S, with σ = s_1 s_3 and[Recall that for D_n, t_k^(τ) = R_C,k∘ R_A,k, acting on both marked points and images.] t^(i) = ∏_k=1^4 t_k^(i). Under 𝒟, the masses transform as
𝒟 : (m_0,m_1,m_2,m_3,m_4) → (m_1, -m_0, m_0 + m_2 + m_3, m_4, -m_3) .
The solutions of the eigenvalue equation 𝒟m⃗ = e^i αm⃗ for vanishing global mass are
[ α = π/2 (i,-1,0,-i,1) m_4 x^2 + y^2 w = w^3 + 1/2(m_4 u )^4 (y+w); α = π/2 (i-1,-i-1,1,0,0) m_2 x^2 + y^2 w = [ w^2 - (m_2 u)^4 ] ( w + 4 m_2 u ); α = -π/2 (-i,-1,0,i,1) m_4 x^2 + y^2 w = w^3 + 1/2(m_4 u )^4 (y+w); α = -π/2 (-i-1,i-1,1,0,0) m_2 x^2 + y^2 w = [ w^2 - (m_2 u)^4 ] ( w + 4 m_2 u ) . ]
§.§ Non-vanishing global mass
The case of non-vanishing global mass is more involved, since now 𝒟 acts non-trivially on the z_i. However, the punchline is the same. One applies the transformation 𝒟 to the masses using <ref> and looks for an eigenvector of masses, with the further constraint that the global mass changes as m' = m/τ, which forces the phase to be e^iα=1/τ.
We can again prove in this case that a solution to the eigenvalue problem 𝒟m⃗ = e^i αm⃗ always exists. To this end, we need to prove that there is at least one eigenvector with eigenvalue 1/τ, as required by the transformation properties of the global mass. One can check explicitly that in this case the Dynkin vector n⃗ is a left eigenvector with eigenvalue 1/τ.[The vector n⃗ is a left eigenvector with eigenvalue 1 for both the t_i^τ/R_I,i and s_i transformations, and it has eigenvalue 1/τ for the modular transformation S.] Since right and left eigenvectors of an automorphism form a basis for a vector space and its dual respectively, we have that at least one right eigenvector m⃗ exists, with eigenvalue 1/τ, such that m = n⃗·m⃗≠ 0. This proves there is always a mass deformation, with non-vanishing global mass, that preserves the duality symmetry. Moreover, for each additional right eigenvector with eigenvalue 1/τ, we have an extra dimension in the space of solutions to <ref>.[Contrary to the case of vanishing global mass, we do not have a general formula for the dimension of this space.]
As an example, consider A_3 and the configuration of <ref>, but this time in the presence of a global mass. In order to preserve the duality defect 𝒟 = σ∘ t ∘ S, the masses need to transform as
m_2 + m_3 - m τ_1 = -i m_1 ,
m_4 - m [ τ_1(i-1) + 1 ] = - i m_2 ,
m_1 + m_2 - m τ_1 = -i m_3 ,
-m_2 + m [ τ_1 (i+1)-i ] = -i m_4 ,
and there are two independent mass deformations compatible with 𝒟:
( m_1 , m_2, m_3, m_4 ) = ( 0 , τ_1/1 - τ_1, 0, 1 ) m_4 , m = m_4/1-τ_1 ,
( m_1 , m_2, m_3, m_4 ) = ( 1, 2 τ_1 - i - 1/1 - τ_1, 1, 0 ) m_1 , m = m_1 1-i/1-τ_1 ,
and the moduli space of these IR SCFTs is a 2-fold, as discussed in <ref>.
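One can verify by direct substitution that both vectors solve the four conditions above with m = m_1+m_2+m_3+m_4 (a minimal sympy sketch, with variable names ours):

import sympy as sp

t1 = sp.symbols('tau_1')
I = sp.I

def residuals(ms):
    # Left-hand minus right-hand sides of the four conditions above.
    m = sum(ms)
    return [sp.simplify(e) for e in (
        ms[1] + ms[2] - m*t1 + I*ms[0],
        ms[3] - m*(t1*(I - 1) + 1) + I*ms[1],
        ms[0] + ms[1] - m*t1 + I*ms[2],
        -ms[1] + m*(t1*(I + 1) - I) + I*ms[3],
    )]

print(residuals([0, t1/(1 - t1), 0, 1]))              # -> [0, 0, 0, 0]
print(residuals([1, (2*t1 - I - 1)/(1 - t1), 1, 0]))  # -> [0, 0, 0, 0]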
In the mass deformed A_5 theory with global mass, the condition to preserve 𝒟 is
-m_3 = -i m_1 + m (τ_1 - i) ,
m_3 + m_4 + m_5 = - i m_2 + m [ τ_1 (i-1) + i+1/2] ,
m_6 = -i m_3 - m i τ_1 ,
m_1 = -i m_4 - m i τ_1 ,
m_2 + m_3 + m_4 = -i m_5 + m [ τ_1 (i-1) + i+1/2] ,
- m_4 = - i m_6 + m ( τ_1 + i ) ,
which has two eigenvectors with eigenvalue α=-π/2
( 2τ_1 -i/2 τ_1 + i +2 , 0 , -2τ_1 + 2i - 1/2 τ_1 + i +2 , 1 - 2τ_1/2 τ_1 + i +2 , 0, 1 ) m_6 , m = 2 + 2i/2 τ_1 + i +2m_6
( -2/2 τ_1 + i +2 , 1 , -2τ_1 /2 τ_1 + i +2 , - 2τ_1 - 2i/2 τ_1 + i +2 , 1, 0 ) m_2 , m = 2 + 2i/2 τ_1 + i +2m_2
and in both cases the IR theory has moduli space given by ℂ^2/ℤ_5, from the discussion in <ref>.
As a D_n example, consider the case with n=4 and two orbits of size four, for which we find that there are two mass configurations that preserve 𝒟 = R_D,4∘ R_D,2∘σ∘ t^(i)∘ S with α = π/2
( m_0 , … , m_4 ) = (-i+1/2( τ_0 + τ_3 - 2/τ_3), i+1/2( τ_0 + i τ_3 - i - 1/τ_3), 0, 0, 1 ) m_4 ,
( m_0 , … , m_4 ) = (-i+1/2( τ_0 - i τ_3 - 2/τ_3), - -i+1/2( τ_0 + τ_3 - i - 1/τ_3), 0, 1, 0 ) m_3 ,
with global mass m=m_4/τ_3 and m=i m_3/τ_3 respectively. From the discussion in <ref>, in both cases the moduli space is simply the 2-fold Du Val singularity of type D.
More examples can be found in <ref>.
§ ACKNOWLEDGEMENTS
The authors would like to thank Simone Giacomelli, Azeem Hasan, Elias Riedel Gårding and Luigi Tizzano for useful comments and clarifying discussions.
R.A. and A.C. are respectively a Research Director and a Senior Research Associate of the F.R.S.-FNRS (Belgium).
The work of S.M. is supported by “Fondazione Angelo Della Riccia” and by funds from the Solvay Family. S.N.M. acknowledges the support from the Simons Foundation (grant #888984, Simons Collaboration on Global Categorical Symmetries).
V.T. is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy EXC 2181/1 - 390900948 (the Heidelberg STRUCTURES Excellence Cluster). This research is further supported by IISN-Belgium (convention 4.4503.15) and through an ARC advanced project.
§ DETAILS ON THE F-TERMS OF MASS DEFORMED D_N
In this section we want to identify the moduli space of the 𝒩=1 theory obtained by mass deformation of the D_n-shaped quiver gauge theory with gauge group SU(k)^4 ×SU(2k)^n-3.
We will use the following convention for fields: denote the adjoint fields with ϕ_i, the chiral fields that transform in the fundamental representation of an external node of the quiver with A and in their anti-fundamental representation with B, while the remaining field transforming in the bifundamental representation of SU(2k) gauge factors with X_i,i+1 so that
A_i=( (1)_i, (1)_2 ) , i = 0, 1 ,
B_i=( (1)_2, (1)_i ) , i = 0, 1 ,
A_j=( (1)_j, (1)_n-2) , j = n-1, n ,
B_j=( (1)_n-2, (1)_j ) , j = n-1, n ,
X_l,l+1=( (1)_l , (1)_l+1) , l = 2, … , n-3 ,
and accordingly for X_l+1,l. The generic superpotential deformed with mass terms for adjoints reads
W_𝒩=1 = ∑_i=0,1ϕ_i B_i A_i + ∑_j=n-1,nϕ_j B_j A_j
+ ϕ_2 ( A_0 B_0 + A_1 B_1 + X_23 X_32) + ϕ_n-2( A_n-1 B_n-1 + A_n B_n - X_n-2,n-3 X_n-3,n-2)
+ ∑_l=3^n-3ϕ_l ( X_l,l+1X_l+1,l - X_l,l-1X_l-1,l) + ∑_i=0^nm_i/2ϕ_i^2 .
We need to solve the F-terms, in order to find the equation that defines the moduli space and how it is affected by the choice of the masses. For the first goal, we rely on the computation carried out in <cit.> via the bug calculus graphical approach. In the following, we explicitly show how the masses affect the value of ϕ_i. The F-terms for the chiral fields are
ϕ_2 A_i = - A_i ϕ_i , i = 0, 1 ,
B_i ϕ_2 = - ϕ_i B_i , i = 0, 1 ,
ϕ_n-2 A_j = - A_j ϕ_j , j = n-1, n ,
B_j ϕ_n-2 = - ϕ_j B_j , j = n-1, n ,
ϕ_l X_l,l+1 = X_l,l+1ϕ_l+1 , l = 2, … , n-2 ,
X_l+1,lϕ_l = ϕ_l+1 X_l+1,l , l = 2, … , n-2 ,
while for the adjoint fields
B_i A_i = - m_i ϕ_i , i = 0, 1 ,
B_j A_j = - m_j ϕ_j , j = n-1, n ,
A_0 B_0 + A_1 B_1 + X_23X_32 = - m_2 ϕ_2 ,
A_n-1 B_n-1 + A_n B_n - X_n-2,n-3X_n-3,n-2 = - m_n-2ϕ_n-2 ,
X_l,l+1X_l+1,l - X_l,l-1X_l-1,l = - m_l ϕ_l , l = 3, … , n-3 .
As we did for the A_n-1, we consider the moduli space of the theory with gauge group U(1)^4 ×U(2)^n-3, and the generic case will be given by the k-th symmetric product of this space.
Let us proceed in steps. First, we show that -ϕ_0 and -ϕ_1, which are complex numbers, are the eigenvalues of ϕ_2, which is a 2 × 2 matrix. <Ref> and <ref> have the form of a right and left eigenvalue equation for ϕ_2, and there are two eigenvectors A_0, B_0^T with eigenvalue -ϕ_0, and two A_1, B_1^T with eigenvalue -ϕ_1. Assume for now that none of them is a null vector, otherwise either m_i=0 or ϕ_i=0. By <ref> we see that A_i and B_i are not orthogonal, and there is no relation between A_0 and A_1. The way to accommodate them is that B_0^T and B_1^T are proportional to A_0 and A_1, respectively, and the latter are linearly independent. So there exists a matrix V_2, whose columns are the two eigenvectors A_0 and A_1, that diagonalizes ϕ_2. On the other hand, if we lift the non-null assumption and consider, say, B_0 = 0 = B_1, then m_0 = m_1 = 0 and again we are left with A_0 and A_1 as the two eigenvectors of ϕ_2. A similar reasoning holds for ϕ_n-2, whose eigenvalues are -ϕ_n-1 and -ϕ_n with eigenvectors A_n-1 and A_n. Hence
ϕ_2^d = V_2^-1ϕ_2 V_2 = ([ - ϕ_0 0; 0 - ϕ_1 ]) , ϕ_n-2^d = V_n-2^-1ϕ_n-2 V_n-2 = ([ - ϕ_n-1 0; 0 - ϕ_n ]) .
As a second step, we show that all of these eigenvalues are equal. Consider
B_i ϕ_2 X_23 X_34… X_l,l+1… X_n-3,n-2 A_j , i=0,1 , j=n-1,n ,
and by <ref> and <ref> we can write it in two equivalent ways
- ϕ_i B_i X_23 X_34… X_l,l+1… X_n-3,n-2 A_j = B_i X_23ϕ_3 X_34… X_l,l+1… X_n-3,n-2 A_j , i=0,1 ,
j=n-1,n .
Using recursively <ref> and <ref> we can move the adjoint until the end
B_i X_23 X_34… X_l,l+1… X_n-3,n-2ϕ_n-2 A_j , i=0,1 , j=n-1,n ,
where we can use <ref> to write
-B_i X_23 X_34… X_l,l+1… X_n-3,n-2 A_j ϕ_j , i=0,1 , j=n-1,n ,
and comparing with <ref> we obtain
ϕ_i = ϕ_j := u , i=0,1 , j=n-1,n ,
ϕ_2^d = ϕ_n-2^d = - u 1_2× 2 .
As a third step, we show that all 2-dimensional matrices have the same eigenvalues. From <ref>, consider l=2, diagonalize ϕ_2 and use <ref>
ϕ_2 X_23 = X_23ϕ_3 = V_2 V_2^-1ϕ_2 V_2 V_2^-1X_23 = V_2 ϕ_2^d V_2^-1 X_23 = -u V_2 1_2×2 V_2^-1 X_23 = - u X_23 ,
and from the second and last steps this is now a left eigenvalue equation for ϕ_3, with eigenvalue -u associated to X_23. The same reasoning can be repeated for X_32ϕ_2 = ϕ_3 X_32 from <ref>, obtaining that X_23 and X_32 are the eigenvectors of ϕ_3 with eigenvalue -u. We get that ϕ_2^d = ϕ_3^d. We can recursively repeat the argument for all l, obtaining
ϕ_l = - u 1_2 × 2 , l = 2, … n-2 .
Note that the same reasoning holds in the case of all vanishing masses, so that u is the variable that parametrizes the ℂ factor in the moduli space of the 𝒩=2 theory.
As a fourth step, we construct the X_l,l+1X_l+1,l. Taking the trace of <ref> and <ref>, and using <ref>-<ref> and the fact that tr ϕ_2 = tr ϕ_n-2 = - 2 u, we get
X_23X_32 = u ( m_0 + m_1 + 2m_2 ) ,
- X_n-2,n-3X_n-3,n-2 = u ( m_n-1 + m_n + 2m_n-2) .
Similarly, from <ref> we find that
X_l,l+1X_l+1,l = X_l,l-1X_l-1,l - m_l ϕ_l = X_l,l-1X_l-1,l + 2 m_l u , l = 3, … , n-3 ,
and using it recursively we get that
X_n-3,n-2X_n-2,n-3 = X_23X_32 + u ∑_l=3^n-3 2 m_l .
By inserting <ref> in <ref> and summing with <ref> we obtain
u ( m_0 + m_1 + ∑_l=2^n-2 2 m_l + m_n-1 + m_n ) = u m = 0 .
Similarly to what happens in the A_n-1 case, the global mass and the value of the adjoint fields are related: when the global mass is zero, u can be non-zero, while when the global mass m≠ 0, u=0 is forced.
Finding the form of the moduli space for m=0 and u≠0 by solving directly the F-terms is quite involved. As for A_n-1, in <cit.> they deform the quiver gauge theory by FI terms b_i at each node and they carry out this computation exploiting the graphical tool of bug calculus. By comparing the F-terms in <ref>-<ref> with the graphical representation in <cit.>, we can identify
b_i ↔ - m_i ϕ_i , ∀ i ,
where all FI-terms are subject to the condition
b_0 + b_1 + b_n-1 + b_n = 2 ∑_i=2^n-2 b_i ,
which translates into the trace of the sum of the m_i ϕ_i, i.e.
( m_0 + m_1 + ∑_l=2^n-2 2 m_l + m_n-1 + m_n ) u = 0 .
§ ON 𝒩=1 MASS DEFORMATIONS PRESERVING NON-INVERTIBLE SYMMETRIES
We systematically study the solutions to the eigenvalue problem
𝒟 m⃗ = e^i α m⃗
when the global mass vanishes. We consider the action of the permutation first on the z_i, and then on the masses m_i. Let 𝒵 be the vector space spanned by the z_i's in A_n-1 configurations.
Note that 𝒟 can be block diagonalized in 𝒵 according to the orbit decomposition discussed in <ref>. Each block is a finite order matrix and hence can be diagonalized. Moreover, the minimal polynomial of each block is x^n-1, where n is the order of the orbit, and it divides the characteristic polynomial of the block, which is of the same order. Therefore, each orbit contributes n eigenvalues, specifically n distinct roots of unity. Now, the masses m_i span a codimension one subspace ℳ of 𝒵. The direction orthogonal to ℳ in 𝒵 is generated by the vector of Dynkin labels n⃗ since m⃗·n⃗ = 0, and it is associated with an eigenvector of the permutation matrix with eigenvalue 1. Thus, the defect 𝒟 acting on ℳ retains all the eigenvectors but the one dual to n⃗.
In conclusion, within the space of mass deformations solving <ref>, those corresponding to the eigenvalue e^i α where (e^i α)^k=1 and e^i α≠ 1 span a subspace whose dimension is the number of orbits of size a multiple of k. Mass deformations solving <ref> with eigenvalue e^i α=1 rather span a subspace of dimension the total number of orbits minus 1. This reasoning applies both for duality and triality symmetries of A_n-1 quivers. Since every D_n configuration of marked points can be seen as a special A_2n configuration, where the tiltings associated to a puncture and its image under R_D satisfy z_i'=-z_i, the logic also applies to D_n quivers provided one restricts to the subspace of 𝒵 (and ℳ) satisfying this additional constraint.
This rationale translates into an efficient method for computing which mass deformations preserve non-invertible duality defects in general cases. While the system of <ref> can in principle always be explicitly solved by brute force, in practice it becomes rapidly cumbersome as the number of punctures grows. However, the underlying orbit structure allows the advertised more efficient calculation.
The main point is to trade the `physical' basis of ℳ for another basis adapted to the orbit decomposition under 𝒟. For example, in A_3 as studied in <ref>, a convenient choice[Since n_1+n_2+n_3+n_4=0, here “basis" is to be understood as “generating set".] is:
(n_1,n_2,n_3,n_4)=(z_3-z_1,z_4-z_3,z_2-z_4,z_1-z_2) .
It satisfies the convenient property that
(n_1',n_2',n_3',n_4')=(n_4,n_1,n_2,n_3) ,
which in turn simplifies the analysis of the condition (n_1',n_2',n_3',n_4') = e^i α(n_1,n_2,n_3,n_4): e^i α must satisfy e^4 i α=1 and e^i α≠ 1 (so that the global mass vanishes). This result is equivalent to the one of <ref>, as
n_1 = m_1+m_2 , n_2 = m_3 , n_3=-m_2-m_3 , n_4 = -m_1 ,
is invertible.
To analyse a general configuration one needs to group the punctures into orbits under t∘ S. An example is shown in <Ref>, which displays a configuration consisting of two orbits of size four and one orbit of size two under σ∘ t∘ S on E_i.
The 9-tuple (m_1,…,m_9) is a basis of ℳ and we also show n_1=-(m_3+m_4+m_5) and n_2=-(m_7+m_8+m_9) in <Ref> for symmetry. Under σ∘ t∘ S:
(m_1',m_2',m_3',m_4',m_5',m_6',m_7',m_8',m_9') = (-m_1,m_2-n_1,n_1,m_3,m_4,n_1+m_6-n_2,n_2,m_7,m_8) .
Imposing m_i'=e^i α m_i ∀ i with e^i α an i-independent phase yields conditions which also split into orbits:
-m_1 =e^i α m_1 ,
(n_1,m_3,m_4,m_5) =e^i α(m_3,m_4,m_5,n_1) ,
(n_2,m_7,m_8,m_9) =e^i α(m_7,m_8,m_9,n_2) ,
m_2-n_1 = e^i α m_2 ,
n_1+m_6-n_1 = e^i α m_6.
If m_1≠ 0 then e^i α=-1, and all remaining masses are determined by, say, m_1, m_3 and m_7. If m_1=0 and at least one of n_1,m_3,m_4,m_5,n_2,m_7,m_8 or m_9 is non-zero, then e^4 i α=1 and e^i α≠ 1. As before, all masses can be expressed in terms of, say, m_3 and m_7. Last, if only m_2 and m_6, which connect different orbits, are non-zero, then e^i α=1, and m_2,m_6 are free parameters. This generalizes to any configuration of punctures, leading to the count of deformation parameters preserving non-invertible symmetries written in <Ref>.
The same strategy applies to triality defects of order 6 and 3, as well as to duality and triality symmetries of D_n quivers, with the additional constraint evoked above.
§ EXAMPLES WITH MASS DEFORMATIONS
§.§ Duality defects
§.§.§ Duality symmetry for D_4 with vanishing global mass
The configuration for D_4 requires 8 marked points to be placed on the torus 𝕋^2, and we can organize them in either two orbits of size four, or one orbit of size four and one orbit of size two, where the latter requires two points to be placed on top of ℤ_2 fixed points, so that their ℤ_2 images sit at the same position. While two marked points at the same location would lead to an inconsistency in the A_n-1 case, in the D_n case a marked point and its orientifold image can indeed sit at the same point. This is consistent with the construction in <ref> as well as with the definition of the τ_i, <ref>, where none of them vanish in the present configuration. Since the first configuration is discussed in <ref>, here we examine the second one. To be precise, we place the points as
p_1 = 1/2 , p_3 = i p_2 + 1 , p_4 = i/2 ,
where p_2 is free to be placed with 0 < Re (p_2) < Im (p_2) < 1/2. Starting with this configuration, we can define a non-invertible duality defect as 𝒟 = R_D,3∘σ∘ t ∘ S, with σ = s_3 s_2 s_3 s_1 s_2 s_3 and t = t_1^(i) t_2^(i) t_3^(i). On the masses, we have that
𝒟 : (m_0,m_1,m_2,m_3,m_4) → (-m_4, -m_3, -m_2, -m_0, -m_1) ,
and the solutions of the eigenvalue equation 𝒟m⃗ = e^i αm⃗ are
( m_0 , … , m_4 ) = (i, -i, 0, -1, 1 ) m_4 , for α = π/2 ,
( m_0 , … , m_4 ) = (-i, i, 0, -1, 1 ) m_4 , for α = - π/2 ,
( m_0 , … , m_4 ) = (-1, -1, 0, 1, 1 ) m_4 , for α = 0 .
The deformed theory's moduli space is given by
x^2 + y^2 w = w ( w^2 - v^4 ),
for both the first and the second solution and with v=m_4 u, while the others lead to
x^2 + y^2 w = w ( w + v^2 )^2 .
§.§ Triality defects
We provide some examples with triality symmetries that involve an ST transformation, whose action is discussed around <ref> and that leaves τ = ρ = e^2 π i/3 invariant. The allowed orbits are the following. Orbits of size one are given by the vertices of the fundamental cell. The single orbit of size two consists of the points denoted by C_1 and C_2 with coordinates,
C_1 = 1/2 + i √(3)/6 = ρ+2/3 , C_2 = i√(3)/3 = 2 ρ + 1/3 .
The orbit of size three is realized with the three points
q_1 = 1/2 , q_2 = ρ/2 , q_3 = ρ+1/2 ,
while the orbits of size six are given by the points placed at
p_1 = α , p_2 = ρ^2 α - ρ^2 , p_3 = - ρα + ρ ,
p_4 = 1 + ρα , p_5 = - ρ^2 α , p_6 = - ρ^2 - α ,
with α in the triangle (0, 1, ρ +1 ).
§.§ Mass deformed A_1 quiver theory
Vanishing global mass
Consider the theory A_1 at τ = ρ = e^i 2 π/3 with an orbit of size 2, and global mass m = 0. This configuration preserves a defect 𝒟 = s_1 ∘ t_1^(ρ)∘ S T. The unique solution reads
[ α = - π/2 (m_1, - m_1) xy = w (w - um_1) , ]
which flows to the conifold.
Non-vanishing global mass
In the case of A_1 with global mass m ≠ 0, the masses transform as
- m_1 - m ρ (τ_1 + 1) = - ρ m_1 ,
m_1 + m ρτ_1 = - ρ m_2 ,
whose solution is (1/2,1) m_2.
§.§ A_7 with vanishing global mass
Let us compute the mass deformations of the A_7 quiver gauge theory which preserve the non-invertible triality symmetry of order 6. The eight corresponding punctures at τ= e^2iπ/3 necessarily split into an orbit of size 6 and an orbit of size two as displayed in <ref>.
We apply the strategy outlined in <ref> and consider the masses shown in <ref>. Under the action of ST composed with deck transformations and a permutation, the mass deformations preserve the non-invertible triality symmetry if and only if they solve:
(n,m_1,m_2,m_3,m_4,m_5) = e^iα(m_1,m_2,m_3,m_4,m_5,n)
-m_6 = e^iαm_6
m_1 + m_7 + m_6 = e^iαm_7
We can then read directly that
* If (e^iα)^6=1 and e^iα≠±1, then all masses are determined by m_1,
* If e^iα=-1 then all masses are determined by m_6 and m_1,
* If e^iα=1 then m_7 is the only free parameter, as all other masses have to be set to zero.
This result translates to any other mass basis, for example the physical one.
§.§ Mass deformed D_4 quiver theory
Vanishing global mass
Consider the theory D_4 at τ = ρ = e^i 2 π/3 with an orbit of size 6 and one orbit of size 2. The configuration preserves a defect 𝒟 = R_D,1∘ R_D,4∘ s_2 s_1 ∘ t_1^(ρ)t_2^(ρ)t_4^(ρ)∘ ST, which transforms the masses as
𝒟 : (m_0, m_1, m_2, m_3, m_4) → (m_2, m_0 + m_1 + m_2, - m_1 - m_2, m_1 + m_2 + m_4, m_1 + m_2 + m_3) .
The solutions are
[ α = π (1,0,-1,0,1)m_0 x^2 + y^2 w = w^3 + 12 t^2 w^2 + 30 t^4w + 6 t^4 y + 28 t^6; α = π (1,0,-1,1,0)m_0 x^2 + y^2 w = w^3 + 12 t^2 w^2 + 30 t^4w + 6 t^4 y + 28 t^6; α = π/3 (-1,i √(3),ρ,1,1)m_3 x^2 + y^2 w = w^3 + ws^4 - w^2 s^2; α = 4π/3 (-1,-i √(3),ρ,1,1)m_3 x^2 + y^2 w = w^3 + 4 ws^4 - 4 w^2 s^2 - 3 s^6 ]
where t = u m_0/2 and s=u m_3.
Non-vanishing global mass
In the case of D_4 with global mass m ≠ 0, the masses transform as
m_2 + m (1 - τ_1) = - ρ m_0 ,
m_0 + m_1 + m_2 + m (ρ^2 - ρ)( 1 - τ_1) = - ρ m_1 ,
- m_1 - m_2 - m ρ^2 (1 - τ_1) = - ρ m_2 ,
m_1 + m_2 + m_4 + m [ - 1/3ρ( ρ - 1 ) + ( ρ + 1 ) ( - ρτ_1 + ρ - 1 ) ] = - ρ m_3 ,
m_1 + m_2 + m_3 + m [ 2/3( ρ^2 - 1 ) + τ_1 ] = - ρ m_4 ,
whose unique solution is
( √(3)τ_1 - 1/√(3) (3 - τ_1) + 2 i , 3(τ_1 - 1)(i + 2 √(3))/√(3)(τ_1 - 3) - 2i , 1/2(τ_1 - 1)(3i + 7 √(3))/√(3)(3 - τ_1) + 2i , 1 + i + √(3)/√(3)(τ_1 - 3) - 2i , 1 ) m_4 ,
with global mass
m = 45 i + 21 √(3)/23 √(3) - 3 (2i + 3 √(3) )τ_1 + 36 i .
arXiv:2409.03614v1 [cs.RO, cs.LG], 5 September 2024
Modular Parallel Manipulator for Long-Term Soft Robotic Data Collection
Kiyn Chin, Carmel Majidi, Abhinav Gupta
§ ABSTRACT
Performing long-term experimentation or large-scale data collection for machine learning in the field of soft robotics is challenging, due to the hardware robustness and experimental flexibility required. In this work, we propose a modular parallel robotic manipulation platform suitable for such large-scale data collection and compatible with various soft-robotic fabrication methods. Considering the computational and theoretical difficulty of replicating the high-fidelity, faster-than-real-time simulations that enable large-scale data collection in rigid robotic systems, a robust soft-robotic hardware platform becomes a high priority development task for the field.
The platform's modules consist of a pair of off-the-shelf electrical motors which actuate a customizable finger consisting of a compliant parallel structure. The parallel mechanism of the finger can be as simple as a single 3D-printed urethane or molded silicone bulk structure, due to the motors being able to fully actuate a passive structure. This design flexibility allows experimentation with varied soft-mechanism geometries, bulk properties, and surface properties. Additionally, while the parallel mechanism does not require separate electronics or additional parts, these can be included, and it can be constructed using multi-functional soft materials to study compatible soft sensors and actuators in the learning process. In this work, we validate the platform's ability to be used for policy gradient reinforcement learning directly on hardware in a benchmark 2D manipulation task. We additionally demonstrate compatibility with multiple fingers and characterize the design constraints for compatible extensions.
§ KEYWORDS:
robotics, soft robotics, soft materials, reinforcement learning
§ INTRODUCTION
Soft robotic systems are often operated using open-loop control methods that leverage the inherent compliance and conformability of soft materials as they press against contacting surfaces. Popular examples of this are compliant grippers that can conform to a wide range of objects with limited sensing<cit.><cit.> and soft robots that can passively change shape as they pass through confined spaces <cit.>. While material compliance enables intrinsic reactivity in the form of material deformation upon contact, there is still a need for the ability to control the positioning of soft robotic systems in free space, or to specify contact beyond simply modulating force. Computing the deformation of soft materials in isolation can be time intensive. The creation of a priori models of soft robotic systems, whose behavior might be dependent on interactions between soft materials with specific, potentially novel compositions or geometries, is usually not effective. Therefore, to create models or controllers for soft robotic systems, data-driven methods are quite appealing. Machine learning provides the potential to overcome the challenges of understanding soft material behavior from first principles. However, to enable operation of soft material-based robotic systems over longer periods of time, as would be required of a useful tool, the way that soft materials change over time is another reason to move towards the ability to collect larger amounts of hardware data.
While soft robots can be accessible to build, effective control strategies are less clearly available. Soft robot dynamics are difficult to accurately model analytically, due to a multi-physics coupling between shape, forces, physical state (e.g. temperature), and history of motion. To enable operation of soft robots across longer time scales, it is important to be able to collect hardware data that capture these phenomena, especially those that vary with time. As shown in Fig. <ref>, the deformation of elastomeric materials has multiple forms of time dependent phenomena. There is hysteresis, which means there is no one-to-one mapping from actuation force to deformation – instead result of any applied force or deformation depends on the recent history of the system. There is also nonstationarity, which is a distribution shift in the underlying dynamics of a system over time, and can be caused by wear, external temperature fluctuations, internal strain build-up, or many other sources. Hysteresis and nonstationarity are inherent side-effects of the use of elastomers and other soft materials in the construction of soft robot systems. Popular methods often ignore these effects or treat them as unmodeled noise<cit.><cit.>. Hysteresis can theoretically be addressed by incorporating system state history into the input of models. This explicit time dependence can be encoded in structures like recurrent neural networks<cit.> or by simply concatenating multiple time-steps of state data as the input to the model. Nonstationary behavior necessitates models that can adapt to changing dynamics <cit.>.
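For instance, the history-concatenation approach mentioned above can be as simple as the following sketch (the function name and shapes are illustrative):

import numpy as np

def history_input(states, controls, k):
    # Stack the last k (state, control) pairs into one feature vector so a
    # feedforward dynamics model sees enough recent history to resolve
    # hysteresis, without requiring an explicitly recurrent architecture.
    pairs = [np.concatenate([x, u]) for x, u in zip(states[-k:], controls[-k:])]
    return np.concatenate(pairs)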
These inherent dynamics present an essential problem to the autonomy of soft robotic systems (though deliberate changes to dynamics are an emerging feature of soft robotics <cit.>), and may also function as a reasonable proxy for the general problem of imprecise dynamics. Errors in modeling and the drift in models over time exist in rigid-body robotic systems. As described in a report on the problems faced by teams in the DARPA Robotics challenge, on the scale of full humanoid systems, modeling errors due to "wrong kinematic and inertial parameters, cogging and other magnetic effects in electric motors, actuator dynamics and hysteresis, variable viscous or nonlinear dynamic friction depending on the state of hydraulic oil leakage, dust, and dirt, thermal (both external weather and actuator heating) effects, humidity effects, constant delay, variable delay due to computer communications. joint/actuator/transmission stiction and other static friction effects, foot slip-stick on touchdown and during stance, six dimensional bearing play, structural link deformation, and material aging" <cit.> all come into play. More generally, most classes of robotic system can encounter problems with dynamics which do not match the model, either due to mis-specification or nonstationary dynamics. This is especially true for soft robots and systems composed of mechanically compliant materials.
In traditional control theory, the nonstationarity problem is among those problems addressed by the subfield of adaptive control. Traditional adaptive control methods often rely on high-fidelity equations of motion with a small number of uncertain parameters<cit.>. There has been work integrating these techniques with neural network models, an approach called concurrent-learning adaptive control (CLAC) <cit.>. Other approaches have leveraged state-estimation techniques, such as Kalman filtering, to update analytical parameter sets <cit.>.
In the reinforcement learning literature, there have been a few works attempting to solve the problem of learning control under nonstationary dynamics via multiple partial models. These generally train several neural networks for different regions of the drifting dynamics. Multiple Model-based Reinforcement Learning (MMRL) <cit.> creates a static library of neural models which correspond to a known set of dynamics, and are trained as the system moves through those different modes. Reinforcement Learning with Context Detection (RL-CD)<cit.> trains in a similar way, but rather than a static set of models, there is a context detection module which allows for new models to be created as the system encounters different modes. Hierarchical Reinforcement Learning with Context Detection (HRL-CD)<cit.> combines the technique of hierarchical reinforcement learning for accelerating learning convergence with the RL-CD framework. All of these techniques make the assumption of deterministic and discrete dynamics.
There has been work in enabling faster transfer learning for problems in related domains using neural attention and transformer networks <cit.>. Similarly automatic domain randomization (ADR) has been shown to allow for the online formation of reactive controllers which can adapt to a wide range of environmental parameters<cit.>. These methods rely on extremely high-volume data collection for training, requiring high fidelity simulation and compute time on a scale which can be inaccessible to resource-constrained development environments.
Learning for nonstationary dynamics can be seen as an extension of the canonical robot model learning problem. For a system with stationary dynamics, there is the assumption that all experience tuples (x_t, u_t, x_t+1) are drawn from the same distribution subject to the constraint of the true system dynamics x_t+1 = F(x_t,u_t), where x_t represents the state of the system at time t and u_t is the control effort at the same time. Since all data provides meaningful, if noisy, information about the true dynamics of the system, all tuples can be used to refine the fidelity of the agents dynamics estimate. In the nonstationary case, this assumption is violated.
For the nonstationarity case, the ground truth system dynamics vary as a function of time. While this variance can be in the form of the dynamics as well as in the values of dynamics parameters, we assume that there exists some parameterization of the variation such that we can write the dynamics as x_t+1 = F(x_t,u_t, ψ_t), where ψ_t is the parameter vector determining the dynamics at time t.
The behavior of the vector ψ_t is generally unconstrained, and therefore requires domain specific knowledge to create a reasonable set of assumptions.
Three cases for how the dynamics vector ψ_t might evolve (a simulation sketch follows the list):
* Trends in the dynamics, e.g. material degradation, polymer creep, or thermal buildup. These are gradual non-periodic changes.
* Random or event-driven changes to dynamics , e.g. mechanism damage, system repair, parts replacement, or power-cycling. These changes can be unknown or known, and are discrete jumps in the system dynamics.
* Cyclical or oscillating dynamics e.g. due to environmental changes corresponding with day-night cycles. These are finite periodic changes
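The following toy sketch illustrates the three cases for a scalar system (all functional forms and constants here are illustrative assumptions, not measured models):

import numpy as np

def psi_trend(t, rate=1e-5):
    # Case 1: gradual, non-periodic drift (e.g. creep or wear).
    return 1.0 + rate * t

def psi_jump(t, jump_times=(5000,), delta=0.2):
    # Case 2: discrete, event-driven changes (e.g. repair, power-cycling).
    return 1.0 + delta * sum(t >= tj for tj in jump_times)

def psi_cyclic(t, period=10_000, amp=0.05):
    # Case 3: periodic variation (e.g. day-night temperature cycles).
    return 1.0 + amp * np.sin(2 * np.pi * t / period)

def step(x, u, psi):
    # Toy scalar dynamics x_{t+1} = F(x_t, u_t, psi_t); psi scales the
    # response to the control input, standing in for a drifting stiffness.
    return 0.95 * x + psi * u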
§ METHODS
To better understand the dynamics of soft materials in a long-term experimental environment, we developed a robotic experimental platform which allows automatic performance of a class of simple manipulation tasks using mechanisms made from soft materials. The platform is relatively inexpensive to fabricate and capable of operating for long periods of time. The behavior is highly coupled to the material properties of the soft materials used in its construction, allowing insights into those properties. We demonstrate the ability to train reinforcement learning policies using only hardware data collected with this system.
§.§ Motor-driven Soft Parallel Mechanism
In order to create a system that can operate for long periods of time during data collection for machine learning or other long-term experiments, we use reliable electric servomotors as the actuators in our design. Many of the moving and interacting parts of the robotic system should be made of soft materials so that the properties of those soft materials are encoded in data of robot motion. Therefore, we employ the electric motors to actuate a soft matter parallel mechanism, a relatively under-explored strategy that has been shown to enable robust, versatile experimental platforms <cit.>. The usage of soft materials was focused on the mobile mechanism rather than the whole system, because the system components that are not the focus of study should provide as little disturbance as possible to the behavior of the system over time. While soft actuators are diverse, enable many unique properties for robotic systems, and provide a path to fully soft systems, they have relatively complex dynamics. Additionally, the kinds of materials used in their manufacture are not as diverse as for a passive mechanism, and many classes of soft actuator require specialized development and operational environments to maintain <cit.>. Electric motors are especially robust forms of actuation with minimally impactful actuator dynamics. When these motors are coupled with internal closed-loop controllers, this is even more the case. We choose to base our designs on a planar parallel mechanism, the compliant five-bar.
The choice of a five-bar mechanism as the soft structure the robot uses for interaction is helpful partly because of the many ways it can be represented. It is feasible that the geometry of the compliant five-bar could be modeled by a discrete elastic rod simulation environment <cit.>. This simulation strategy is one of the fastest found for soft materials, and provides one feature space that could be useful to characterize our system with. Additionally, the approximation of the compliant five-bar via the kinematics of a rigid-body five-bar <ref>A provides an avenue for comparing the effects of the soft materials against a close geometric analogue. The difference in fidelity can be intuitively understood from the fact that different joint thicknesses produce different behavior of a parallel structure, due to changes in spring stiffness (Fig <ref>), yet all joint thicknesses are represented by the same rigid approximation. Parameter estimation of the associated rigid approximation might provide useful data for adaptation.
§.§ Compliant Five-bar Modules
The modules are actuated by two servomotors with aligned axes of rotation mounted next to each other in a 3D printed TPU housing, assembled with a friction fit <ref>. The module design is compatible with both a lower cost servomotor (Dynamixel XC430-W150-T, $120) and a higher performance servomotor (Dynamixel XM430-W210, $270). The bases of the modules are attached to a mechanical breadboard with bolts. These mechanical design choices improve the ability of the system to operate for longer periods of time without requiring repair, as well as simplifying repair <ref>. The primary moving parts of the modules are compliant five-bar mechanisms which mount to the servomotors. The state of an individual module is determined by the angles of the driving servos. There is no autonomously controllable nonstationarity for this system. Instead, the soft components are the primary source of trends over time in the dynamics. The dynamics can be controlled by swapping out five-bars, which slot onto 3D printed quick-swap servo horns.
§.§ Manipulation Task: Knob Turning
This task is adapted from the ROBEL robotics learning benchmark designed by Ahn et al. <cit.>, replacing the rigid-body robot they use with soft five-bar modules. There is a knob-like object in the center of the workspace, mounted to a servomotor. The servomotor enables encoder-based state estimation of the knob's pose and allows autonomous resetting of the system to a nominal state. This is a critical feature for running machine learning experiments autonomously, and this need is also a major motivator of the simple planar geometry chosen for the task. Soft five-bar modules are arranged around the knob and are tasked with turning the knob to a desired pose. This general task outline can be modified by changing the location of modules, the geometry of the five-bars, or the temporal dynamics of the desired pose (either static goal poses or pose trajectories). The use of a servo to mount the knob also allows adjustment of the knob's stiffness (as long as the servo used has a torque-control mode).
To maximize the learning speed on hardware, we again leverage dimensionality reduction via action space discretization. There is a relatively natural choice of discrete actions for electric motors in a quasistatic context: increasing or decreasing the angle of the motor by some static amount. These primitives span the space of available states for a motor. This is a superset of the valid states of a module, but by applying geometry-dependent limits to servo position, the full state of each module can be explored by these primitives. The full manipulator array would then have an action space of
a_t ∈{mag ·Δθ(servo_i)}, where i ∈{0,…,5} and mag ∈{±angular resolution}
However, this low level of abstraction poses some challenges to the speed of training. For studying the ability to bootstrap autonomy, such low-level action spaces are often chosen to minimize learned bias and reliance on human design decisions. However, for the goal of making robotic systems development more accessible, there is a different set of considerations. This system is designed to allow more hardware data to be collected than many fully soft robots, yet there is still a time cost of collecting that data. While the easily replaceable modules and resilient mechanical design minimize damage to the most expensive hardware components, we still aim to minimize the amount of time necessary to run hardware to collect data. Therefore, the action space is chosen to maximize the data-efficiency of the learning process, incorporating as much human intuition as possible. First, an intermediate action space is developed that allows the tip position of each module to be controlled in an "extend", "retract", "left", "right" scheme. Then we build the actual primitives upon this as trajectories of intermediate positions that produce a distinct sweeping motion over several time-steps. Specifically, the motion is a sequence of "move left" → "extend" → "move right" → "retract", shown in Figure <ref>. This sweeping motion and its mirror can be performed by each module, resulting in an action space of:
a_t ∈{sweep(module_i, dir)}, where i ∈{0,1,2} and dir ∈{left, right}
These primitives are hierarchical, being built first upon the lowest-level motor delta primitives and then the intermediate tip-control primitives. While not explored here, this leaves space for learning at a lower level if more optimal or less biased behavior is desired later on. These sweeping primitives are very likely to move the knob if the moving module is aligned with a lobe of the knob geometry. This means that the learning problem is figuring out how to sequence the primitives to complete the task, which requires learning some latent representation of the relative geometries of the modules and the knob and of the way the modules interact when making contact with the knob, but does not require learning coherent motion.
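The following Python sketch illustrates this primitive hierarchy; the motor-delta signs and the helper names (tip_move, sweep) are hypothetical stand-ins for the hardware controller's actual interface.

# Intermediate tip-control primitives expressed as signed motor deltas
# (servo0, servo1); the signs are illustrative, not calibrated values.
TIP_MOVES = {
    "extend":  (+1, -1),
    "retract": (-1, +1),
    "left":    (+1, +1),
    "right":   (-1, -1),
}

def tip_move(module_angles, move, delta=5.0):
    # Map a tip-level command onto deltas for the two driving servos.
    s0, s1 = TIP_MOVES[move]
    a0, a1 = module_angles
    return (a0 + s0 * delta, a1 + s1 * delta)

def sweep(module_angles, direction="left"):
    # Macro primitive: a sweeping motion executed over several time steps.
    sequence = ["left", "extend", "right", "retract"]
    if direction == "right":  # the mirrored sweep
        sequence = ["right", "extend", "left", "retract"]
    trajectory, angles = [], module_angles
    for move in sequence:
        angles = tip_move(angles, move)
        trajectory.append(angles)  # each waypoint is sent to the servos
    return trajectory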
For the following results, the task is learning to turn the knob to a defined goal position from a defined start position.
§ RESULTS
§.§ Reinforcement Learning on Hardware
Reinforcement learning was used to learn this task from empirical hardware data. We used policy gradient methods from the Stable Baselines3 repository of standard reinforcement learning algorithms <cit.>, specifically Proximal Policy Optimization (PPO) with a batch size of 64 and a learning rate of 10^-3. We wrapped the hardware controller for the modules in a Gym environment.
The action space of the reinforcement learning problem was the sweeping primitives derived in the previous section. The state space is the servo angles of each module, normalized between -1 and 1. Appended to this is the knob pose, represented as a two-element vector of the cosine and sine of the knob angle, for better numerical performance in the policy network. The reward used for reinforcement learning was
Reward = -5 ×|Δθ_knob|
+ 10 ×1{|Δθ_knob| ≤ 0.25/π}
+ 50 ×1{|Δθ_knob| ≤ 0.10/π},
corresponding to a smoothly increasing cost of 5 times the angle of the knob in radians, penalizing being farther away from the goal position of 0 radians. There is a big reward jump of 10 if within 0.25/π radians, and an even bigger reward of 50 if within 0.10/π radians of the goal position.
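A minimal Python sketch of the reward and observation construction described above, together with the corresponding PPO training call; env is assumed to be the Gym wrapper around the hardware controller, and the helper names are illustrative.

import numpy as np
# from stable_baselines3 import PPO  # training library used above

def knob_reward(knob_angle):
    # Smooth cost on the absolute knob angle (goal pose is 0 rad),
    # plus two bonus thresholds for being close to the goal.
    err = abs(knob_angle)
    reward = -5.0 * err
    if err <= 0.25 / np.pi:
        reward += 10.0
    if err <= 0.10 / np.pi:
        reward += 50.0
    return reward

def observation(servo_angles_normalized, knob_angle):
    # State: normalized servo angles plus (cos, sin) of the knob pose.
    return np.concatenate([servo_angles_normalized,
                           [np.cos(knob_angle), np.sin(knob_angle)]])

# Training sketch with the hyperparameters stated above:
# model = PPO("MlpPolicy", env, batch_size=64, learning_rate=1e-3)
# model.learn(total_timesteps=2**16)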
The reinforcement learning experiments were run in increasing lengths of time, following a powers-of-two progression. The first experiment was 2^14 = 16,384 time-steps and progressed up to 2^16 = 65,536 time-steps. The next step up, 2^17 = 131,072 time-steps, was also run, but partway into this the system finally experienced a hardware failure: one soft finger failed at a hinge and pushed itself out of its housing, like the failure mode shown in Figure <ref>B. The repair process for such a failure was quick, requiring only the attachment of a new finger with bolts to the quick-swap servo horn and the fastener-free reassembly of the module.
The length of a time step is the amount of time it takes to execute a primitive, which is about 5 seconds. This means that the longest completed continuous batch of data collection, the 65k batch, ran for 3.76 days straight. In all, counting all the completed batches as well as the 10,714 time-steps completed in the 131k batch before hardware failure, the system was active for 2^14+2^15+2^16+10,714 = 125,402 time-steps, or 7.25 days. There were automated cooldown periods between batches, totaling 2 hours of downtime, for a total of 7.34 days from the beginning of data collection. This means the system operated for over a week at 98.7% uptime before requiring repair.
The system was able to improve in task performance over time, as shown in Figure <ref>. Based on the reward function, achieving a mean reward of 10 was significant, as it requires quickly moving the system close to the goal without spending many time steps. An episode length of 4 is significant as that was almost the fastest possible execution (shown in Figure <ref>). Episode lengths of 3 were observed, but they involved hard-to-replicate interactions where, due to the angle of contact, elastic energy was stored in the five-bar before being released, pushing the knob further than the quasistatic sweeping motion.
Once the ability to collect enough hardware data to perform reinforcement learning was demonstrated, we evaluated the platform's ability to provide insight into the effects of different soft materials on the dynamics.
§.§ Policy Transfer Between Soft Materials
Previous results were obtained using 3D printed thermoplastic urethane five-bars. We created new compliant five-bars from injection-molded silicone (Ecoflex 00-30, Smooth-On), a much softer material. These were manufactured with a desktop injection molding setup, powered by compressed air. We trained the system to perform the knob turning task using the reinforcement learning system outlined above. In Figure <ref>, we can see the execution of this policy on hardware.
We then characterized the performance of the policy learned on the silicone five-bars when executed on silicone five-bars as well as on TPU five-bars. We also characterized the policy learned on the TPU five-bars on both systems. The results of these experiments are shown in Figure <ref>A.
The best performing case was the TPU-trained policy executed on TPU. One might expect the dominant factor in transfer performance to be the similarity between system dynamics during training and during execution. However, we see that the next best performing case is the TPU-trained policy executed on silicone. This suggests that the material properties of TPU are more conducive to learning. This makes sense, as the softness of silicone means that deformation of the five-bar links outside of the living hinges is more significant, as shown in Figure <ref>B. This provides more possible physical configurations for a given motor position, effectively increasing the noisiness of state transitions and making learning a longer process. Additionally, the worst performing case is the silicone-trained policy on a silicone five-bar. The fact that a policy trained on silicone performs better on TPU indicates that the stiffness of TPU is a boon for executing actions reliably.
The ability to extract insights about the suitability of different materials for this task from simple experiments like this supports the utility of this kind of research platform for studying the effect of soft material dynamics on robotic system operation.
§ ACKNOWLEDGEMENTS
Thank you to Tess Hellebrekers for fabrication of knob hardware, and to Richard Desatnik for fabrication of silicone five-bars. Thank you to Vikash Kumar for firmware optimization for the Dynamixel servos and introduction to the ROBEL task.
§ CONFLICT OF INTEREST STATEMENT
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
§ DATA AVAILABILITY STATEMENT
The code for this study can be found in the following GitHub repository: https://github.com/kiynchin/module-nonstationarity.
|
http://arxiv.org/abs/2409.02907v1 | 20240904174814 | GraphTrials: Visual Proofs of Graph Properties | [
"Henry Förster",
"Felix Klesen",
"Tim Dwyer",
"Peter Eades",
"Seok-Hee Hong",
"Stephen G. Kobourov",
"Giuseppe Liotta",
"Kazuo Misue",
"Fabrizio Montecchiani",
"Alexander Pastukhov",
"Falk Schreiber"
] | cs.HC | [
"cs.HC"
] |
§ ABSTRACT
Graph and network visualization supports exploration, analysis and communication of relational data arising in many domains: from biological and social networks, to transportation and powergrid systems. With the arrival of AI-based question-answering tools, issues of trustworthiness and explainability of generated answers motivate a greater role for visualization. In the context of graphs, we see the need for visualizations that can convince a critical audience that an assertion about the graph under analysis is valid. The requirements for such representations that convey precisely one specific graph property are quite different from standard network visualization criteria, which optimize general aesthetics and readability.
In this paper, we aim to provide a comprehensive introduction to visual proofs of graph properties and a foundation for further research in the area.
We present a framework that defines what it means to visually prove a graph property. In the process, we introduce the notion of a visual certificate, that is, a specialized faithful graph visualization that leverages the viewer's perception, in particular, pre-attentive processing (e. g. via pop-out effects), to verify a given assertion about the represented graph. We also discuss the relationships between visual complexity, cognitive load and complexity theory, and propose a classification based on visual proof complexity.
Finally, we provide examples of visual certificates for problems in different visual proof complexity classes.
This work was initiated at Dagstuhl seminar 23051 “Perception in Network Visualization”. We thank the organizers for making this fruitful interdisciplinary exchange possible and all participants for interesting discussions and insights during the seminar week.
§ INTRODUCTION
While state-of-the-art graph and network visualization techniques do a reasonable job of untangling graphs to convey meaning and support free-form exploration, there are certain application scenarios where these algorithms fall short. Namely, we focus on applications where it is necessary to convince a (possibly non-expert) audience that a particular graph has some structural property. We emphasize that this kind of application scenario differs significantly from the traditional usage of visualization to generate new knowledge. Namely, existing graph and network visualization techniques have sought mainly to represent all aspects of a graph or network structure as faithfully as possible such that a user can explore the visualization, identify structures, and gain insights about the underlying data.
These traditional visualization techniques can be sufficient for journalists and other communicators to support a narrative in print or on-line media <cit.> by showing only selected views of graphs.
However, novel approaches are required in our setting, where a specific property of the data is to be conveyed in an adversarial context in which the validity of the evidence presented may be questioned (see also the defense lawyer role in Fig. <ref>, which may, e. g., represent doubts of the audience). For example, the investigative activity of the Italian Revenue Agency (IRA) exploits the visual analysis of social networks whose nodes are the actors of potential fraudulent activities and whose edges represent financial/legal transactions between the actors. The investigators of IRA who suspect a group of persons or a single individual/company of tax evasion submit a case to the Italian financial Police for possible prosecution, which also implies showing some structural properties of the network beyond reasonable doubt. See, e. g., <cit.> for references about the use of visual analytics in the context of contrasting tax evasion in Italy.[One author has been approached by the Australian Securities and Investments Commission (a governmental regulator for the stock exchange) inquiring about visualizations to convince a court about illegal trades.] Below we describe introductory examples.
A network admin discovers that two critical parts of the infrastructure would not be able to communicate with each other if a particular switch fails. To increase the robustness of the network, new hardware is needed. They have to convince the manager, who has no background in network security, to fund it.
In a legal court case, the prosecution discovered that money acquired in black market sales was laundered by a laundromat chain, as evidenced by money provably transferred via a complicated network from the dealers to the laundromats. The prosecution has to convince the judge that all suspects belong to the criminal syndicate.
A new AI based heuristic is able to efficiently decide if a given graph is Hamiltonian, i. e., to test if it contains a cycle traversing all its vertices exactly once[Note that neural network approaches for NP-hard problems have been described, e. g., in <cit.>. In addition, the need for visualizations in the context of explainable deep learning has been described, e. g., in <cit.>.]. However, false positives must be filtered out. A human operator needs to perform this task as there is no efficient algorithm. To facilitate this, the new version of the algorithm should also create a visualization of the graph making the Hamiltonian cycle obvious to the operator.
Such scenarios have key differences to standard motivations for graph visualization. Typical graph visualization techniques (node-link layout algorithms <cit.>, matrix ordering approaches <cit.> and mixed approaches which either include features of different paradigms <cit.> or show different visualizations side-by-side <cit.>) usually seek a representation showing as many graph properties as possible simultaneously (by trading off aesthetic and readability criteria <cit.>). However, for the scenarios above it is better to focus on showing optimally and faithfully just one specific property, i. e., we want a visual proof for that property.
More precisely, a visual proof is a proof given by the use of a graphical or visual representation called visual certificate.
A good visual proof should be clear and concise, conveying the main idea in an easy-to-understand way. It should be able to effectively communicate the desired message without being overly complex or cluttered. Additionally, the visual certificate should be aesthetically pleasing and easy to interpret. Above all, it should provide evidence to support the argument being made. Thus, a good visual certificate should be accurate, concise, and free of errors or mistakes.
In fact, visual proofs are already used in mathematics and other areas such as logic, graph theory, computer science, and physics <cit.>; visual proofs are often easier to understand than algebraic proofs, as they are less abstract and easier to follow. Accessible proofs are often considered more beautiful by mathematicians; e. g., Appel and Haken employed a computer-assisted proof of the long-open four-color theorem in 1976 <cit.>. This new type of proof sparked philosophical debates <cit.> and while the theorem is broadly accepted as proved[According to the Oxford English Dictionary, it is yet to be proven as a “mathematical theorem” <cit.>.], researchers still desire a more elegant proof <cit.>. Thus, we expect that visual proofs are appealing and even more convincing to experts also in fields other than mathematics.
Visual proofs can also convey properties to non-expert users or explain correctness of AI-generated solutions.
As powerful chat-based interfaces are capable of generating plausible sounding – but difficult to verify – explanations of complex phenomena,
we believe that there is a requirement to understand what makes a graph representation a proper visual certificate.
Contribution.
We introduce a model identifying important steps and their interactions in a visual proof of a graph property. Based on this model, we formalize the concept of visual certificates and give requirements for a visualization to qualify as such. We also give examples of visual proofs for widely used graph properties and identify open research questions that should be answered to better understand visual proofs and make them algorithmically usable.
§ FIRST EXAMPLES OF VISUAL PROOFS
§.§ Example <ref>: The Graph contains a Cut-Vertex
First, we revisit the situation in Ex. <ref>. In this communication network there are two distinct parts such that all connections between them traverse a single switch.
This corresponds to the graph underlying the network containing a cut-vertex, whose removal separates the remainder of the graph into at least two distinct components.
Hence, in order to convince the manager, the network admin has to point out that the graph underlying the network can be separated by the removal of the vertex corresponding to the switch. So, they first lay out the graph using a circular layout, which is a wide-spread all-purpose visualization style <cit.>, and point the manager to the fact that the red colored vertex is a cut-vertex; see <ref>.
Unfortunately, the circular layout does a poor job at highlighting the cut-vertex. While it is evident to the manager that there are a top and a bottom component connected by some edges, they explain that they are not sure if all connections between both components use the suggested cut-vertex or not. Hence, the network admin prepares a second drawing using a force-directed organic layout where the cut-vertex is clearly visible; see <ref>.
However, the engineer who designed the network becomes defensive and claims that there could be another edge hidden behind the alleged cut-vertex. This argument can be easily disproven by the network admin as they move the cut-vertex down, obtaining the drawing in <ref>. Presented with this new line of evidence, the engineer stops arguing and the manager agrees that the network has to be made more robust.
Unless specified otherwise, the layouts of all visualizations in this paper have been created by the authors.
Discussion.
This example illustrates how standard layout techniques may be unable to highlight even simple properties.
In the circular layout, it is not easy to verify even when the cut-vertex is highlighted; see <ref>.
This is due to the Gestalt principle of grouping <cit.>. Here, the initial perception is guided by continuity and closure of node positions, leading to the perception of a single circular component. As a second step, an observer may see two separate components with edges biasing perception due to connectedness grouping. Thus, the observer has to analyze the entire graph, going node-by-node, to negate the automatic perceptual grouping induced by the layout to verify that there is a cut-vertex.
The issue with the second illustration in <ref> is of different nature. Namely, the force-directed layout does a much better job at highlighting the cut-vertex. In fact, the observer discovers two dense salient features which are the two components separated by the cut-vertex and immediately notes that they are connected at a single vertex.
Nevertheless, if there is an overlapping edge behind the cut-vertex, the drawing may look the same, challenging the human observer to identify that the vertex is not a cut-vertex.
The drawing in <ref> avoids this problem by explicitly highlighting the cut-vertex via pre-attentively perceptible patterns (i. e., pop-out effects) <cit.>. The singular goal of highlighting the cut-vertex is achieved at the cost of traditionally accepted aesthetic metrics <cit.>, as, compared to the circular and force-directed layouts, the general layout is unbalanced, with many crossings and poor resolution; see Table <ref>.
Thus, visual certificates may not be useful in traditional exploratory applications, instead they focus on highlighting a specific property.
We remark that a cut-vertex proves non-2-connectivity, and a similar approach can be used to visually prove that a graph is not k-connected: there exists a set of k-1 vertices whose removal separates the graph, and we can lay out the graph so that all connections between two clearly separated parts run via this vertex set.
§.§ Example <ref>: The Graph is Connected
In Ex. <ref>, to convince the judge, the prosecution lawyer decides to visualize the network of criminals induced by the connections of provable money transfers. The prosecution lawyer draws it with a force-directed approach; see <ref>.
While <ref> shows that there are many connections in the graph, it does not emphasize that there is only a single connected component. Hence, the defense lawyer argues that the component containing their client may have been drawn on top of the component with all the convicted criminals. Hence, the prosecution lawyer has to improve their visual proof. To do so, they include a highlighted spanning tree that shows that every vertex can be reached from every other vertex; see <ref>.
Although the defense lawyer now has to admit that there is a smaller portion of the drawing to check, i. e., the highlighted edges, their argument stays more or less the same: that there are still crossings between edges of the spanning tree, which may be due to two different highlighted components drawn on top of each other. Thus, the prosecution lawyer creates a third drawing in which the spanning tree is crossing-free; see <ref>. Here the spanning tree is rooted at the central vertex and vertices are drawn on concentric circles depending on distance from the root.
Given this visualization, the defense rests, and the judge decides quickly that indeed all members of the network are affiliated.
Selected aesthetic metrics of the node-link drawings in this paper: stress st <cit.>, node resolution nr, Jaccard index ji <cit.>, edge-length ratio el <cit.>, crossing resolution cr <cit.>, crossing number cn, aspect ratio ar and angular resolution an <cit.>. Numbers in parentheses give the corresponding values for the subgraphs highlighted in red in the corresponding figure. For st and cn lower numbers are better, otherwise higher numbers are better.

Drawing | st | cn | ji | el | nr | ar | cr | an
<ref>(a) cut-vertex, circular | 132.1 | 81 | .274 | .174 | .174 | .985 | 20.0 | 10.0
<ref>(b) cut-vertex, force-directed | 9.2 | 19 | .332 | .333 | .104 | .774 | 27.1 | .20
<ref>(c) cut-vertex, proof | 36.7 | 26 | .283 | .127 | .100 | .966 | 7.2 | .56
<ref>(a)-(b) spanning tree | 13.6 (31.3) | 28 (8) | .330 (.132) | .360 (.404) | .181 (.181) | .884 (.884) | 37.1 (37.1) | 1.68 (20.0)
<ref>(c) spanning tree | 32.5 (19.6) | 63 (0) | .285 (.160) | .149 (.382) | .119 (.119) | .796 (.796) | 20.5 (N/A) | 4.44 (21.4)
<ref>(a) Hamilton | 9.3 (51.9) | 29 (4) | .407 (.167) | .401 (.649) | .164 (.164) | .871 (.871) | 31.4 (48.2) | 0.40 (15.93)
<ref>(b) Hamilton | 45.5 (12.6) | 45 (0) | .339 (.232) | .185 (.542) | .184 (.184) | .987 (.987) | 22.5 (N/A) | 12.08 (141.0)
<ref>(d) coloring | 6635 | 7452 | .037 | .090 | .020 | 1 | 2.6 | 0.21
<ref>(a) non-bipartite | 567.6 (9.1) | 6211 (2) | .177 (.243) | .020 (.161) | .016 (.123) | .941 (.606) | 0.79 (73.2) | .017 (6.87)
<ref>(b) non-bipartite | 651.0 (1.9) | 9493 (0) | .180 (.361) | .023 (.586) | .016 (.314) | .941 (.977) | 0.79 (N/A) | .017 (112.0)
<ref>(c) non-bipartite | 1022.7 (3.3) | 9992 (0) | .182 (.361) | .013 (.472) | .011 (.307) | .994 (.994) | 0.74 (N/A) | .007 (97.6)
<ref>(a) non-complete | 58.8 | 12649 | .918 | .126 | .126 | .998 | 14.4 | 7.20
<ref>(b) non-complete | 94.4 | 12650 | .920 | .047 | .047 | .898 | 4.68 | 1.23
Discussion.
While in Ex. <ref> we have seen that the drawing style of the entire graph can be important to visually prove a property, here we added another dimension. Namely, a subgraph is explicitly color-highlighted for pre-attentive perception.
In addition, the drawing of this subgraph was very important in creating a convincing argument. In <ref> the drawing of the spanning tree is not very readable. Thus, even with the attention drawn to this portion of the drawing, it remains time consuming to check that a single tree connects all vertices. But when the tree is laid out in a concise and readable fashion as in <ref>, it is quite evident that it spans all the vertices, as the colored edges induce automatic grouping via similarity <cit.> and act as guidance for attention spread <cit.>. Similar to <ref>, while the quality of the drawing of the spanning tree is improved, the drawing of the rest of the graph does not measure well on the usual metrics; see <ref>.
§.§ Example <ref>: The Graph has a Hamiltonian Cycle
We may train an AI to produce a good node-link drawing for <ref>. As in <ref>, we observe that the quality of the drawing of the evidence relevant to the property under consideration is more important than the drawing of the full graph (<ref>), thus we select a circular layout with the Hamiltonian cycle forming the outer face (<ref>).
However, the human operator needs to check that all edges of the highlighted outer cycle are indeed present, which can become increasingly difficult for larger graphs where resolution may become problematic. We can improve upon these issues by instead using an adjacency matrix representation. While an arbitrary permutation (see <ref>) does not provide any insights, an appropriate sorting of rows and columns decomposes the cycle into three components: one red diagonal and two red cells (top-right and bottom-left); see <ref>.
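The following Python sketch shows how such a certificate could be produced and checked, assuming the Hamiltonian cycle is already given as a visit order of the vertices; the function names are illustrative.

import numpy as np

def cycle_ordered_matrix(adj, cycle):
    # Permute rows and columns of the adjacency matrix along the cycle,
    # so that the cycle appears as the superdiagonal plus two corner cells.
    perm = np.asarray(cycle)
    return adj[np.ix_(perm, perm)]

def check_certificate(M):
    # The judge's check: the full superdiagonal and the closing corner
    # cells must be present (for an undirected graph M is symmetric,
    # so the subdiagonal mirrors the superdiagonal).
    n = M.shape[0]
    band = all(M[i, i + 1] for i in range(n - 1))
    corners = bool(M[n - 1, 0]) and bool(M[0, n - 1])
    return band and corners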
Discussion.
We observed that different visualization paradigms may perform better or worse for visually proving a property. While the node-link drawing in <ref> already highlights the cycle well, the adjacency matrix representation in <ref> decomposes the Hamilton cycle into three components. The perception of the red diagonal is facilitated by figure-ground separation via connectedness and similarity <cit.>, and the two corner cells stand out due to both color difference and symmetry <cit.>. The particular advantage of this representation is that the used visual cues scale nicely even for very large matrices (up to the pixel resolution of the screen) <cit.>. Thus, an important criterion for judging the quality of a visual proof should be the workload required by the observer to evaluate the correctness. As checking for Hamiltonicity is a difficult task with an all-purpose visualization (see also <ref>), both visualizations should be regarded as valid visual certificates, albeit of different quality.
§ RELATED THEORIES, FRAMEWORKS AND MODELS
§.§ Certifying Algorithms
The concept of visual certificates is related to certifying algorithms popularized by McConnell et al. <cit.>, which seek to provide short and easy-to-check certificates for the correctness of an algorithm.
Let f:X → Y be a computable, surjective function for input set X and output set Y and let W be a set of witnesses. Intuitively speaking, a witness describes a simple proof certifying that the output y of an algorithm for f on input x satisfies f(x)=y. The validity of a witness for a certain combination of inputs and outputs is assessed via the witness predicate 𝒲: X × Y × W →{true, false} that fulfills:
* Witness property: Given (x,y,w) ∈ X × Y × W, it holds that f(x)=y ⇔𝒲(x,y,w) = true.
* Checkability: Given (x,y,w) ∈ X × Y × W, it is trivial to determine 𝒲(x,y,w).
* Simplicity: 𝒲(x,y,w) = true ⇒ f(x)=y has a simple proof.
An algorithm for f is now called a certifying algorithm if for any input x ∈ X it computes the output y=f(x) ∈ Y and a witness w ∈ W such that 𝒲(x,y,w) = true.
It is worth noting that Properties <ref> and <ref> of the witness predicate are vaguely formulated. McConnell et al. <cit.> suggest that Property <ref> can be formalized by requiring that there must be a decision algorithm for 𝒲 that runs in a certain time (such an algorithm is called a checker). On the other hand, they emphasize that Property <ref> is intentionally left subjective as it relies on what is
considered common knowledge. For examples of certifying algorithms, see <ref>.
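As a concrete instance (connectivity, as in the appendix), the following Python sketch realizes a certifying algorithm in the sense above, returning a spanning tree as witness for positive instances and a vertex partition for negative ones; the interface (a dictionary mapping vertices to neighbor sets) is an illustrative choice.

from collections import deque

def certifying_connected(graph):
    # BFS from an arbitrary vertex; graph maps each vertex to its
    # set of neighbors.
    root = next(iter(graph))
    parent, queue = {root: None}, deque([root])
    while queue:
        v = queue.popleft()
        for w in graph[v]:
            if w not in parent:
                parent[w] = v
                queue.append(w)
    if len(parent) == len(graph):
        # Positive witness: the spanning tree as (child, parent) edges.
        return True, [(v, p) for v, p in parent.items() if p is not None]
    # Negative witness: a partition into two disconnected vertex sets.
    reached = set(parent)
    return False, (reached, set(graph) - reached)

def check_spanning_tree_witness(graph, tree):
    # Checker: n-1 witness edges, all present in G, connecting every
    # vertex; the BFS over the tree edges runs in O(n), independent of |E|.
    if len(tree) != len(graph) - 1:
        return False
    if not all(p in graph[v] for v, p in tree):
        return False
    adj = {v: [] for v in graph}
    for v, p in tree:
        adj[v].append(p)
        adj[p].append(v)
    root = next(iter(graph))
    seen, queue = {root}, deque([root])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return seen == set(graph)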
§.§ Perception
Visual proofs are concerned with the design of visual evidence for the existence of a specific property, such as the presence of a cut-vertex, a Hamiltonian cycle, etc. In principle, a proof for such a property can be reduced to a program that returns a binary outcome, affirming or rejecting the claim. This may be sufficient for specialists who are familiar with the property itself, understand and trust the algorithm behind the code, and trust that the code is valid. However, such evidence may not be convincing to a non-specialist (a judge, a stockholder, etc.), particularly because the proof itself will be just one piece of evidence among many. Prior research shows that in such cases, presenting evidence per se is not enough, as information can be discounted as confusing, unimportant, or, given the wrong context, even misleading <cit.>; the accessibility and clarity of evidence can be as important as the evidence itself <cit.>.
Due to the diversity of graph properties there can be no general solution. Visual proof design might be guided by the principle of optimizing the data-ink ratio <cit.>. Thus, instead of optimizing overall aesthetic metrics <cit.>, one should minimize the required number of visual queries, i. e., attention orientation, driving eye movements, and pattern/object recognition <cit.>.
The human visual perception system consists of three stages: (1) rapid parallel processing involving billions of neurons, e. g., extraction of orientation, texture, color, and motion features; (2) slower processing than Stage 1, e. g., detection of 2D patterns, contours and regions; (3) slow serial processing, involving both working and long-term memory, e. g., object identification <cit.>.
As in Stage 1 the entire visual field is processed quickly in parallel, information that can be captured in this stage can be easily distinguished.
Thus, pre-attentive (pop-out) patterns such as color, size, orientation, shapes, etc.
should be utilized.
In other words, a good visual proof must ensure that a focal piece of evidence is a visual “pop-out” feature that automatically attracts viewer attention and that the visual layout is
parsed and grouped into patterns that express the evidence. In case of the former, studies on visual search provide a comprehensive list of useful pop-out features such as color, size, contrast, or location <cit.>. Regarding the latter, one can rely on a large body of literature on principles of perceptual organization, commonly known as Gestalt principles <cit.>. However, yet another constraint is placed by our working memory that limits the number of nodes, edges, and components that can realistically be assessed at any single time <cit.>.
The examples above illustrate the importance of this approach for visual proofs. For instance, consider the visual evidence for the existence of a cut-vertex in <ref>. While it uses color to attract the viewer’s attention to the cut-vertex and spatial arrangement to visually separate the two components, it still leads to an excessive number of visual queries, requiring multiple scans of individual vertices to ensure that they are connected only to the cut-vertex and the nodes within the component. In turn, there is a memory bottleneck that is likely to prevent a viewer from being completely certain about the validity of the proof. In contrast, in <ref> the graph layout groups the entire evidence in just three components and clearly shows lack of inter-component edges, so that very few visual queries are required to confirm the vertex is indeed a sole connector between the components.
In short, although there cannot be a single one-size-fits-all approach for constructing visual proofs, their critical role in aiding the cognition of the viewer means they should be built based on principles of perceptual organization and around the limitations of attention and memory <cit.>.
§.§ Computational Complexity
To evaluate the amount of the cognitive workload, we will apply concepts from complexity theory <cit.>.
It is also worth mentioning that the examples discussed so far differ in terms of their computational complexity. Namely, all cut-vertices of a graph and a spanning tree can be found in O(n+m) time based on BFS traversals, where n is the number of vertices and m the number of edges, while determining a Hamiltonian cycle is NP-complete <cit.>. Thus, in <ref>, we have visually proven an algorithmically difficult problem.
However, there may be graph properties that cannot be visually proven.
We first have to discuss how a human observer interacts with a visual certificate. In <ref>, the human observer identified two connected components and then saw that they can be separated by the removal of their shared vertex. Such a procedure could be seen as an O(1) time algorithm, where the observer determined that there is only a single point where both components touch. Similarly, in Examples <ref> and <ref>, the observer may have checked for every vertex if it was part of the highlighted structure. Even if they were to check this for every vertex one at a time, the resulting algorithm would still run in linear time. Hence,
an observer is actually performing a deterministic validation algorithm for establishing that a certificate is correct.
Now, consider the complementary question to <ref>, i. e., we want to determine whether a graph does not contain a Hamiltonian cycle. This is a CoNP-complete problem as it is the complement to an NP-complete problem. For CoNP-complete problems it is likely that there is no certificate that can be checked in polynomial time <cit.>, i. e., if we assume that a human observer deterministically analyzes a visualization (as could be recreated by computer vision), we have to assume that we cannot visually prove a CoNP-complete problem.
§.§ Related Visualization Models
Aside from graph visualizations, the concept of visually enhancing a proof is wide-spread. In mathematics, visual proofs for theorems have been used since ancient times <cit.> and there is a plethora of examples <cit.>. The question if such proofs can be regarded as such also has been discussed philosophically <cit.>. Also in computer science, visualizations are heavily used to convey knowledge, e. g., while not necessarily proving, an interactive sequential art by Bret Victor <cit.> beautifully explained an algorithm from a Nature paper <cit.>.
Overall, there is a trend of increasingly sophisticated models considering a holistic integration of visualization into the sensemaking process, typically with the goal of informing the design of interactive systems for data exploration. Early models considered a linear pipeline, from data, via various transformations, to a visual display <cit.>. Visual analytics seeks to apply visualization to support the entire human sense-making loop <cit.>. More recent models aim to connect sense-making from interactive data visualization, via hypothesis formation and testing, to knowledge generation <cit.>. An underlying theme across most of this work is the role of computational guidance in the analytics process, and how algorithms can support the various loops in the sensemaking process <cit.>. By contrast, we consider a different model to conceptualize the role of algorithms, and AI, in supporting data (specifically network data) understanding. Our model for visual proofs (Fig. <ref>) does not seek to replace the traditional sense-making/knowledge-generation loop, but to support humans in situations where the result of a complex algorithm or property needs to be explained and justified.
There are also models related to ours from information visualisation research.
Song et al. <cit.> considered a problem that may be seen as a complementary question to the one studied in this paper: They investigated how computer vision can understand network visualizations optimized for human users.
Wickham et al. <cit.> proposed a two-phase procedure to convince a human observer that a data set contains statistically significant difference from randomly generated data.
The human observer is first exposed to several randomly generated data sets (similar to a Rorschach test) before being exposed to a line-up consisting of the real data set and a couple of randomly generated data sets. The first phase primes the human viewer for statistically insignificant variations so that, in the second phase, statistically significant differences clearly pop out from the noise.
Another related model are Gragnostics, which are ten features suggested by Gove <cit.>, that are fast to compute and provide a quantification of structural graph properties. In contrast to our model that aims to prove structural properties of graphs, Gragnostics provides the human user with a first impression of the structure of the graph at hand which may be helpful for initiating a thorough investigation. Finally, our model may also be seen as a visual communication of structural graph properties. Visual communication has been investigated in other settings for several decades, see e. g. <cit.>.
§ THE GRAPHTRIALS MODEL
We are now ready to discuss our formalization of visual proofs. For this, we first abstractly outline the process of visually proving properties of graphs in an adversarial setting using a model that we call GraphTrials; see also <ref>. The model includes three distinct roles that have already appeared in our discussion of <ref> in <ref>:
The prosecution lawyer must convince the judge that a certain assertion regarding a graph is true, the defense lawyer may raise doubts about the validity of the prosecution lawyer’s claims, and the judge will determine the truth of the assertion.
The roles prosecution lawyer, judge and defense lawyer are to be seen as abstract descriptions of the different actors in the process; e. g., in Ex. <ref> and <ref>, the prosecution lawyers were the network admin and the AI based algorithm, respectively. The latter example further indicates that not all roles have to be assigned to a human. In fact, we only require that the judge corresponds to the human audience of the visual certificate, whereas each lawyer may be either human, software or a human assisted by software. Moreover, as we have seen in <ref>, it can also occur that a critical audience acts in both the judge and defense lawyer roles simultaneously.
To convince the judge of a valid assertion f for the input graph G, the prosecution lawyer draws a visual certificate W(G). To do so, they first analyze the raw data G to reveal evidence that proves the assertion f.
The evidence is then embedded in W(G): a visual representation of G that in some way emphasizes the evidence.
Note that in the scope of our model we treat the analysis of the raw data and extraction of the evidence as a black box, i. e., we may assume that the prosecution lawyer already knows that the assertion f is true for the input graph G and may also be given the evidence as input. This allows us to efficiently visually prove algorithmically difficult assertions (such as the existence of a Hamiltonian cycle as in <ref>) and to ignore how the evidence is gathered (either algorithmically or by human interaction) in our model. The latter aspect also provides the possibility to separate the evidence gathering from the visualization process W, i. e., W could be a reusable program that embeds the evidence according to a specification[The examples in <ref> and <ref> both use visual certificates that highlight cycles.].
The defense lawyer checks the unimpeachability of W(G) as a visual representation of G certifying f(G). Thus, they may question whether the graph represented in the visualization actually corresponds to the input and they may also raise concerns if W(G) is not distinguishable from a slightly different non-certificate (e. g., in <ref> we encountered the case where an edge may have been hidden making it invisible to the judge's perception).
The judge, the human audience of the visual certificate W(G), will validate the claim f(G) using W(G). In this step, the visual certificate W(G) must guide the judge's perception so that they are able to form a mental model ℳ(G) that facilitates confirmation of the validity of the assertion f(G). For instance, the guidance can be formed by a suitable choice of topology which leads the judge to identify clusters of the graph as distinct salient features (as in <ref>) or by adding additional features such as color to draw attention to certain parts of the graph (as in <ref>). We discuss the judges mental model in <ref>.
It is noteworthy that aside from the input graph and the verdict of the judge, the only information shared by all three roles is the visual certificate W(G). In particular, it is the only medium that can be used by the prosecution lawyer to communicate the gathered evidence to the judge, i. e., the evidence is hidden information only accessible to the prosecution lawyer. Similarly, the judge does not communicate their mental model ℳ(G) to the prosecution or defense lawyer, yet as we discussed above, both roles might want to estimate what the mental model will look like. Furthermore, the nature of the mental model plays an important role in the validation step performed by the judge. Namely, the cognitive load put on the judge in this step depends hugely on how complex ℳ(G) is.
Finally, the defense lawyer's checking for unimpeachability is a process that is independent of the judge and prosecution lawyer and for a faithful and readable visual certificate we demand that there is no reason for the defense lawyer to raise doubts to the judge. As a result, there are several properties that we require from a visualization in order to call it a visual certificate and it could occur that an assertion cannot be visually proven for every graph for which the assertion is true (for instance we discussed issues related to scalability in <ref>). To this end, we also state when we want to say that a certain assertion can be visually proven for arbitrary graphs.
§.§ Visual Certificates and Visual Provability
We give formal requirements
inspired by the concept of certifying algorithms discussed in <ref>.
Let f: 𝒢→{true, false} be an assertion function for the set of graphs 𝒢, i. e., for some graphs the assertion f(G) is true while for others it is not. For instance, if f is the existence of a cut-vertex, some graphs do contain one (f(G)=true) while others do not (f(G)=false). Consider a graph G with f(G)=true and let W(G) be a visualization of G. We call W(G) a visual certificate for f(G) if and only if the following hold:
* Unimpeachability:
We call W(G) unimpeachable if it satisfies the following two properties. First, W(G) should provide information faithfulness <cit.>, i. e., it displays the ground truth properties and structures in G. Second, W(G) should provide task readability <cit.>, i. e., the judge can perceive enough information for validating the assertion.
* Checkability: Given W(G), it is trivial to decide that f(G)=true. In particular, this means that the judge's perception leads to the formation of a mental model ℳ(G) that makes it possible for the judge to efficiently validate the assertion. The number of distinct observations made by the judge in the process is called the perceptual complexity.
* Simplicity: Given ℳ(G), there is a simple formal proof for f(G)=true that relies solely on conclusions that the judge may deduce using ℳ(G). In particular, this means that W(G) is perceptually distinguishable from any possible wrong visual certificate W'(G).
If a visual certificate W(G) exists for each G ∈𝒢 with f(G)=true, we call f visually provable. Note that the complementary function f^c (which is true if and only if f(G)=false) need not necessarily be visually provable. For instance, we were able to visually prove the assertion that G contains a Hamiltonian cycle in <ref>, but we argued that the absence of such a cycle cannot be visually proven in <ref>. This and requiring unimpeachability are clear differences to the concept of certifying algorithms, whereas checkability and simplicity occur in both models, here considering the perceptual abilities of the judge; see also <ref>.
We are also interested in how efficiently the judge is able to validate f(G) = based on ℳ(G). To this end, we define the perceptual complexity as the time that the judge needs to check the assertion given ℳ(G). The perceptual complexity may depend on the size of the graph, however, in some scenarios (e.g. <ref>) it may be independent of it. Since we assume the judge to make an objective judgment based on the evidence, we can treat the thought process as a deterministic algorithm and apply methods from complexity theory to evaluate the perceptual complexity.
See <ref> for an application of these concepts.
§ VISUAL PROOFS FOR GRAPH PROPERTIES
We provide visual proofs for further widely used assertions.
For a summary of our discussion, refer to <ref>. In addition, we discuss further assertions in <ref>.
(Non)-Bipartiteness and k-colorability.
We can use a matrix representation to visually prove bipartiteness; see <ref>.
When sorting the rows and columns according to the two independent subsets, bipartiteness can be simply checked by verifying that the two empty squares are indeed empty <cit.>. This approach also generalizes to k-colorability, as shown in <ref> for 4 colors; however, for sparse graphs like in <ref>, additional highlighting of the (supposedly) empty squares might be necessary.
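A minimal Python sketch of this check, assuming the bipartition is given as two index sets over the vertices; validating the certificate amounts to testing that the two diagonal blocks are empty.

import numpy as np

def bipartite_certificate_valid(adj, part_a, part_b):
    # Reorder rows and columns by the two independent sets; the
    # certificate is valid iff both diagonal blocks contain no edges.
    order = list(part_a) + list(part_b)
    M = adj[np.ix_(order, order)]
    k = len(part_a)
    return not M[:k, :k].any() and not M[k:, k:].any()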
For small graphs, a node-link diagram might be easier to read and hence preferable; however, the approach does not scale well due to resolution, since the judge needs to verify that there are no edges within the subsets; see <ref>.
An odd-length cycle certifies that a graph is not bipartite, so non-bipartiteness can be visually proven by highlighting a shortest odd cycle in a drawing. In an arbitrary drawing, the cycle may be hard to spot; see <ref>.
Redrawing the cycle in convex position makes it easier to read (see <ref>), especially if it is the convex outer cycle; see <ref> (this makes the rest of the graph harder to read; see <ref>).
The cycle is now clearly visible and the judge just needs to assert oddness. While counting may be unavoidable depending on the cycle length, the judge can use the symmetry of the drawing of the cycle to see that the cycle is odd (e.g., in <ref>, there is a single top-most but no single bottom-most vertex). For larger cycle lengths, an adjacency matrix representation may be beneficial:
Sort the rows and columns along the odd cycle and mark it, then append the remaining vertices arbitrarily.
Then, alter the spacing of the matrix so that even rows and columns are thicker than odd ones; see <ref>.
The cell closing the cycle is a square if and only if the length is odd.
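The parity argument behind this spacing trick can be sketched in a few lines of Python; the thick and thin widths are illustrative values.

def cycle_cell_widths(n_cycle, thick=2.0, thin=1.0):
    # Alternate thick (even index) and thin (odd index) rows/columns
    # along the cycle. The closing cell spans row 0 and column n-1, so
    # it is a square (thick x thick) exactly when n_cycle is odd.
    widths = [thick if i % 2 == 0 else thin for i in range(n_cycle)]
    closing_cell_is_square = widths[0] == widths[n_cycle - 1]
    return widths, closing_cell_is_square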
Completeness and Non-Completeness.
Non-completeness is evidenced by a single missing edge and can be visually proven with a circular layout with the missing edge on the outer cycle. This approach does not scale well for larger graphs; see <ref>.
Readability and scalability can be improved by drawing focus to the missing edge; see <ref>.
However, one can also use a matrix representation (see <ref>) since spotting a missing square scales well from a perception perspective <cit.>. This technique can also prove completeness.
§ LIMITATIONS OF THE GRAPHTRIALS MODEL
Scalability.
In the GraphTrials model, we must not only visualize the evidence represented in the visual certificate, but also display the remainder of the graph faithfully. This may result in higher computational complexity compared to other visualization techniques, e. g., force-directed graph layouts, whose purpose is to create an overall readable representation.
Why not forgo visualization completely and use an assertion software to validate the evidence computationally? While this could drastically reduce the computation time and require fewer software components, there are in fact real-world application scenarios, e. g., in court, where it may be better to show a visual certificate accompanied by a short explanation of why the certificate indeed establishes the assertion, instead of simply telling the audience that a piece of software analyzed the network and found the evidence for the assertion; see <ref>. Another benefit of visual proofs over a non-visual assertion software is that bugs in the visual proof pipeline can be spotted in the visual certificate, i. e., either the represented graph is not the input graph or the evidence is not a true evidence for the claim.
Another scalability issue is to display the entire graph faithfully. In <ref>, we assumed that the visual certificate may be represented by few components in the judge's mental model and that the formation of that mental model can be mainly guided by usage of bottom-up and pattern recognition processes. For large input graphs, the screen resolution might not permit an information-faithful representation of the input graph so that one must resort to techniques for displaying larger data, e. g., zooming. The introduction of such modes of user interaction may be problematic for our model as it may lead the judge to increasingly use top-down processes of perception which may influence the formation of the mental model.
Human factors.
In our model, the judge is necessarily a human actor in the visual proof process. Hence, it is no surprise that human factors play an important role in the application of our model. Our model assumes that the judge is able to draw objective conclusions provided the evidence by the prosecution lawyer. This process may be hindered by insufficient background knowledge of the judge or subjective expectations towards the visualization. Moreover, the judge's mental model cannot be directly analyzed and influenced introducing uncertainty into the model. We discuss these aspects further in <ref>.
§ OPEN PROBLEMS
* Are visual proofs in fact scalable? How do they extend to geospatial and dynamic graphs where the data are expected to obey spatial and/or temporal constraints?
* Which features contribute to perceptual complexity?
* Do response times depend mostly on perceptual complexity?
* When do human users regard a visual certificate as unimpeachable?
* What are human limits for the perception of graph properties? For instance, the minimum perceivable slope difference is ≈ 2 degrees <cit.>.
* What is the trade-off between perceptual complexity and cognitive load?
§ OMITTED MATERIAL FROM SECTION <REF>
§.§ Examples for Certifying Algorithms
As an example,
reconsider <ref>. Here, our function f takes as an input a graph G and should output either if the graph is connected or otherwise. A witness for a positive instance would be a spanning tree T as discussed in <ref>. Given T, we can easily see that G is connected and we can determine this by checking that every vertex is part of T; i. e., Property <ref> holds. In fact, this checking can be done efficiently by running a BFS on T, that is, the time for checking the correctness of the certificate depends only on the number of vertices, not on the number of edges; i. e., Property <ref> holds.
On the other hand, if G is not connected, the certifying algorithm for f can provide a partition of the vertices into two disconnected sets V_1 and V_2 as a witness. We may check that we cannot reach any vertex in V_2 if we run a BFS starting from a vertex in V_1 to establish that f was computed correctly; i. e., Property <ref> holds. If it is the case that |V_1| ≤ |V_2| we end up considering at most half of the vertices in this process, hence we may argue that Property <ref> holds.
Since we found a good witness for positive and negative instances, we conclude that Property <ref> holds.
While Ex. <ref> also admits a certifying algorithm,
for Ex. <ref>, two issues emerge: Computing a Hamiltonian cycle is NP-complete, hence the algorithm for f may not run in polynomial time. However, even if we ignore this, we end up with the problem that for a negative instance no easy witness is known, i. e., we do not even know how to guarantee Property <ref>. This appears to be a general issue for NP-complete assertions, as a witness for a negative instance would be a short certificate for a CoNP-complete problem.
§ OMITTED MATERIAL FROM SECTION <REF>
§.§ Application of Visual Certificate and Provability Properties to Sections <ref>, <ref> and <ref>
Reconsider Ex. <ref> to <ref>. All drawings shown in <ref> have been unimpeachable (Property <ref>), possibly with a single exception: in the drawing in <ref>, one may argue that it is impossible for the judge to perceive whether there is an edge hidden behind the alleged cut-vertex, questioning the unimpeachability of the layout. While in <ref> it is even more difficult to follow all edges, the drawing makes it easy to perceive all parts of the graph necessary for the validation.
On the other hand, the drawing in <ref> also fails the simplicity requirement (Property <ref>). Namely, one could actually hide an edge behind the cut-vertex that connects the alleged left and right component. Most likely, ℳ(G) would consist of two connected components connected at a single vertex, as the drawing shows two vaguely compact salient features touching at a single point. Hence, if we showed the judge two drawings of that type where in one of the drawings an edge was hidden behind the cut-vertex, the judge would be unable to distinguish the correct certificate from the wrong one. In contrast, the drawing in <ref> circumvents this problem, as the cut-vertex is the only vertex at the bottom of the drawing. Similarly, the highlighted parts in <ref> draw the judge's attention towards them, hence, these structures would become part of the mental model. Given the highlighted parts, it is easy to prove connectivity or the existence of the Hamiltonian cycle.
Finally, for the drawings in <ref> it is not trivial to check whether the assertion is correct (Property <ref>). The mental model ℳ(G) will consist of a tangle of highlighted edges embedded in an even larger tangle of edges. While the judge will most likely succeed to establish that the highlighted parts of the graph form a tree or a cycle, they might have to perform a BFS. Hence, especially in the case of <ref>, this would be as efficient as computing the validity of the assertion from scratch and these visualizations are not to be considered visual proofs. On the other hand, the visualizations in <ref> lead to formation of mental models that highlight the tree or cycle in few components of the mental model. Hence the judge only needs to check if the mental model includes any vertices not belonging to the tree or cycle.
Moreover, in <ref> the judge also has to check that the salient feature representing the cycle in fact contains all edges. This can be easily done in <ref>, however, if the size of the graph was larger, the process would be more efficient for the visualization in <ref> where it is sufficient to check that the long diagonal and the two single cells are present. On the other hand, for a visualization in the style of <ref>, it is required to assert the existence of each presumed edge along the cycle. Thus, more components of the mental model must be checked and the perceptual complexity, that is, the time that the judge needs to check the assertion given the mental model, depends on the size of the graph.
§.§ Unimpeachability and the Defense Lawyer
Recall that the task of the defense lawyer is to establish unimpeachability (Property <ref>), i. e.,
we require that the visualization should display the ground truth properties of the graph, and that the judge can clearly perceive the parts of the visualization required for validating the assertion. The latter property is required so that the judge can extract the evidence embedded in the visual certificate whereas the former property ensures that a non-certificate which showcases evidence embedded in a visualization of a graph that is not identical to the input can be detected as such.
In the literature, these concepts are known as information faithfulness and task readability, respectively <cit.>.
Faithfulness refers to whether a visualization of a graph displays its ground truth properties and structures in a logically consistent manner, and the readability refers to the perceptual and cognitive interpretation of the visualization by the viewer <cit.>.
More specifically, information faithfulness means that all the information about a graph G is displayed in the visualization.
Faithfulness metrics are defined based on the type of ground truth structures of graphs, such as shape <cit.>, cluster <cit.>, symmetry <cit.> and change <cit.> and can be appropriately selected depending on the application scenario.
Similarly, task readability means that the user can perceive enough information from the visualization to correctly perform the task, here validating the assertion.
In our examples for good visual certificates, task readability was improved at the cost of information readability compared to standard layouts; see <ref>.
Our definition of unimpeachability is purposely quite objective so that it can be discussed efficiently when establishing new visual proof techniques. A purely logically acting defense lawyer should have no reason to raise doubts about the visual certificate if we obey these requirements and no feedback between defense lawyer and judge is necessary (hence the dashed connection in Fig. <ref>).
§.§ The Mental Model
The mental model ℳ(G) formed by the judge is an important component in assessing the usefulness of a visual certificate. Since cognition <cit.> and perception <cit.> differ from user to user, we must predict the expected mental model ℳ(G) instead of assessing the mental model of a specific user.
Hence, to influence the mental model, we have to carefully design visual certificates so to exploit known features of visual perception: In our analysis of <ref> in <ref>, we relied on the effect of salient features being automatically grouped and perceived as cohesive components <cit.>. Thus, we have good reason to assume that a human observer would see two components glued together at a vertex located below both components. In <ref>, we exploited the fact that salient red-colored components naturally draw the attention of the user towards them <cit.>, so that they form a distinct shape in the foreground with the rest of the graph in the background. Thus, the graph layout guides the judge's perception and simplifies the analysis.
Another important aspect regarding the judge's mental model is that we consider the judge to make an objective judgment based on the evidence encoded in the mental model. Thus, in our model the judge is not influenced by any prior knowledge or hypotheses regarding the data but at the same time will only accept the assertion if it has become irrefutable.
§ VISUAL PROOFS FOR ADDITIONAL GRAPH PROPERTIES
In this appendix, we consider domain-specific properties from graph drawing and network analysis as well as assertions which are difficult to visually prove. For an overview of the results, see <ref>.
§.§ Visual Proofs using Canonical Representations
We discuss some graph properties that have historically been associated with certain representations that can serve as visual certificates.
Planarity and Outer-Planarity.
Planar and outer-planar graphs are important as real-world networks (e. g., road-networks) are often close to planar and certain problems admit efficient solutions for planar graphs but not in general. The defining characteristic of these graph classes is that they admit a crossing-free drawing (for outer-planarity, every vertex is located on the outer face). There is a plethora of drawing algorithms for these graphs that can be used to create a visual certificate <cit.>. Scalability can be problematic, and computing drawings that really highlight planarity is an intriguing open question.
Stack and Queue Number.
Stack and queue number are
graph parameters arising for instance in the context of scheduling and VLSI layout (see <cit.>). If a graph has a bounded stack (queue) number it admits a stack layout (queue layout, resp.) which uses few pages, i. e., layers in which the edges are drawn. On each page
in a stack layout, no edges cross while in a queue layout, no edges nest. These features can be easily detected, especially if one shows each page separately; i. e., a stack (queue) layout with k pages visually proves that the stack (queue) number is at most k. On the other hand, for this kind of assertion, user interaction may be required.
§.§ Assertions that are Challenging to Visually Prove
k-Connectivity.
Visually proving that a graph is k-connected is much more difficult compared to the complementary question discussed in <ref>.
Specifically, the prosecution lawyer needs to highlight a sparsification G' (i. e., a k-connected spanning subgraph) of G which is k-connected, similar to the visual proof in <ref>.
We can utilize the Nagamochi-Ibaraki algorithm <cit.>, which computes a k-connected spanning subgraph G' with O(kn) edges in linear time.
Therefore, the judge may verify k-connectivity in O(kn) time using G', which is much faster for dense graphs with O(n^2) edges.
However, for highly connected and large graphs (i. e., large values of k and n), the perceptual complexity increases quickly, making the design of effective visual proofs a significantly challenging problem.
For small values of k, say k=2, one may be able to design more effective visual proofs by highlighting the structure of the sparsification better.
Note that the problems of determining planar <cit.> or minimum sized <cit.> 2-connected spanning subgraphs are NP-complete while there is a PTAS for the latter problem <cit.>.
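For k=2, the judge's verification amounts to a linear-time articulation-point test on the highlighted sparsification alone; a Python sketch (the graph representation is an assumption for illustration):

import sys

def is_biconnected(n, edges):
    # A graph is 2-connected iff it is connected, has at least 3 vertices,
    # and contains no articulation point (cut-vertex); one DFS (Tarjan)
    # in O(n + m) time on the sparsification alone.
    sys.setrecursionlimit(max(10_000, 2 * n))
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    disc = [0] * n
    low = [0] * n
    timer = [1]
    cut_found = [False]

    def dfs(v, parent):
        disc[v] = low[v] = timer[0]
        timer[0] += 1
        children = 0
        for w in adj[v]:
            if disc[w] == 0:
                children += 1
                dfs(w, v)
                low[v] = min(low[v], low[w])
                if parent != -1 and low[w] >= disc[v]:
                    cut_found[0] = True  # v separates w's subtree from the rest
            elif w != parent:
                low[v] = min(low[v], disc[w])
        if parent == -1 and children > 1:
            cut_found[0] = True  # the DFS root itself is a cut-vertex

    dfs(0, -1)
    return n >= 3 and all(d > 0 for d in disc) and not cut_found[0]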
Non-Planarity and Non-Outerplanarity.
It is well known that a graph is non-planar if and only if it contains either a K_5 or K_3,3 minor <cit.> while it is non-outer-planar if and only if it contains either a K_4 or K_2,3 minor <cit.>. Namely, a minor is a graph that can be obtained from the initial graph by a series of edge contractions, i. e., one identifies both ends of an edge. As each vertex in the minor can correspond to large subgraphs of the initial drawing, it may be infeasible to visualize them in an easy-to-grasp fashion in a static layout. Thus, one may want to animate the edge contraction sequence. We leave it as an interesting open question whether such a visualization would be a convincing visual proof for non-planarity.
Parameterized and CoNP-hard Assertions.
Other interesting variants of some problems presented in <ref> are parameterized assertions. That is, instead of asking for a Hamiltonian cycle, we may ask for the existence of a cycle of length k. Similarly,
we could be interested in proving that there is
an independent set, a clique or a dominating set of size k. Such assertions can be easily visually proven, however, the perceptual complexity depends on the value k as the judge has to count the size of the certificate.
The complementary questions for some of the presented problems are CoNP-complete; e. g.,
showing the absence of a Hamiltonian cycle, proving that the queue (stack) number of a graph is at least k, and showing that a graph is not k-colorable.
This is in line with our conjecture that CoNP-complete problems admit no visual proofs.
§.§ Visual Proofs for Network Analysis
So far, we have mainly discussed how to visually prove assertions stemming from graph theory. In network analysis, other analyses are equally important.
We now discuss how we can visually prove assertions in this domain.
Diameter and Center.
The diameter of a graph is the length of the longest shortest path between any two vertices in a graph, and the center of a graph is a vertex whose distance (i. e., shortest path) to all other vertices in the graph is minimized. Both are fundamental properties of graphs based on the distance between the vertices.
A visual proof of the diameter can be obtained by the small multiples of the level drawings of the BFS trees rooted at each vertex v, where the BFS tree with the maximum depth is highlighted as the diameter of the graph (in fact this single BFS tree is sufficient for a lower bound for the diameter).
Such a level drawing of the BFS tree rooted at a vertex v can be also used as a visual proof for vertices whose distance from v is exactly k, by highlighting the vertices on level k.
Similarly, it can be used for a visual proof for the graph center problem;
however, this is more involved due to the comparison of the sum of all distances from v to the other vertices.
The perceptual complexity of these visual proofs increases with size and density.
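The computation underlying such a visual proof can be sketched as follows, assuming a connected graph given as an adjacency dict; the BFS tree rooted at v has depth equal to the eccentricity of v, and the distance sums yield the center as described above.

from collections import deque

def bfs_depths(adj, source):
    # BFS levels from `source`; these are the levels of the BFS tree whose
    # level drawing serves as the visual certificate.
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def diameter_and_center(adj):
    # Assumes a connected graph. The deepest BFS tree witnesses the diameter;
    # the root minimizing the sum of distances is reported as the center.
    ecc, sums = {}, {}
    for v in adj:
        depths = bfs_depths(adj, v)
        ecc[v] = max(depths.values())
        sums[v] = sum(depths.values())
    return max(ecc.values()), min(sums, key=sums.get)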
Other Network Analysis Measures.
There is a plethora of important measures in network analysis, e. g. centrality, k-core and automorphic equivalence <cit.>. The former two can be computed in polynomial time, while the latter is Isomorphism-complete.
Possible visual proofs for these measures can be a level drawing or a radial drawing, where the level or concentric circle is defined based on the corresponding measures, such as centrality values or k-core index.
Moreover, the k-core analysis can be visualized by a topographic map-style inclusion drawing, which may serve as a visual certificate.
Note that these visual proofs require a careful analysis by the judge resulting in linear perceptual complexity, that is, novel visual certificates that the judge can validate more efficiently are of interest.
§ LIMITATIONS RELATED TO HUMAN FACTORS
Background Knowledge Required for Checkability.
In practical settings, checkability, i. e., the judge's ability to check the validity of an assertion efficiently based on their mental model, also depends on the background knowledge of the judge. For example, in <ref>, we required knowledge of the terms cut-vertex, connectivity and Hamiltonian cycle.
Thus, we may be tempted to require the judge to know basic notions of graph theory, specifically knowledge of the properties for which the visual proof is exhibited. In addition to graph-theoretic background knowledge, familiarity with the chosen visualization style may be important. For instance, we have seen in <ref>
that less frequently used paradigms such as adjacency matrix visualizations may allow for very efficient visual certificates.
These aspects are limitations, when the audience is in fact non-expert and some required background knowledge is lacking.
If so, additional explanations may also be presented to the audience examining the visual certificate, similar to court proceedings; see also <ref>.
Subjective Aspects of Unimpeachability.
In <ref>, we introduced unimpeachability as a combination of information faithfulness and task readability. As a consequence, the defense lawyer has to check properties that can either be objectively fulfilled or violated. Thus, in its most simple form, the defense lawyer can be a computer program that checks whether the visualized graph is actually the input graph (such as in <ref>) and the judge can rely on the fact that the defense lawyer would have rejected the drawing if it was not unimpeachable. Moreover, such a defense lawyer software may be implemented once for a certain drawing style and reused for other visual proof processes utilizing the same layout technique.
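As a sketch, such a defense-lawyer program might look as follows; the node-link layout format is a hypothetical one chosen for illustration.

def defense_lawyer_check(input_edges, drawing):
    # Information faithfulness: the drawn graph must be exactly the input
    # graph (same vertex set, same edge set).
    # `drawing` is a hypothetical layout dict:
    # {"positions": {v: (x, y)}, "edges": [(u, v), ...]}.
    shown = {frozenset(e) for e in drawing["edges"]}
    truth = {frozenset(e) for e in input_edges}
    drawn_vertices = set(drawing["positions"])
    truth_vertices = {v for e in input_edges for v in e}
    if shown != truth or drawn_vertices != truth_vertices:
        return False
    # Crude task-readability proxy: no two vertices coincide, so nothing can
    # hide behind another vertex (e. g., behind an alleged cut-vertex).
    pts = list(drawing["positions"].values())
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            if abs(pts[i][0] - pts[j][0]) + abs(pts[i][1] - pts[j][1]) < 1e-6:
                return False
    return True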
We believe that our objective definition of unimpeachability works well in scenarios where we simply want to check that the drawing method worked correctly (as in <ref>) or where the judge has no reason to distrust the prosecution lawyer (as in <ref>).
However, the requirements for unimpeachability are context dependent and a visual proof may need to be defended against subjective counterarguments, brought forth by an adversary (as in <ref>) or by the skepticism of the judge (i. e., a single person fulfills both roles judge and defense lawyer). In such scenarios, there may be further subjective aspects of unimpeachability, e. g., the judge may expect the visualization to be similar to an already known layout of the graph, as otherwise the defense lawyer may question that the visual certificate shows the graph in question, i. e., there is feedback from the defense lawyer to the judge; see the dashed connection
in <ref>.
Uncertainty related to the Mental Model.
The judge's mental model is an abstraction that is difficult to describe as it can vary from judge to judge due to differences in both cognition <cit.> and perception <cit.>. Further, it may even be uncertain to the judge how they abstract the visualization.
Thus, there
are important open questions related to the judge's mental model: It is important to understand how visualization techniques can accurately influence the abstraction of the judge so that the evidence gathered by the prosecution lawyer can be translated into the mental model as unmodified as possible. Gathering empirical evidence that our concept of perceptual complexity captures the reality, i. e., that judges indeed use the mental model to establish the verdict, is an intriguing open problem. Moreover, we may ask which perceptual complexity is still accepted by human judges (linear workload may already be overwhelming). Finally, it may be worth investigating alternative measures for perceptual complexity that are agnostic to whether or not a human judge actually follows a deterministic algorithm. Such measures may be similar to the ones used in predictive models in HCI such as KLM <cit.> or GOMS <cit.>.
§ ABSTRACT
Segment Anything Model (SAM) has demonstrated powerful zero-shot segmentation performance in natural scenes. The recently released Segment Anything Model 2 (SAM2) has further heightened researchers' expectations towards image segmentation capabilities. To evaluate the performance of SAM2 on class-agnostic instance-level segmentation tasks, we adopt different prompt strategies for SAM2 to cope with instance-level tasks for three relevant scenarios: Salient Instance Segmentation (SIS), Camouflaged Instance Segmentation (CIS), and Shadow Instance Detection (SID). In addition, to further explore the effectiveness of SAM2 in segmenting granular object structures, we also conduct detailed tests on the high-resolution Dichotomous Image Segmentation (DIS) benchmark to assess the fine-grained segmentation capability. Qualitative and quantitative experimental results indicate that the performance of SAM2 varies significantly across different scenarios.
Besides, SAM2 is not particularly adept at segmenting high-resolution fine details.
We hope this technical report can drive the emergence of SAM2-based adapters, aiming to enhance the performance ceiling of large vision models on class-agnostic instance segmentation tasks.
Project link: <https://github.com/PJLallen/InstanceSAM2Eval>.
Foundation Model, Large Vision Model, SAM 2, Instance-level Segmentation, Dichotomous Image Segmentation.
Evaluation Study on SAM 2 for Class-agnostic
Instance-level Segmentation
Tiantian Zhang, Zhangjun Zhou,
Jialun Pei^†
Jialun Pei is with the Department of Computer Science and Engineering, The Chinese University of Hong Kong, HKSAR, China.
Tiantian Zhang is with the Department of Computer Science and Engineering, The Chinese University of Hong Kong, HKSAR, China.
Zhangjun Zhou is with the School of Software Engineering, Huazhong University of Science and Technology, Wuhan, China.
^† Corresponding author: Jialun Pei (Email: [email protected]).
September 9, 2024
§ INTRODUCTION
The advent of large foundation models, including ChatGPT, GPT-4, and LLaMA, has revolutionized the artificial intelligence (AI) landscape. Powered by extensive datasets, these models excel in multi-modal processing, e.g., language, image, video, and audio, showcasing substantial progress in AI capabilities. Building on these developments, the Segment Anything Model (SAM) <cit.> stands out as a breakthrough in scene segmentation with large vision models.
The generality and adaptability of SAM highlight its potential for understanding complex scenarios and targets, further expanding the frontiers of image segmentation tasks.
SAM allows users to input custom prompts, such as points or bounding boxes, resulting in highly accurate segmentation masks. This adaptability enables SAM to perform a wide range of image segmentation tasks. More recently, the release of SAM2 <cit.> further overcomes the limitation that SAM does not handle video content well. In the field of image segmentation, SAM2 has shown improvement in segmentation accuracy and inference efficiency [<https://sam2.metademolab.com/demo>]. A variety of evaluations have recently emerged to examine the segmentation performance of SAM2 in different scenarios <cit.>. For instance, Lian <cit.> assessed its instance segmentation performance in underwater environments, while Yan <cit.> explored its effectiveness in endoscopic and microscopic images. Additionally, Ma <cit.> conducted a comprehensive benchmark of SAM2 across 11 medical image modalities and videos, highlighting its strengths and weaknesses compared to SAM and MedSAM. Furthermore, Tang <cit.> compared SAM2 and SAM on the camouflaged object detection benchmark. This research found that SAM2 performs significantly worse than SAM in detecting camouflaged objects when no prompts are provided, but significantly better when segmentation prompts are available. These interesting findings raise curiosity about SAM2's performance in class-agnostic instance-level segmentation tasks.
In this paper, we evaluate the performance of SAM2 in class-agnostic instance-level segmentation tasks, focusing on three distinct scenarios: Salient Instance Segmentation (SIS) <cit.>, Camouflaged Instance Segmentation (CIS) <cit.>, and Shadow Instance Detection (SID) <cit.>. Moreover, we thoroughly evaluate SAM2 on the high-resolution dichotomous image segmentation (DIS) benchmark <cit.> to analyze its ability to segment granular target structures. We compare SAM2 with SAM and well-known task-specific models on multiple benchmarks. Based on extensive experimental results, we summarize the following conclusions:
* SAM2 outperforms task-specific methods for CIS and SIS when using bounding boxes as prompt inputs. However, the performance of SAM2 drops remarkably without box prompts, especially for camouflaged instances.
* SAM2 performs poorly on the DIS task, whether or not it uses bounding boxes as prompts. It indicates that SAM2 is not well suited for fine-grained segmentation of complex object structures.
* For the SID task, while SAM2 performs well in segmenting instances, it struggles with shadow matching.
* SAM2 with fewer parameters achieves superior results compared to SAM across four tasks when using bounding boxes as prompts. In contrast, SAM2 without box prompts performs inferior to SAM for SIS, CIS, and SID.
§ EXPERIMENTS
This section provides the guidelines and details of our basic and extensive experiments, i.e., datasets, the evaluation protocol, implementation settings, and the quantitative and qualitative results of SAM2 on four tasks.
§.§ Datasets
In line with <cit.>, we utilize the ILSO <cit.>, SOC <cit.>, SIS10K <cit.>, and SIP <cit.> datasets for the SIS task. For the CIS task, we employ COD10K <cit.> and NC4K <cit.> to evaluate the performance. For the SID task, we use the SOBA-challenge and SOBA-test datasets <cit.>. For DIS, we conduct experiments on DIS5K <cit.>, including DIS-VD and DIS-TE. DIS-TE is further divided into four subsets, i.e., DIS-TE1, DIS-TE2, DIS-TE3, and DIS-TE4, representing four levels of testing difficulty.
The number of test samples for datasets in each task is summarised below:
* SIS: ILSO: 300; SOC: 600; SIS10K: 1,170; SIP: 929.
* CIS: COD10K: 2,026; NC4K: 4,121.
* SID: SOBA-challenge: 100; SOBA-test: 160.
* DIS: DIS-VD: 470; DIS-TE1: 500; DIS-TE2: 500; DIS-TE3: 500; DIS-TE4: 500; Overall DIS-TE: 2,000.
§.§ Evaluation Protocol
To evaluate camouflaged instance segmentation, we employ COCO-style evaluation metrics, including AP_50, AP_75, and AP values. For salient instance segmentation, we adopt the AP_70 metric, which is commonly used in the related literature <cit.>, instead of the AP_75 metric. In shadow instance segmentation, while task-specific methods employ the SOAP metric to assess object and shadow matching, SAM2 does not involve this matching mechanism. In this regard, we focus only on performance with the instance AP metric.
To assess high-accuracy DIS, we employ six evaluation metrics to evaluate SAM2, SAM, and DIS-specific models, including maximal F-measure (F_β^max↑) <cit.>, weighted F-measure (F_β^ω↑) <cit.>, Mean Absolute Error (MAE, M↓) <cit.>, Structural measure (S-measure, S_α↑) <cit.>, mean Enhanced alignment measure (E-measure, E_ϕ^m↑) <cit.>, and Human Correction Efforts (HCE_γ↓) <cit.>.
§.§ Implementation Details
To ensure a fair comparison, we use the original official code of SAM2 and SAM to test on different datasets. Both SAM2 and SAM are evaluated under two settings: automatic mode and ground truth bounding box (GT-Bbox) mode. In automatic mode, we use the default setting of a 32×32 point prompt for both. In GT-Bbox mode, the ground truth bounding box serves as the box prompt input. All parameters remain at their default settings, and the input images are resized to 1024×1024. Furthermore, we use different backbones for SAM and SAM2. For SAM, we use ViT-Base, ViT-Large, and ViT-Huge. For SAM2, we use Hiera-Tiny, Hiera-Base+, and Hiera-Large.
All experiments are implemented with a single Tesla A40 GPU.
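For reference, the two evaluation modes can be sketched as follows. The sketch mirrors the image API of the official sam2 repository (build_sam2, SAM2AutomaticMaskGenerator, SAM2ImagePredictor); the config/checkpoint file names and the placeholder inputs are assumptions and may differ across releases.

import numpy as np
from sam2.build_sam import build_sam2
from sam2.automatic_mask_generator import SAM2AutomaticMaskGenerator
from sam2.sam2_image_predictor import SAM2ImagePredictor

model = build_sam2("sam2_hiera_l.yaml", "sam2_hiera_large.pt")
image = np.zeros((1024, 1024, 3), dtype=np.uint8)  # placeholder HxWx3 RGB frame
gt_boxes = [(100, 100, 400, 400)]                  # placeholder GT instance boxes

# Automatic mode: the default 32x32 grid of point prompts.
auto_masks = SAM2AutomaticMaskGenerator(model, points_per_side=32).generate(image)

# GT-Bbox mode: one ground-truth box per instance as the prompt.
predictor = SAM2ImagePredictor(model)
predictor.set_image(image)
instance_masks = []
for x0, y0, x1, y1 in gt_boxes:
    masks, scores, _ = predictor.predict(box=np.array([x0, y0, x1, y1]),
                                         multimask_output=False)
    instance_masks.append(masks[0] > 0)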
§.§ Results
§.§.§ Salient Instance Segmentation
Quantitative Results.
The quantitative results of salient instance segmentation are presented in table:SIS. On ILSO and SIS10K datasets, SAM2 models generally outperform in the GT-Bbox setting. For example, SAM2-L achieves an AP score of 82.2 on ILSO, slightly higher than 79.2 of SAM-H.
However, in the automatic setting, SAM2-L scores lower, with an AP of 49.1 versus SAM-H's 72.2.
A similar trend is observed on SIS10K, with SAM2-L reaching 45.2 compared to SAM-H's 68.4.
On SOC and SIP datasets, SAM2 models also excel in the GT-Bbox setting, with SAM2-L scoring 83.1 on SOC and 93.4 on SIP.
Compared to specific methods like SCNet, S4Net, RDPNet, and OQTR, SAM2 models often achieve higher AP scores in the GT-Bbox setting.
This indicates that SAM2 models show significant improvements with ground truth boxes, outperforming traditional methods in certain scenarios.
However, it should be noted that SAM2's bounding box mode relies on inputting the ground truth instance locations, which might introduce a slight unfairness compared to other frameworks.
Qualitative Results.
As shown in fig:SIS, in the qualitative analysis of salient instance segmentation, both SAM-Auto and SAM2-Auto perform global segmentation because they do not specify particular objects to segment.
The segmentation quality of SAM is slightly better, likely because SAM uses a larger version (huge) than the large version of SAM2.
This difference in model size may account for the finer details in SAM's segmentation masks, though they still appear somewhat fragmented.
Nonetheless, when bounding box prompts are adopted, both SAM-bbox and SAM2-bbox achieve significantly improved and precise segmentation, highlighting the value of guided segmentation.
§.§.§ Camouflaged Instance Segmentation
Quantitative Results.
table:CIS shows the segmentation performance of SAM2 on camouflaged instances, which are more difficult to segment than salient instances in table:SIS.
In automatic mode, SAM2's performance is comparable to the unsupervised methods of task-specific algorithms and falls short of SAM, likely due to differences in parameter counts.
However, with box prompts, the performance of SAM2 improves dramatically.
Specifically, on COD10K test set, the AP jumps from 10.6 to 68.8 with a large backbone, surpassing all other models.
This suggests that the primary challenge of SAM2 in CIS is locating objects, but once the position is identified, it can produce precise segmentation.
Qualitative Results.
For qualitative analysis of the CIS task, as shown in fig:CIS, SAM-Auto can partially segment certain lightly concealed targets such as fish and giraffes, whereas SAM2-Auto has difficulty detecting camouflages.
However, when provided with bounding box prompts, both SAM and SAM2 effectively segment camouflaged instances. Notably, SAM2 excels in capturing fine details, showcasing its strength in producing intricate features.
§.§.§ Shadow Instance Segmentation
Quantitative Results.
It is important to note that in shadow instance detection tasks, the matching degree between the shadow and the object needs to be measured. However, SAM2 lacks this functionality, so our comparison does not involve measuring this aspect.
In our experiments, we treat instances and shadows as separate entities, rather than pairs of instances and corresponding shadows.
Based on the table:shadow, SAM2 models perform exceptionally well in the GT-Bbox setting, with SAM2-T achieving AP scores of 51.9 on the SOBA-challenge and 58.9 on the SOBA-test, surpassing all SAM models and task-specific approaches.
However, an interesting phenomenon is observed: using different backbones with SAM2 does not lead to significant performance differences.
In fact, SAM2 with larger backbones can even decrease segmentation performance, and the same phenomenon exists for SAM models.
Switching to automatic mode results in a significant drop in performance for both SAM2 and SAM, but the changes in backbone parameter size do not greatly impact segmentation results.
Therefore, to improve the performance of SAM2 on the SID task, it is not appropriate to simply increase the depth and parameter size of the model.
Qualitative Results.
As shown in fig:shadow, both SAM and SAM2 are effective in segmenting instances in automatic mode, but face challenges in accurately identifying shadows.
This may be caused by the lack of instance-shadow IoU matching operations in SAM2.
When provided with box prompts, both models show significant improvement in shadow segmentation, with SAM2 having a slight edge in capturing shadow details.
Despite these improvements, the overall quality of shadow segmentation by SAM2 still falls short compared to corresponding instances.
§.§.§ Dichotomous Image Segmentation
Dichotomous image segmentation focuses on identifying class-agnostic foreground objects in natural scenes. In automatic prediction mode, SAM generates multiple binary masks for each sample. To select the most suitable foreground mask, we use a maximum Intersection over Union (IoU) strategy, choosing the mask with the highest IoU score.
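A minimal sketch of this selection step (the function name is ours for illustration):

import numpy as np

def pick_foreground_mask(candidate_masks, gt_mask):
    # Among the binary masks produced in automatic mode, keep the one with
    # the highest IoU against the ground-truth foreground.
    def iou(a, b):
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union if union > 0 else 0.0
    scores = [iou(m.astype(bool), gt_mask.astype(bool)) for m in candidate_masks]
    best = int(np.argmax(scores))
    return candidate_masks[best], scores[best]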
Quantitative Results.
table:DIS1 and table:DIS2 present the quantitative comparison results of SAM and SAM2 against task-specific methods. In the automatic setting, SAM2 models, particularly SAM2-T, show marked improvement over SAM models.
Concretely, SAM2-T achieves an F_β^max of 0.306 on DIS-VD, compared to 0.215 for SAM-B. SAM2-B+ and SAM2-L further enhance performance, with SAM2-B+ reaching 0.428 in F_β^max on DIS-VD, surpassing all SAM variants.
These improvements indicate better segmentation quality and alignment, as evidenced by higher S_α and E_m values. In the GT-Bbox setting, SAM2 models demonstrate significant gains across all metrics, except for HCE. For instance, SAM2-B+ achieves F_β^max of 0.765 on DIS-VD, surpassing SAM-L's 0.739, although the HCE metric increases by 100 points.
Generally, the HCE metric is more sensitive to the structural refinement of the segmentation map compared to traditional accuracy metrics like weighted F-measure, mean absolute error, and mean enhanced alignment measure.
It indicates that SAM2 enhances the overall perception of the target, but it still struggles to identify the dominant area while accurately segmenting detailed object structures.
Overall, SAM2 models offer substantial improvements over SAM across the vast majority of metrics in all datasets, nearly approaching the performance of the fully supervised method IS-Net.
However, the HCE scores illustrate that both SAM and SAM2 have limited potential in representing detailed structures.
Qualitative Results.
Further analysis of the qualitative results, particularly in fig:DIS, reveals that both SAM and SAM2 encounter difficulties in identifying foreground objects, whether in automatic mode or with box prompts.
Notably, when receiving a bounding box prompt, SAM can roughly outline the main body of objects, while SAM2 enhances this capability by improving segmentation completeness.
For example, SAM can segment the body of a ship with the aid of a bounding box prompt (see the seventh column of fig:DIS), but it tends to miss smaller details such as the mast and thin lines.
Thus, although SAM2 improves the accuracy for locating foreground objects in natural scenes, it still falls short in accurately capturing the full extent of dominant areas of targets and rendering intricate structural details.
§ DISCUSSION
We conducted extensive quantitative and qualitative evaluations of SAM2 on various class-agnostic instance-level segmentation benchmarks.
In CIS, SIS, and SID tasks, SAM2 outperforms task-specific methods in the GT-Bbox mode.
For the relatively straightforward SIS task, SAM2 attains an impressive AP score of 93.4 on SIP test set.
For the more challenging CIS task, SAM2 reaches a significant AP score of 73.5 on the NC4K test set, far exceeding the performance of specific methods.
In the SID task, SAM2 segments instances effectively but struggles with shadow matching.
As observed in the qualitative comparison, SAM2 can hardly segment out shadows without the GT-Bbox.
In the DIS task, SAM2 performs poorly, even in settings with box prompts it is not able to perform granular segmentation for complex structured objects.
Overall, SAM2 with GT-Bbox achieves better results for both salient and camouflaged objects, especially for salient instances. However, its ability to handle very delicate objects requires improvement.
In comparison with SAM, we found that SAM2 falls short of the performance of SAM in automatic mode across the CIS, SIS, and SID tasks.
Interestingly, in GT-Bbox mode, SAM2 significantly outperforms SAM.
Besides, we observe that SAM2 models with larger backbones do not always achieve better performance, and may even degrade it, which is especially noticeable in the automatic mode.
These findings provide valuable insights for future applications of SAM2 in instance-level segmentation tasks.
§ CONCLUSION
In this study, we evaluate the zero-shot performance of SAM2 in class-agnostic instance-level segmentation tasks across four scenarios: Salient Instance Segmentation (SIS), Camouflaged Instance Segmentation (CIS), Shadow Instance Detection (SID), and Dichotomous Image Segmentation (DIS).
In the automatic setting, SAM2 underperforms compared to SAM and task-specific methods in the SIS, CIS, and SID tasks.
When provided with bounding box prompts, especially in the DIS task, SAM2 demonstrates its capability to generate more refined masks.
The experimental results demonstrate that SAM2 excels in class-agnostic instance-level segmentation when guided by prompts, as well as its potential capabilities in diverse scenarios.
In future work, we aim to fine-tune SAM2 and develop adapters to boost its performance across various instance-level segmentation tasks.
DepthCrafter: Generating Consistent Long Depth Sequences for Open-world Videos

Wenbo Hu, Xiangjun Gao, Xiaoyu Li, Sijie Zhao, Xiaodong Cun, Yong Zhang, Long Quan, Ying Shan
[Teaser figure] We present DepthCrafter, a novel video depth estimation approach that leverages video diffusion models.
It can generate temporally consistent long depth sequences with fine-grained details for open-world videos, without the need for additional information such as camera poses or optical flow.
[1] Joint first authors.
[2] Corresponding authors.
§ ABSTRACT
Despite significant advancements in monocular depth estimation for static images, estimating video depth in the open world remains challenging, since open-world videos are extremely diverse in content, motion, camera movement, and length.
We present DepthCrafter, an innovative method for generating temporally consistent long depth sequences with intricate details for open-world videos, without requiring any supplementary information such as camera poses or optical flow.
DepthCrafter achieves generalization ability to open-world videos by training a video-to-depth model from a pre-trained image-to-video diffusion model, through our meticulously designed three-stage training strategy with the compiled paired video-depth datasets.
Our training approach enables the model to generate depth sequences with variable lengths at one time, up to 110 frames, and harvest both precise depth details and rich content diversity from realistic and synthetic datasets.
We also propose an inference strategy that processes extremely long videos through segment-wise estimation and seamless stitching.
Comprehensive evaluations on multiple datasets reveal that DepthCrafter achieves state-of-the-art performance in open-world video depth estimation under zero-shot settings.
Furthermore, DepthCrafter facilitates various downstream applications, including depth-based visual effects and conditional video generation.
§ INTRODUCTION
Depth estimation from monocular images or videos, serving as the bridge linking 2D observations and the 3D world, has been a long-standing fundamental problem in computer vision.
It plays a crucial role in a wide range of downstream applications, such as mixed reality, AI-generated content, autonomous driving, and robotics <cit.>.
The inherent ambiguity makes it extremely challenging, as the observed information from a single view is insufficient to determine the depth of a scene uniquely.
With recent advances in foundation models, we have witnessed significant progress in depth estimation from monocular images <cit.>.
However, all these methods are tailored for static images, without considering the temporal information in videos.
Temporal inconsistency, or flickering, would be observed when directly applying them to videos, as shown in Fig. <ref>.
Native video depth estimation methods <cit.> typically optimize a temporally consistent depth sequence in 3D space from a pre-trained image depth model, with given or learnable calibrated camera poses.
Their performance is sensitive to both the proportion of dynamic content and the quality of the camera poses.
Yet, videos in the open world are diverse in content, motion, camera movement, and length, making these methods hard to perform well in practice.
Moreover, the required camera poses are often non-trivial to obtain in open-world videos, particularly for long videos and videos with abundant dynamic content.
In this paper, we aim to generate temporally consistent long depth sequences with high-fidelity details for open-world videos, without requiring any additional information, e.g., camera poses or optical flow.
Observing the strong capability of diffusion models in generating various types of videos <cit.>, we propose a novel approach, named DepthCrafter, to leverage the video diffusion model for video depth estimation, while maintaining the generalization ability to open-world videos.
We train our DepthCrafter, a video-to-depth model, from a pre-trained image-to-video diffusion model, using our compiled paired video-depth datasets, which are in two styles, realistic and synthetic, where the realistic dataset provides rich content diversity and the synthetic dataset offers precise depth details.
On the aspect of temporal context, existing video diffusion models can only produce a fixed and small number of frames at a time, e.g., 25 frames in SVD <cit.>.
However, this is often too short for open-world video depth estimation to accurately arrange depth distributions throughout the video.
Considering both the respective advantages of the two-styled datasets and the requirement of variable long temporal context, we present a three-stage training strategy to progressively train certain layers of the diffusion model on different datasets with variable lengths.
By doing so, we can adapt the diffusion model to generate depth sequences with variable lengths at one time, up to 110 frames, and harvest both the precise depth details and rich content diversity.
To further enable estimating depth sequences for extremely long videos in the open world, we design an inference strategy to process the video in overlapped segments and seamlessly stitch them together.
We extensively evaluate our DepthCrafter on multiple datasets under zero-shot settings.
Both qualitative and quantitative results demonstrate that our DepthCrafter achieves state-of-the-art performance in open-world video depth estimation, outperforming existing methods by a large margin.
Besides, we demonstrate that our DepthCrafter facilitates various downstream applications, including depth-based visual effects and conditional video generation.
Our contributions can be summarized below:
* We innovate DepthCrafter, a novel method to generate temporally consistent long depth sequences with fine-grained details for open-world videos, outperforming existing approaches by a large margin.
* We present a three-stage training strategy to enable generating depth sequences with a long and variable temporal context, up to 110 frames. It also allows us to harvest both the precise depth details and rich content diversity from synthetic and realistic datasets.
* We design an inference strategy to segment-wisely process videos beyond 110 frames and seamlessly stitch them together, enabling depth estimation for extremely long videos.
§ RELATED WORK
Monocular depth estimation.
Deep neural networks have dominated monocular depth estimation <cit.> for their superior performance.
Nevertheless, the generalization ability to diverse open-world scenes is challenging due to the limited training data.
To this end, MiDaS <cit.> presents an affine-invariant loss for training on mixed datasets.
Depth-Anything (V2) <cit.> followed this idea and proposed to train the model on both labeled and large-scale unlabeled images, achieving good generalization ability.
Marigold <cit.> and GeoWizard <cit.> leverage the diffusion priors to realize zero-shot transfer to unseen datasets.
Besides, a stream of methods focus on estimating metric depth, such as ZoeDepth <cit.>, UniDepth <cit.>, and Metric3D <cit.>.
However, all these methods are tailored for static images, while our work aims to estimate temporally consistent depth sequences from open-world videos.
Video depth estimation.
Compared to single-image depth estimation, video depth additionally requires temporal consistency.
Existing methods could be categorized into two classes: test-time optimization and feed-forward prediction.
Test-time optimization methods <cit.> involve an optimization procedure for each video during inference, typically requiring camera poses or optical flow.
This type of method usually can produce consistent video depth, but the required camera poses may limit their applicability to open-world videos.
Feed-forward prediction methods directly predict depth sequences from videos <cit.>, e.g., DeepV2D <cit.> combines camera motion estimation with depth estimation, MAMO <cit.> leverages memory attention, and NVDS <cit.> introduces a plug-and-play stabilization network.
However, due to the limited training data and model capacity, these methods often fail to address the in-the-wild videos with diverse content.
By leveraging video diffusion priors and our designed three-stage training strategy, our method demonstrates the ability to perform open-world video depth estimation.
Video diffusion models.
Diffusion models <cit.> have achieved high-fidelity image generation results from text descriptions benefiting from web-scale aligned image-text datasets.
Consequently, these models have been extended to generating videos of various types from text or images <cit.>.
Among these methods, VDM <cit.> presents the first results on video generation using diffusion models, while Sora <cit.> has shown remarkable results in this area.
SVD <cit.> provides the popular open-source pre-trained models for image-to-video.
Trained on a well-curated video dataset, SVD can generate high-quality videos and is used as the model prior for various video-related tasks.
In this paper, we leverage the video diffusion model for high-fidelity consistent video depth estimation by taking the input video as the condition.
Concurrent to our work, ChronoDepth <cit.> also explores video depth estimation with video diffusion priors.
However, ChronoDepth only supports a short temporal context, 10 frames, which is insufficient to accurately arrange depth distributions throughout the video.
In contrast, our method not only supports variable-length temporal context, up to 110 frames, but also can estimate depth sequences for extremely long videos.
§ METHOD
Given an open-world video, 𝐯∈ℝ^T × H × W × 3, our goal is to estimate temporally consistent depth sequences, 𝐝∈ℝ^T × H × W, with fine-grained details.
Considering the diversity of open-world videos in content, motion, camera movement, and length, the challenges to achieving our goal are threefold:
1.) a comprehensive understanding of video content for generalization ability;
2.) a long and variable temporal context to arrange the entire depth distributions accurately and keep temporal consistency;
and 3.) the ability to process extremely long videos.
As shown in Fig. <ref>, we tackle these challenges by formulating the video depth estimation as a conditional diffusion generation problem to model the conditional distribution p(𝐝 | 𝐯), training a video-to-depth model from a pre-trained image-to-video diffusion model through a meticulously designed three-stage training strategy with compiled paired video-depth datasets, and crafting an inference strategy to process extremely long videos through segment-wise estimation and seamless stitching.
§.§ Preliminaries of Video Diffusion Models
Diffusion models <cit.> learn the data distribution p(𝐱) by a forward diffusion process to gradually noise the data to a target distribution, the Gaussian distribution, and a reverse denoising process to iteratively recover the data from the noise by a learned denoiser.
In this paper, our study is conducted based on Stable Video Diffusion (SVD) <cit.>, which is a famous open-source video diffusion model.
SVD adopts the EDM-framework <cit.> for the noise schedule and denoising process.
The diffusion process is achieved by adding σ_t^2-variance Gaussian noise to the data 𝐱_0 ∼ p(𝐱):
𝐱_t = 𝐱_0 + σ_t ϵ, ϵ∼𝒩(0, 𝐈),
where 𝐱_t ∼ p(𝐱;σ_t) is the data with noise level σ_t.
When σ_t is large enough (σ_max),
the distribution would be indistinguishable from the Gaussian distribution.
Based on this fact, the diffusion model starts from a high-variance Gaussian noise ϵ∼𝒩(0, σ_max^2𝐈) and gradually denoises it towards σ_0 = 0 to generate the data.
The denoiser D_θ is a learnable function that tries to predict the clean data, 𝐱̃_0 = D_θ(𝐱_t; σ_t ).
Its training objective is the denoising score matching:
𝔼_𝐱_t ∼ p(𝐱;σ_t), σ_t ∼ p(σ)
[λ_σ_t‖ D_θ(𝐱_t; σ_t; 𝐜) - 𝐱_0 ‖_2^2 ],
where p(σ) is the noise level distribution during training, 𝐜 denotes the conditioning information, and λ_σ_t is the weight for the denoising loss at time t.
To promote the learning, EDM adopts the preconditioning strategy <cit.> to parameterize the denoiser D_θ as:
D_θ(𝐱_t; σ_t; 𝐜) = c_skip(σ_t) 𝐱_t + c_out(σ_t) F_θ(c_in(σ_t) 𝐱_t; c_noise(σ_t); 𝐜),
where F_θ is implemented as a learnable U-Net <cit.>, and c_in, c_out, c_skip, and c_noise are preconditioning functions.
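For concreteness, a minimal PyTorch sketch of this preconditioning is given below. It assumes EDM's standard choices of c_skip, c_out, c_in, and c_noise with data standard deviation σ_data; SVD's exact variant may differ in sign and scaling conventions.

import torch

SIGMA_DATA = 0.5  # assumed data standard deviation, as in EDM

def edm_denoiser(F_theta, x_t, sigma_t, cond):
    # Preconditioned denoiser D_theta: predicts the clean data from the
    # noisy input x_t at noise level sigma_t, conditioned on cond.
    sigma_t = torch.as_tensor(sigma_t, dtype=x_t.dtype)
    c_skip = SIGMA_DATA**2 / (sigma_t**2 + SIGMA_DATA**2)
    c_out = sigma_t * SIGMA_DATA / torch.sqrt(sigma_t**2 + SIGMA_DATA**2)
    c_in = 1.0 / torch.sqrt(sigma_t**2 + SIGMA_DATA**2)
    c_noise = 0.25 * torch.log(sigma_t)
    return c_skip * x_t + c_out * F_theta(c_in * x_t, c_noise, cond)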
§.§ Formulation with Diffusion Models
Latent space transformation.
To generate high-resolution depth sequences without sacrificing computational efficiency, we adopt the framework of Latent Diffusion Models (LDMs) <cit.> that perform in a low-dimensional latent space, rather than the original data space.
The transformation between the latent and data spaces is achieved by a Variational Autoencoder (VAE) <cit.>, which was originally designed for encoding and decoding video frames in SVD <cit.>.
Fortunately, we found it can be directly used for depth sequences with only a negligible reconstruction error, which is similar to the observation in Marigold <cit.> for image depth estimation.
As shown in Fig. <ref>, the latent space transformation is formulated as:
𝐳^(𝐱) = ℰ(𝐱), 𝐱̂ = 𝒟(𝐳^(𝐱)),
where 𝐱 is either the video 𝐯 or the depth sequence 𝐝, 𝐳^(𝐱) is the latent representation of the data, 𝐱̂ is the reconstructed data, ℰ and 𝒟 are encoder and decoder of the VAE, respectively.
For the depth sequence, we replicate it three times to meet the 3-channel input format of the encoder in the VAE, and average the three channels of the decoder output to obtain the final depth prediction.
Following the practice in image depth estimation <cit.>, we adopt the relative depth, i.e., the affine-invariant depth, which is normalized to [0, 1].
But differently, our predicted depth sequence shares the same scale and shift across frames, rather than a per-frame normalization, which is crucial for maintaining temporal consistency.
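A minimal sketch of this video-level normalization follows; the robust 2%/98% percentiles here are our own assumption for illustration, since any single scale and shift shared across all frames preserves the relative depth ordering over time.

import numpy as np

def normalize_depth_video(d, q_lo=0.02, q_hi=0.98):
    # Affine-invariant normalization of a depth video d of shape (T, H, W)
    # to [0, 1] with one shift and scale shared by all frames.
    lo, hi = np.quantile(d, q_lo), np.quantile(d, q_hi)
    return np.clip((d - lo) / max(hi - lo, 1e-6), 0.0, 1.0)

# A per-frame normalization, by contrast, rescales each frame independently
# and destroys the relative depth ordering across time:
# d_bad = np.stack([(f - f.min()) / (f.max() - f.min() + 1e-6) for f in d])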
Conditioning on the video.
SVD is an image-to-video diffusion model that generates videos conditioned on a single image.
The conditional image is fed into the U-Net in two ways, , concatenating its latent to the input latent, and injecting its CLIP <cit.> embedding to the intermediate features via cross-attention.
Yet, our DepthCrafter involves the generation of depth sequences conditioned on video frames in a frame-to-frame fashion.
Therefore, we adapt the conditioning mechanism in SVD to meet our video-to-depth generation task.
As shown in Fig. <ref>, given the encoded latents of the depth sequence 𝐳^(𝐝) and video frames 𝐳^(𝐯) from Eq. (<ref>), we concatenate the video latent to the input noisy depth latent frame by frame, rather than only at the first frame, to condition the denoiser for generating the depth sequence.
For high-level semantic information, we embed the video frames using CLIP and then inject the embeddings in a frame-to-frame manner to the denoiser via cross-attention.
Compared to the original conditioning mechanism, our adapted conditioning provides more comprehensive information from the video frames to the denoiser, which significantly improves the alignment between the generated depth sequences and the video content, as well as the temporal consistency.
§.§ Training
To train our DepthCrafter, we need a large amount of high-quality paired video-depth sequences.
Although there are several video depth datasets available, e.g., KITTI <cit.>, Scannet <cit.>, VDW <cit.>, DynamicReplica <cit.>, and MatrixCity <cit.>, they are either lacking high-quality depth annotations or restricted to a specific domain, e.g., driving scenes, indoor scenes, or synthetic scenes.
Dataset construction.
To this end, we compiled paired datasets of two styles, realistic and synthetic, where the realistic dataset is large-scale and diverse, while the synthetic dataset is miniature but fine-grained and accurate.
The realistic dataset is constructed from a large number of binocular videos with a wide range of scene and motion diversity.
We cut the videos according to scene changes, and apply the state-of-the-art video stereo matching method, i.e., BiDAStereo <cit.>, to generate temporally consistent depth sequences.
Finally, we obtained ∼200K paired video-depth sequences with the length of 50-200 frames.
The synthetic dataset is a combination of the DynamicReplica <cit.> and MatrixCity <cit.> datasets, which contains ∼3K fine-grained depth annotations with a length of 150 frames.
Challenges of variable long temporal context.
Different from image depth estimation which can determine the distribution of relative depth from a single frame, the video depth estimation requires a long temporal context to arrange the depth distributions accurately for the entire video and keep the temporal consistency.
Besides, the model should support variable-length estimation as the length of open-world videos may vary significantly.
However, existing open-source video diffusion models can only generate a fixed small number of frames at a time, e.g., 25 frames in SVD <cit.>.
It is non-trivial to adapt the pre-trained model to meet this requirement, as directly fine-tuning it with long sequences is memory-consuming; for example, a modern GPU with 40GB memory can only support the training of a 25-frame sequence in SVD.
Three-stage training.
Considering both the two-style paired datasets and the long temporal context requirement, we design a three-stage training strategy to harvest the variety of video content, the precise depth details, as well as the support for long and variable sequences.
As shown in Fig. <ref>, we train our DepthCrafter from the pre-trained SVD in three stages.
We first train it on our large realistic dataset to adapt the model to the video-to-depth generation task.
The sequence length in this stage is randomly sampled from [1, 25] frames, such that the model can learn to generate depth sequences with variable lengths.
In the second stage, we only fine-tune the temporal layers of the model still on our large realistic dataset, with the sequence length randomly sampled from [1, 110] frames.
The reason why we only fine-tune the temporal layers is that the temporal layers are more sensitive to the sequence length while the spatial layers are already adapted to the video-to-depth generation task in the first stage, and doing so significantly reduces memory consumption compared to fine-tuning the full model.
The long temporal context in this stage enables the model to precisely arrange the entire depth distributions for long and variable sequences.
In the third stage, we fine-tune the spatial layers of the model on our small synthetic dataset, with a fixed sequence length of 45 frames since the model has already learned to generate depth sequences with variable lengths in the first two stages and tuning the spatial layers would not affect the temporal context.
As the depth annotations in the synthetic dataset are more accurate and fine-grained, the model can learn more precise depth details in this stage.
The three-stage training strategy makes our DepthCrafter capable of generating high-quality depth sequences for open-world videos with variable lengths.
§.§ Inference for Extremely Long Videos
Although the model can estimate depth sequences up to the length of 110 frames after training, it is still far from long enough for open-world videos, which can even contain hundreds or thousands of frames.
To this end, we design an inference strategy to infer extremely long depth sequences in a segment-wise manner and seamlessly stitch them together to form the entire depth sequence.
As shown in Fig. <ref>, we first divide the video into overlapped segments, whose lengths are up to 110 frames.
Then we estimate the depth sequences for each segment.
Rather than purely initializing the input latent with Gaussian noise ϵ∼𝒩(0, σ_max^2𝐈), we initialize the latent of the overlapped frames by adding noise to the denoised latent from the previous segment, to anchor the scale and shift of the depth distributions.
Finally, to further ensure the temporal smoothness across segments, we craft a mortise-and-tenon style latent interpolation strategy to stitch consecutive segments together, inspired by <cit.>.
Specifically, we interpolate the latent of the overlapped frames o_i from the two segments with the interpolation weights w_i and 1-w_i, respectively, where w_i is linearly decreased from 1 to 0.
The final estimated depth sequence is obtained by decoding the stitched latent segments with the decoder 𝒟 in the VAE.
With the training and inference strategies, our DepthCrafter can generate temporally consistent long depth sequences for open-world videos.
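The following sketch illustrates the segment-wise inference described above, assuming latents shaped (T, C, H, W) and the window/overlap sizes from our experiments; denoise and add_noise are stand-ins for the diffusion sampler and the forward-noising routine, not the released API.

import torch

def infer_long_video(video_latents, denoise, add_noise,
                     window=110, overlap=25, sigma_max=700.0):
    T = video_latents.shape[0]
    starts = list(range(0, max(T - window, 0) + 1, window - overlap))
    result = None
    for s in starts:
        seg = video_latents[s : s + window]
        init = torch.randn_like(seg) * sigma_max            # fresh Gaussian noise
        if result is not None:
            # Anchor scale/shift: re-noise the previous segment's denoised
            # latents on the overlapped frames instead of starting from scratch.
            init[:overlap] = add_noise(result[s : s + overlap], sigma_max)
        depth = denoise(init, cond=seg)
        if result is None:
            result = depth
        else:
            # Mortise-and-tenon stitching: linear weights 1 -> 0 on the overlap.
            w = torch.linspace(1, 0, overlap).view(-1, 1, 1, 1)
            blended = w * result[s : s + overlap] + (1 - w) * depth[:overlap]
            result = torch.cat([result[:s], blended, depth[overlap:]], dim=0)
    return result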
§ EXPERIMENTS
§.§ Implementation
We implemented our DepthCrafter based on SVD <cit.>, using the diffusers <cit.> library.
We train our model at the resolution of 320 × 640 for efficiency, but we can estimate depth sequences at any resolution, e.g., 576 × 1024, during inference.
We use the Adam optimizer <cit.> with a learning rate of 1×10^-5 and a batch size of 8.
The number of iterations in the three stages of training is 80K, 40K, and 10K, respectively.
We employed eight NVIDIA A100 GPUs for training, with a total training time of about five days.
We also adopt the classifier-free guidance <cit.> to improve the details of the generated depth sequences.
The number of denoising steps is set to 25 for all experiments.
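For reference, classifier-free guidance blends an unconditional and a conditional prediction at each denoising step; the sketch below is a generic formulation, not DepthCrafter-specific code, and the zeroed-out conditioning is an assumed implementation of the unconditional branch.

import torch

def guided_prediction(unet, noisy_latents, t, video_cond, guidance_scale=1.2):
    # Unconditional branch: conditioning replaced by zeros.
    uncond = unet(noisy_latents, t, cond=torch.zeros_like(video_cond))
    # Conditional branch: conditioned on the input video latents.
    cond = unet(noisy_latents, t, cond=video_cond)
    # Blend the two predictions.
    return uncond + guidance_scale * (cond - uncond)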
§.§ Evaluation
Evaluation datasets.
We evaluate our model on four video datasets, a single-image dataset, as well as the DAVIS dataset <cit.> and in-the-wild videos for qualitative results.
None of the evaluation videos were included in our training process.
Sintel <cit.> is a synthetic dataset with precise depth labels, featuring dynamic scenes with diverse content and camera motion.
It contains 23 sequences with the length of around 50 frames each in the training set.
ScanNet v2 <cit.> is an indoor dataset with depth maps obtained from a Kinect sensor.
For evaluation purposes, we employed the test set, which includes 100 RGB-D video sequences of various scenes.
We extracted 90 frames from each sequence at a rate of 15 frames per second.
Since ScanNet v2 contains only static indoor scenes, we further introduced 5 dynamic indoor RGB-D videos with a length of 110 frames each from the Bonn <cit.> dataset to better evaluate the performance of our model on dynamic scenes.
KITTI <cit.> is a street-scene outdoor dataset for autonomous driving, with sparse metric depths captured by a LiDAR sensor.
We adopted the validation set, which includes 13 scenes, and extracted 13 videos from it with a length of 110 frames each.
Besides, we also evaluated our model for single-image depth estimation on the NYU-v2 <cit.> dataset, which contains 654 images in the test split.
These datasets cover a wide range of scenes, including synthetic and realistic scenes, indoor and outdoor scenes, and static and dynamic scenes, to evaluate the generalization ability of our model across various open-world scenarios.
Evaluation metrics.
Following conventional practice in relative depth estimation <cit.>, we align the estimated depth maps with the ground truth using a scale and shift before calculating the metrics.
Different from previous methods that optimize the scale and shift individually for each frame, we optimize a shared scale and shift across the entire video, which is more challenging but necessary for video depth estimation to ensure temporal consistency.
We calculate two metrics: AbsRel ↓ (absolute relative error: |𝐝̂-𝐝| / 𝐝) and δ_1↑ (percentage of max(𝐝/𝐝̂, 𝐝̂/𝐝) < 1.25), which are widely used in the literature <cit.>.
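A minimal sketch of this evaluation protocol is given below: a single scale and shift is fit by least squares over all valid pixels of the whole video before computing the two metrics. The function is illustrative; exact masking and clamping details may differ from our evaluation scripts.

import numpy as np

def align_and_evaluate(pred, gt, valid):
    # Fit one shared (scale, shift) over all valid pixels of the entire video.
    p, g = pred[valid], gt[valid]
    A = np.stack([p, np.ones_like(p)], axis=1)
    (scale, shift), *_ = np.linalg.lstsq(A, g, rcond=None)
    d = scale * pred + shift
    # AbsRel: mean of |d - gt| / gt over valid pixels.
    abs_rel = np.mean(np.abs(d[valid] - g) / g)
    # delta_1: fraction of pixels with max(d/gt, gt/d) < 1.25.
    ratio = np.maximum(d[valid] / g, g / d[valid])
    delta1 = np.mean(ratio < 1.25)
    return abs_rel, delta1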
Quantitative results.
We compare our DepthCrafter with the representative methods for both single-image and video depth estimation, Marigold <cit.>, Depth-Anything <cit.>, Depth-Anything-V2 <cit.>, NVDS <cit.>, and ChronoDepth <cit.>.
As shown in Tab. <ref>, our DepthCrafter achieves state-of-the-art performance on all four video datasets, thanks to the powerful open-world video understanding capability of the video diffusion models and the three-stage training strategy that leverages both realistic and synthetic datasets.
For Sintel and KITTI, characterized by significant camera motion and fast-moving objects, our DepthCrafter outperforms the previous strongest Depth-Anything (V2) model tremendously in terms of both the AbsRel and δ_1 metrics, e.g., a (0.697-0.564)/0.564=23.6% improvement in δ_1 on Sintel.
For indoor datasets like ScanNet and Bonn, featuring minimal camera motion and roughly the same room scales, Depth-Anything has exhibited strong performance.
Nevertheless, we still achieve some performance gains over Depth-Anything, e.g., a (0.130-0.125)/0.130=3.8% improvement in AbsRel on ScanNet.
Note that the sequence length of these datasets varies from 50 to 110 frames, and our model can generalize well across different video lengths.
Qualitative results.
To further demonstrate the effectiveness of our model, we present the qualitative results on video depth estimation from the DAVIS dataset <cit.>, Sora generated videos <cit.>, and open-world videos, including human actions, animals, architectures, cartoons, and games, where the sequence length varies from 90 to 195 frames.
As shown in Fig. <ref>, we show the temporal profiles of the estimated depth sequences in the red line position by slicing the depth values along the time axis, to better visualize the temporal consistency of the estimated depth sequences, following the practice in <cit.>.
We can observe that our DepthCrafter can produce temporally consistent depth sequences with fine-grained details across various open-world videos, while both NVDS and Depth-Anything exhibit zigzag artifacts in the temporal profiles, indicating the flickering artifacts in the estimated depth sequences.
These results demonstrate the effectiveness of our DepthCrafter in generating temporally consistent long depth sequences with high-fidelity details for open-world videos.
Single-image depth estimation.
Although our model is designed for video depth estimation, it can also perform single-image depth estimation, as our DepthCrafter can estimate video depth of any length.
As shown in Tab. <ref>, our DepthCrafter achieves competitive performance in single-image depth estimation on the NYU-v2 dataset.
Since the depth labels in the NYU-v2 dataset are sparse and noisy, we also provide the qualitative results in Fig. <ref> to demonstrate the effectiveness of our model in estimating depth maps from static images.
We can observe that our DepthCrafter can even produce more detailed depth maps than Depth-Anything-V2, which is the existing state-of-the-art single-image depth estimation model.
These results demonstrate the ability of our DepthCrafter for processing both video and single-image depth estimation tasks.
§.§ Ablation Studies
Effectiveness of the three-stage training strategy.
We first ablate the effectiveness of the three-stage training strategy by evaluating the performance of our model at the end of each stage on the Sintel dataset <cit.>, since it contains precise depth annotations on dynamic scenes.
From Tab. <ref>, we can observe that the performance of our model improves at almost every stage as training progresses, indicating the effectiveness of the three-stage training strategy.
Although the AbsRel metric slightly increases in stage 2, the δ_1 metric consistently improves, and stage 2 is essential for supporting the long temporal context up to 110 frames.
Effectiveness of the inference strategy.
To ablate the effectiveness of our inference strategy components, we consider these variants:
baseline, which independently infers each segment and directly averages the overlapped frames;
+ initialization, which contains the same initialization of overlapped latents as our method, but without the stitching process;
+ initialization & stitching, which is our full method.
We visually compare the temporal profiles of the estimated depth sequences of these variants in Fig. <ref>.
We can observe the overlapped jaggies in both the static regions (pointed by the yellow arrow) and the dynamic regions (pointed by the green arrow) in temporal profiles of the “baseline” method, which indicates the flickering artifacts.
The “+ initialization” method can alleviate the flickering artifacts in the static regions, but still has jaggies in the dynamic regions, while our full method can produce smooth depth sequences in both static and dynamic regions.
§.§ Applications
Our DepthCrafter can facilitate various downstream applications, e.g., foreground matting, depth slicing, fog effects, and depth-conditioned video generation, by providing temporally consistent depth sequences with fine-grained details for open-world videos.
We show example results of fog effects and depth-conditioned video generation in Fig. <ref>, while more visual effects results are available in our website.
For the fog effect, we blend the fog map with the input video frames based on the depth values to simulate varying transparency levels in fog.
Many recent conditioned video generation models <cit.> employ depth maps as structure conditions for video generation or editing.
We adopt Control-A-Video <cit.> and video depth of our method as conditions to generate a video with prompts “a rider walking through stars, artstation”.
The visual effects of these applications rely heavily on the accuracy and consistency of the video depth, which demonstrates the wide applicability of our DepthCrafter in various downstream tasks.
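As an illustration of the fog effect described above, the following sketch blends a fog color into each frame according to normalized depth. The exponential falloff is one plausible choice of blending curve and is an assumption here, as is the convention that larger depth values are farther from the camera.

import numpy as np

def apply_fog(frame, depth, fog_color=(0.8, 0.8, 0.85), density=2.0):
    # frame: float array (H, W, 3) in [0, 1]; depth: float array (H, W),
    # assumed to increase with distance from the camera.
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    alpha = 1.0 - np.exp(-density * d)          # ~0 near the camera, -> 1 far away
    fog = np.asarray(fog_color).reshape(1, 1, 3)
    return (1 - alpha[..., None]) * frame + alpha[..., None] * fog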
§ CONCLUSION
We present DepthCrafter, a novel method for open-world video depth estimation by leveraging video diffusion models.
It can generate temporally consistent depth sequences with fine-grained details for videos with diverse content, motion, and camera movement, without requiring any additional information.
It also supports videos of variable lengths, ranging from one frame (static image) to extremely long videos.
This is achieved through our meticulously designed three-stage training strategy, compiled paired video-depth datasets, and an inference strategy.
Extensive evaluations have demonstrated that DepthCrafter achieves state-of-the-art performance in open-world video depth estimation under zero-shot settings.
It also facilitates various downstream applications, including depth-based visual effects and conditional video generation.
There are still some limitations to be addressed in the future, such as the high computation and memory costs, which are due to the large model size and the iterative denoising process in the diffusion model.
|
http://arxiv.org/abs/2409.03532v1 | 20240905134517 | The Moore-Tachikawa conjecture via shifted symplectic geometry | [
"Peter Crooks",
"Maxence Mayrand"
] | math.SG | [
"math.SG",
"math.AG",
"math.RT"
] |
The Moore–Tachikawa conjecture via shifted symplectic geometry
Peter Crooks
Maxence Mayrand
[Peter Crooks] Department of Mathematics and Statistics, Utah State University, 3900 Old Main Hill, Logan, UT 84322, USA. Email: [email protected]
[Maxence Mayrand] Département de mathématiques, Université de Sherbrooke, 2500 Bd de l'Université, Sherbrooke, QC, J1K 2R1, Canada. Email: [email protected]
2020 Mathematics Subject Classification: 53D17 (primary); 14L30, 57K16 (secondary)
§ ABSTRACT
We use shifted symplectic geometry to construct the Moore–Tachikawa topological quantum field theories (TQFTs) in a category of Hamiltonian schemes. Our new and overarching insight is an algebraic explanation for the existence of these TQFTs, i.e. that their structure comes naturally from three ingredients: Morita equivalence, as well as multiplication and identity bisections in abelian symplectic groupoids. Using this insight, we generalize the Moore–Tachikawa TQFTs in two directions.
The first generalization concerns a 1-shifted version 𝐖𝐒 of the Weinstein symplectic category, together with a completion 𝐖𝐒^♯. Each abelianizable quasi-symplectic groupoid 𝒢 is shown to determine a canonical 2-dimensional TQFT η_𝒢 : 𝐂𝐨𝐛_2 ⟶ 𝐖𝐒^♯. We recover the open Moore–Tachikawa TQFT and its multiplicative counterpart as special cases.
Our second generalization is an affinization process for TQFTs. We first enlarge Moore and Tachikawa's category of holomorphic symplectic varieties with Hamiltonian actions to 𝐌𝐓^alg, a category of affine Poisson schemes with Hamiltonian actions of affine symplectic groupoids.
We then show that if 𝒢 ⇉ X is an affine symplectic groupoid that is abelianizable when restricted to an open subset U ⊆ X satisfying Hartogs' theorem, then 𝒢 determines a TQFT η_𝒢 : 𝐂𝐨𝐛_2 ⟶ 𝐌𝐓^alg.
In more detail, we first devise an affinization process sending 1-shifted Lagrangian correspondences in 𝐖𝐒 to Hamiltonian Poisson schemes in 𝐌𝐓^alg.
The TQFT η_𝒢 is obtained by composing this affinization process with the TQFT η_{𝒢|_U} : 𝐂𝐨𝐛_2 ⟶ 𝐖𝐒^♯ of the previous paragraph.
Our results are also shown to yield new TQFTs outside of the Moore–Tachikawa setting.
§ INTRODUCTION
§.§ Context
The Moore–Tachikawa conjecture <cit.> inspires important, topical, and ongoing research at the interface of geometric representation theory, Lie theory, low-dimensional topology, holomorphic symplectic geometry, and theoretical physics. It arises in a string-theoretic context, from a conjectural class of 6-dimensional superconformal quantum field theories (SQFTs). One may associate such a class to each connected complex simple Lie group G. The Higgs branch of this conjectural class would be a 2-dimensional topological quantum field theory (TQFT) η_G : 𝐂𝐨𝐛_2 ⟶ 𝐌𝐓, i.e. a symmetric monoidal functor, valued in a category 𝐌𝐓 of holomorphic symplectic varieties with Hamiltonian actions. The objects of 𝐌𝐓 are complex semisimple groups, and a morphism from G_1 to G_2 is a holomorphic symplectic variety with a Hamiltonian action of G_1 × G_2. The composition of M ∈ 𝐌𝐓(G_1, G_2) and N ∈ 𝐌𝐓(G_2, G_3) is the Hamiltonian reduction (M × N) ⫽ G_2. Properties of this TQFT would necessarily include η_G(S^1) = G and η_G(D) = G × 𝒮, where D denotes the cup cobordism S^1 ⟶ ∅ [figure: the cup cobordism] and 𝒮 is a Kostant slice in the Lie algebra of G. Moore and Tachikawa's conjecture is that there exists a 2-dimensional TQFT 𝐂𝐨𝐛_2 ⟶ 𝐌𝐓 satisfying these two conditions. While it can be formulated as a problem in pure mathematics, a resolution of this conjecture would clearly have implications for string theory. An affirmative answer would be some evidence that the aforementioned SQFTs actually exist. We refer the reader to Tachikawa's ICM paper <cit.> for further context.
The Moore–Tachikawa conjecture was formulated less than 15 years prior to our completing this manuscript. In this short time, it has witnessed progress and been found to make deep connections to adjacent subjects <cit.>. Progress on the conjecture itself is made in unpublished work of Ginzburg–Kazhdan <cit.>. These authors first construct the open Moore–Tachikawa varieties, thereby proving a relative of the Moore–Tachikawa conjecture. If the affinizations of such varieties were known to be of finite type, then the Moore–Tachikawa conjecture would follow. The work of Braverman–Finkelberg–Nakajima <cit.> implies that the relevant affine schemes are of finite type for G of Lie type A. In particular, the Moore–Tachikawa conjecture holds in type A.
Ginzburg and Kazhdan's approach to the Moore–Tachikawa conjecture involves a kind of Hamiltonian reduction by abelian group schemes. This is formalized and generalized in <cit.>, where we introduce Hamiltonian reduction by symplectic groupoids along pre-Poisson subvarieties. We thereby recover the open Moore–Tachikawa varieties constructed by Ginzburg–Kazhdan, as well as their affinizations. In very rough terms, this is achieved by replacing G with its cotangent groupoid T^*G ⇉ 𝔤^*. One might therefore suspect the following: our recovering the Ginzburg–Kazhdan construction is a shadow of a more general “affinization process", in which the results of <cit.> are used to affinize TQFTs constructed via symplectic groupoids. This is the first of two main themes in our manuscript.
An appealing feature of the Moore–Tachikawa conjecture is its apparent amenability to a wide variety of techniques, including some not used by Ginzburg–Kazhdan or Braverman–Finkelberg–Nakajima. It is in this context that we are led to shifted symplectic geometry, as introduced by Pantev–Töen–Vaquie–Vezzosi <cit.>. In subsequent work <cit.>, Calaque suggests that shifted symplectic geometry should facilitate a rigorous construction of the Moore–Tachikawa TQFT. This is largely consistent with a result in <cit.>, where Bălibanu and the second named author develop a Hamiltonian reduction theory for 1-shifted symplectic groupoids (a.k.a. quasi-symplectic groupoids). One consequence is a multiplicative counterpart of the Ginzburg–Kazhdan construction described above. With this in mind, the second main theme of our manuscript is the relevance of shifted symplectic geometry to the Moore–Tachikawa conjecture.
§.§ Overarching principle
The overarching principle of this manuscript is that the structure underlying certain TQFTs is encoded in multiplication and identity bisections in abelian symplectic groupoids. We thereby offer an algebraic explanation for the existence of the Moore–Tachikawa TQFT in this enlarged category, as well as that of other TQFTs. These ideas are made somewhat more precise in the next few subsections, and completely precise in the main body of the manuscript.
§.§ TQFTs in a 1-shifted Weinstein symplectic category
For the moment, we may work in the smooth or holomorphic categories.
A first useful idea, suggested by Calaque <cit.>, is to temporarily replace the Moore–Tachikawa category 𝐌𝐓 of holomorphic symplectic varieties with a 1-shifted version 𝐖𝐒 of the Weinstein symplectic category <cit.>; see Section <ref>. The objects of 𝐖𝐒 are quasi-symplectic groupoids <cit.>, interpreted as presentations of 1-shifted symplectic stacks.
We then recall that a 2-dimensional TQFT in a symmetric monoidal category 𝐂 is equivalent to a commutative Frobenius object in 𝐂.
We subsequently establish that any abelian symplectic groupoid gives rise to a commutative Frobenius object in 𝐖𝐒^♯, the completion of 𝐖𝐒.
The multiplication and unit of the aforementioned commutative Frobenius object are essentially given by groupoid multiplication and the identity section, respectively.
In the interest of associating TQFTs to a wider class of groupoids, we proceed as follows. A quasi-symplectic groupoid is called abelianizable if it is Morita equivalent to an abelian symplectic groupoid. A notion of Morita transfer then implies that every abelianizable quasi-symplectic groupoid determines a TQFT. This amounts to the following result, the underlying details of which are given in Section <ref>.
Every abelianizable quasi-symplectic groupoid 𝒢 completes to a commutative Frobenius object in 𝐖𝐒^♯ whose product 𝒢_μ ∈ 𝐖𝐒^♯(𝒢 × 𝒢, 𝒢) and unit 𝒢_η ∈ 𝐖𝐒^♯(⋆, 𝒢) are induced by groupoid multiplication and the identity bisection in the abelianization, respectively. It thereby determines a TQFT η_𝒢 : 𝐂𝐨𝐛_2 ⟶ 𝐖𝐒^♯.
In this context, one naturally seeks sufficient conditions for a quasi-symplectic groupoid 𝒢 ⇉ X to be abelianizable. This leads us to introduce the notion of an admissible global slice. We define it to be a submanifold S ⊆ X that intersects every 𝒢-orbit in X transversely in a singleton, such that the isotropy group 𝒢_x is abelian for all x ∈ S. We show 𝒢 ⇉ X to be abelianizable if there exists an admissible global slice S ⊆ X with the property that the 3-form on X pulls back to an exact 3-form on S.
§.§ Affinizations of TQFTs
We now work exclusively over ℂ. It is useful to specialize Main Theorem <ref> as follows. Let G be a connected semisimple affine algebraic group with Lie algebra 𝔤. Write T^*G|_{𝔤^*_reg} for the pullback of T^*G ⇉ 𝔤^* to the regular locus 𝔤^*_reg ⊆ 𝔤^*. One may verify that T^*G|_{𝔤^*_reg} is Morita equivalent to 𝒵_G ⟶ 𝔠 ≔ Spec((Sym 𝔤)^G), the universal centralizer of G. This fact implies that T^*G|_{𝔤^*_reg} is abelianizable. By means of Main Theorem <ref>, T^*G|_{𝔤^*_reg} turns out to determine the open Moore–Tachikawa TQFT. As mentioned above, Ginzburg and Kazhdan affinize some of the morphisms in the image of this TQFT <cit.>. These affinizations are subsequently shown to satisfy the relations of a TQFT, despite not necessarily being of finite type.
Our second main result is that the aforementioned affinization process applies in far greater generality, and that the TQFT relations essentially follow from Main Theorem <ref>. To this end, we enlarge 𝐌𝐓 to what we call the algebraic Moore–Tachikawa category 𝐌𝐓^alg; it is very loosely described as a category with affine symplectic groupoids as objects, and morphisms consisting of isomorphism classes of affine Poisson schemes with Hamiltonian actions of product groupoids. Our previously developed machinery of scheme-theoretic coisotropic reduction <cit.> allows one to compose morphisms in 𝐌𝐓^alg.
We then devise an affinization process which sends 1-shifted Lagrangian correspondences in 𝐖𝐒 to Hamiltonian Poisson schemes in 𝐌𝐓^alg.
In light of the above, we introduce the notion of a Hartogs abelianizable affine symplectic groupoid 𝒢 ⇉ X. By this, we mean that 𝒢 is abelianizable when restricted to an open subset U ⊆ X whose complement has codimension at least two in X.
We then show that the composition of the TQFT associated to 𝒢|_U in Main Theorem <ref> with the affinization process gives 𝒢 the structure of a commutative Frobenius object in 𝐌𝐓^alg.
A key ingredient in the proof is that U ⊆ X satisfies Hartogs' theorem, enabling us to complete the relevant structures on 𝒢|_U to 𝒢.
In more detail, our second main result is as follows.
Let 𝒢 ⇉ X be an affine symplectic groupoid that is abelianizable over an open subset U ⊆ X whose complement has codimension at least two in X.
Then 𝒢 completes to a commutative Frobenius object in 𝐌𝐓^alg whose product 𝒢_μ ∈ 𝐌𝐓^alg(𝒢 × 𝒢, 𝒢) and unit 𝒢_η ∈ 𝐌𝐓^alg(⋆, 𝒢) are the affinizations of (𝒢|_U)_μ and (𝒢|_U)_η in Main Theorem <ref>, respectively.
In this way, 𝒢 determines a TQFT η_𝒢 : 𝐂𝐨𝐛_2 ⟶ 𝐌𝐓^alg.
In analogy with the discussion following Main Theorem <ref>, one should seek sufficient conditions for an affine symplectic groupoid to be Hartogs abelianizable. This leads us to introduce the notion of an admissible Hartogs slice; it is defined somewhat analogously to an admissible global slice. We show that an affine symplectic groupoid admitting an admissible Hartogs slice is Hartogs abelianizable.
Let us again consider the cotangent groupoid T^*G ⇉ 𝔤^* of a connected complex semisimple affine algebraic group G. The universal centralizer witnesses T^*G ⇉ 𝔤^* as Hartogs abelianizable. Main Theorem <ref> therefore applies and yields the Moore–Tachikawa TQFT. Perhaps surprisingly, Main Theorem <ref> turns out to yield a strictly larger class of TQFTs. The following are some details in this direction.
One might seek conditions on a complex affine algebraic group G for T^*G ⇉ 𝔤^* to be Hartogs abelianizable. This leads us to introduce the notion of a Moore–Tachikawa group, i.e. a complex affine algebraic group G with Lie algebra 𝔤 that satisfies the following properties:
* the set 𝔤^*_reg of regular elements has a complement of codimension at least two in 𝔤^*;
* the stabilizer subgroup G_ξ is abelian for all ξ ∈ 𝔤^*_reg;
* the pullback of the cotangent groupoid T^*G to 𝔤^*_reg is abelianizable.
This holds, in particular, if there exists an affine slice S ⊆ 𝔤^*_reg for the coadjoint action on the regular locus.
Results of Kostant <cit.> imply that every complex reductive group is Moore–Tachikawa. On the other hand, Main Theorem <ref> implies that every Moore–Tachikawa group induces a TQFT. These considerations motivate one to find examples non-reductive Moore–Tachikawa groups. We provide one such example in Section <ref>. We also obtain examples of TQFTs arising from certain Slodowy slices, e.g. a Slodowy slice to the minimal nilpotent orbit in 𝔰𝔩_n.
§.§ Organization
Each section begins with a summary of its contents. In Section <ref>, we develop pertinent results and techniques regarding shifted Lagrangians. This leads to Section <ref>, where we define and discuss the 1-shifted Weinstein symplectic “category" 𝐖𝐒 and its completion 𝐖𝐒^♯. We then prove Main Theorem 1 in Section <ref>.
Our attention subsequently turns to the matter of affinizing TQFTs. In Section <ref>, we use scheme-theoretic coisotropic reduction to define and study the category 𝐌𝐓^alg. Main Theorem <ref> is also proved in Section <ref>. Section <ref> is then devoted to the implications of Main Theorem <ref> for constructing the Moore–Tachikawa TQFT. In Sections <ref> and <ref>, we use Main Theorem <ref> to produce TQFTs different from those of Moore–Tachikawa. Section <ref> provides a TQFT for an affine symplectic groupoid integrating a Slodowy slice to the minimal nilpotent orbit in 𝔰𝔩_n. Section <ref> then provides an example of a non-reductive Moore–Tachikawa group.
§.§ Acknowledgements
We thank Ana Bălibanu, Damien Calaque, Zoïk Dubois, and Tom Gannon for highly useful discussions. P.C. and M.M were partially supported by the Simons Foundation Grant MPS-TSM-00002292 and NSERC Discovery Grant RGPIN-2023-04587, respectively.
§ 1-SHIFTED LAGRANGIAN CORRESPONDENCES
In this section, we review background material on quasi-symplectic groupoids, shifted Lagrangians, and Morita equivalence in a differential-geometric context, following <cit.>. We also provide proofs of certain basic, technical facts which we have not been able to find in the literature.
We work in the smooth category throughout this section. If one makes the obvious adjustments to terminology (e.g. replacing real and smooth with complex and holomorphic, respectively), all results hold in the holomorphic category as well.
§.§ Notation and conventions on Lie groupoids
Given a Lie groupoid 𝒢, the manifolds of arrows and objects are denoted 𝒢 and 𝒢_0, respectively. One has source and target maps (s, t) : 𝒢 ⇉ 𝒢_0, multiplication m : 𝒢^{(2)} ⟶ 𝒢 defined on the space 𝒢^{(2)} of composable pairs, inversion i : 𝒢 ⟶ 𝒢, and the identity section u : 𝒢_0 ⟶ 𝒢. The Lie algebroid of 𝒢 is A_𝒢 ≔ ker(s_*)|_{𝒢_0}, together with ρ_𝒢 ≔ t_*|_{A_𝒢} as its anchor map.
Consider a Lie groupoid 𝒢 and a smooth map μ : N ⟶ 𝒢_0. The pullback of 𝒢 by μ is denoted μ^*𝒢; it is a groupoid with arrows (μ^*𝒢) = N ×_{μ,s} 𝒢 ×_{t,μ} N and objects (μ^*𝒢)_0 = N. A sufficient condition for μ^*𝒢 to be a Lie groupoid is for t ∘ pr_𝒢 : N ×_{μ,s} 𝒢 ⟶ 𝒢_0 to be a surjective submersion, where pr_𝒢 : N ×_{μ,s} 𝒢 ⟶ 𝒢 is the canonical map.
The restriction of a groupoid 𝒢 to a subset S ⊆ 𝒢_0 is the groupoid 𝒢|_S ≔ s^{-1}(S) ∩ t^{-1}(S), i.e. the pullback of 𝒢 by the inclusion S ⊆ 𝒢_0.
For an action of a Lie groupoid 𝒢 on a manifold M, the action groupoid is denoted 𝒢 ⋉ M.
It has source map s(g, p) = p and target map t(g, p) = g · p.
Quasi-symplectic groupoids feature prominently in this manuscript. For the sake of completeness and self-containment, we recall their definition below.
A quasi-symplectic groupoid is a triple (𝒢, ω, ϕ) of a Lie groupoid 𝒢, a multiplicative 2-form ω on 𝒢, and a closed 3-form ϕ on 𝒢_0, such that dω = s^*ϕ - t^*ϕ, dim 𝒢 = 2 dim 𝒢_0, and ker ω_x ∩ ker s_* ∩ ker t_* = 0 for all x ∈ 𝒢_0.
The IM-form of (𝒢, ω, ϕ) is the map
σ_ω : A_𝒢 ⟶ T^*𝒢_0, a ↦ u^*(i_aω).
We write 𝒢^- for the quasi-symplectic groupoid (𝒢, -ω, -ϕ).
A symplectic groupoid is a quasi-symplectic groupoid whose 2-form ω is non-degenerate and whose 3-form ϕ is trivial.
The central purpose of quasi-symplectic groupoids is that they are presentations of 1-shifted symplectic stacks <cit.>.
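For orientation, the most basic example used later in this paper can be written out explicitly. The LaTeX snippet below records the cotangent groupoid of a Lie group G as a symplectic groupoid; the formulas are standard, though sign and trivialization conventions vary across references.

% The cotangent groupoid of a Lie group G: a symplectic groupoid
% (omega nondegenerate, phi = 0). Conventions here use the left
% trivialization T^*G \cong G \times \mathfrak{g}^*.
\[
  T^*G \rightrightarrows \mathfrak{g}^*, \qquad
  s(g,\xi) = \xi, \qquad t(g,\xi) = \mathrm{Ad}^*_g\,\xi,
\]
\[
  m\big((g,\mathrm{Ad}^*_h\,\xi),(h,\xi)\big) = (gh,\xi), \qquad
  \omega = \omega_{\mathrm{can}}, \qquad \phi = 0.
\]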
§.§ 1-shifted Lagrangians
Let us recall the following crucial definition.
A 1-shifted Lagrangian on a quasi-symplectic groupoid (𝒢, ω, ϕ) is a Lie groupoid morphism μ : Ł ⟶ 𝒢, together with a 2-form γ on Ł_0, such that the following conditions are satisfied:
*
μ^*ω = s^*γ - t^*γ;
*
μ^*ϕ = -dγ;
*
the map
A_Ł ⟶ {(v, a) ∈ TŁ_0 ⊕ μ^*A_𝒢 : μ_*(v) = ρ_𝒢(a) and i_vγ = μ^*σ_ω(a)}
b ↦ (ρ_Ł(b), μ_*(b))
is an isomorphism.
See <cit.> for the equivalence between this definition and the original one in <cit.>.
The notion of 1-shifted Lagrangians is closely related to that of Hamiltonian 𝒢-spaces, as defined by Xu <cit.>. In more detail, let a quasi-symplectic groupoid 𝒢 act on a manifold M. A 1-shifted Lagrangian structure on the projection 𝒢 ⋉ M → 𝒢 is precisely a 2-form on M for which M is a Hamiltonian 𝒢-space.
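A standard special case, recorded here for orientation (sign and convention choices vary by reference): for the cotangent groupoid of a Lie group G, Xu's notion recovers classical Hamiltonian spaces.

% Hamiltonian spaces for the cotangent groupoid are classical
% Hamiltonian G-spaces with equivariant moment map.
\[
  (\mathcal{G},\omega,\phi)
  = \big(T^*G \rightrightarrows \mathfrak{g}^*,\ \omega_{\mathrm{can}},\ 0\big)
  \;\Longrightarrow\;
  \text{Hamiltonian } \mathcal{G}\text{-spaces}
  \;=\;
  \text{Hamiltonian } G\text{-spaces } (M,\omega_M,\mu\colon M\to\mathfrak{g}^*).
\]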
Given two quasi-symplectic groupoids 𝒢_1 and 𝒢_2, a 1-shifted Lagrangian correspondence from 𝒢_1 to 𝒢_2 is a 1-shifted Lagrangian Ł ⟶ 𝒢_1 × 𝒢_2^-.
We usually denote a 1-shifted Lagrangian correspondence by a span
𝒢_1 ⟵ Ł ⟶ 𝒢_2.
Two 1-shifted Lagrangian correspondences
𝒢_1 ⟵ Ł_1 ⟶ 𝒢_2 and 𝒢_2 ⟵ Ł_2 ⟶ 𝒢_3
are called transverse if their homotopy fibre product Ł_1 ×_{𝒢_2} Ł_2 (see, e.g. <cit.> or <cit.>) is transverse, i.e. the two maps
μ_1 × μ_2 : (Ł_1)_0 × (Ł_2)_0 ⟶ (𝒢_2)_0 × (𝒢_2)_0 and (s, t) : 𝒢_2 ⟶ (𝒢_2)_0 × (𝒢_2)_0
are transverse, where μ_1 : Ł_1 ⟶ 𝒢_2 and μ_2 : Ł_2 ⟶ 𝒢_2 denote the structure morphisms.
In this case,
𝒢_1 ⟵ Ł_1 ×_{𝒢_2} Ł_2 ⟶ 𝒢_3
is a 1-shifted Lagrangian correspondence <cit.>; it is called the composition of Ł_1 and Ł_2, and denoted Ł_2 ∘ Ł_1.
More precisely, if γ_i is the 1-shifted Lagrangian structure on Ł_i for i = 1, 2, then the 1-shifted Lagrangian structure on (<ref>) is given by pr_{Ł_1}^*γ_1 + pr_{Ł_2}^*γ_2 - pr_{𝒢_2}^*ω_2, where pr_{Ł_i} : (Ł_1 ×_{𝒢_2} Ł_2)_0 ⟶ (Ł_i)_0 and
pr_{𝒢_2} : (Ł_1 ×_{𝒢_2} Ł_2)_0 = (Ł_1)_0 ×_{μ_1, s} 𝒢_2 ×_{t, μ_2} (Ł_2)_0 ⟶ 𝒢_2
are the natural projections.
§.§ Morita equivalences
A morphism of Lie groupoids φ : ℋ ⟶ 𝒢 is called a Morita morphism <cit.> (a.k.a. surjective equivalence <cit.> or hypercover <cit.>) if the underlying map on objects φ : ℋ_0 ⟶ 𝒢_0 is a surjective submersion, and the induced morphism ℋ ⟶ (φ)^*𝒢 is a Lie groupoid isomorphism.
A Morita equivalence between two Lie groupoids 𝒢_1 and 𝒢_2 consists of a Lie groupoid ℋ and Morita morphisms 𝒢_1 ⟵ ℋ ⟶ 𝒢_2.
Morita equivalence is an equivalence relation on the set of Lie groupoids.
A weaker notion is that of an essential equivalence, consisting of a Lie groupoid morphism φ : ℋ ⟶ 𝒢 such that t ∘ pr_𝒢 : ℋ_0 ×_{φ,s} 𝒢 ⟶ 𝒢_0 is a surjective submersion and ℋ ⟶ (φ)^*𝒢 is a Lie groupoid isomorphism.
Two Lie groupoids 𝒢_1 and 𝒢_2 are Morita equivalent if and only if there is a Lie groupoid ℋ together with essential equivalences 𝒢_1 ⟵ ℋ ⟶ 𝒢_2 (see e.g. <cit.>).
Hence, Morita morphisms and essential equivalences induce the same equivalence relation on the set of Lie groupoids and can be used interchangeably.
We use mainly Morita morphisms in this paper, following <cit.>, but both approaches are equivalent.
The notion of Morita equivalence gives rise to the notion of a symplectic Morita equivalence between two quasi-symplectic groupoids (𝒢_1, ω_1, ϕ_1) and (𝒢_2, ω_2, ϕ_2), i.e. a Morita equivalence 𝒢_1 ⟵ ℋ ⟶ 𝒢_2 that is also a 1-shifted Lagrangian correspondence.
Two quasi-symplectic groupoids are symplectically Morita equivalent if and only if they present isomorphic 1-shifted symplectic stacks <cit.>.
By <cit.>, the non-degeneracy condition <ref> in Definition <ref> is automatic: a Morita equivalence 𝒢_1 ⟵φ_1 ℋ ⟶φ_2 𝒢_2 is symplectic if and only if there is a 2-form γ on ℋ_0 such that φ_1^*ω_1 - φ_2^*ω_2 = s^*γ - t^*γ and φ_1^*ϕ_1 - φ_2^*ϕ_2 = -dγ.
The following example therefore follows automatically.
For every quasi-symplectic groupoid 𝒢, the identity maps 𝒢 ⟵ 𝒢 ⟶ 𝒢 and trivial 2-form constitute a symplectic Morita equivalence.
An alternative but equivalent <cit.> approach to symplectic Morita equivalence is via Hamiltonian bibundles <cit.>, i.e. two quasi-symplectic groupoids 𝒢_1 and 𝒢_2 are symplectically Morita equivalent if and only if there is a manifold M endowed with a 2-form and commuting Hamiltonian actions of 𝒢_1 and 𝒢_2^- such that M → (𝒢_2)_0 is a principal 𝒢_1-bundle and M → (𝒢_1)_0 is a principal 𝒢_2-bundle.
Such a bibundle gives rise to a symplectic Morita equivalence via the action groupoid (𝒢_1 × 𝒢_2) ⋉ M together with the two projections to 𝒢_1 and 𝒢_2.
A Lagrangian Morita equivalence between two 1-shifted Lagrangians (Ł_1, γ_1) ⟶ (𝒢_1, ω_1, ϕ_1) and (Ł_2, γ_2) ⟶ (𝒢_2, ω_2, ϕ_2) is a 2-commutative diagram

Ł_1 ⟵ψ_1 ℳ ⟶ψ_2 Ł_2
μ_1 ↓    ν ↓    ↓ μ_2
𝒢_1 ⟵φ_1 ℋ ⟶φ_2 𝒢_2

with 2-cells θ_1 : φ_1 ∘ ν ⇒ μ_1 ∘ ψ_1 and θ_2 : φ_2 ∘ ν ⇒ μ_2 ∘ ψ_2, such that (φ_1, φ_2) is a symplectic Morita equivalence with respect to some 2-form δ on ℋ_0, (ψ_1, ψ_2) is a Morita equivalence of Lie groupoids, and
ψ_1^*γ_1 - ψ_2^*γ_2 = ν^*δ - θ_1^*ω_1 + θ_2^*ω_2.
In this case, we say that Ł_1 and Ł_2 are Lagrangially Morita equivalent.
This happens if and only if Ł_1 and Ł_2 present isomorphic 1-shifted Lagrangians on the 1-shifted symplectic stacks presented by 𝒢_1 and 𝒢_2 <cit.>.
In the special case where (𝒢_1, ω_1, ϕ_1) = (𝒢_2, ω_2, ϕ_2), we call this a weak equivalence of 1-shifted Lagrangians. This amounts to declaring two 1-shifted Lagrangians (Ł_1, γ_1) ⟶ (𝒢, ω, ϕ) and (Ł_2, γ_2) ⟶ (𝒢, ω, ϕ) to be weakly equivalent if there is a 2-commutative diagram

Ł_1 ⟵ψ_1 ℳ ⟶ψ_2 Ł_2, with both legs lying over 𝒢 via μ_1 and μ_2 and a 2-cell θ : μ_1 ∘ ψ_1 ⇒ μ_2 ∘ ψ_2,

such that (ψ_1, ψ_2) is a Morita equivalence of Lie groupoids and ψ_2^*γ_2 - ψ_1^*γ_1 = θ^*ω.
Recall that the action of 𝒢 × 𝒢^- on 𝒢 by left and right multiplications is Hamiltonian in the sense of Xu <cit.>.
It follows from Example <ref> that (𝒢 × 𝒢) ⋉ 𝒢 is a symplectic Morita equivalence from 𝒢 to 𝒢.
Under the equivalence between Hamiltonian bibundles and symplectic Morita equivalences (Remark <ref>), 𝒢 ⟵ (𝒢 × 𝒢) ⋉ 𝒢 ⟶ 𝒢 is weakly equivalent to the 1-shifted Lagrangian correspondence 𝒢 ⟵ 𝒢 ⟶ 𝒢 of Example <ref>.
An explicit essential equivalence can be obtained by the morphism 𝒢 ⟶ (𝒢 × 𝒢) ⋉ 𝒢, g ↦ ((g, g), u(s(g))).
If 𝒢_1 and 𝒢_2 are symplectically Morita equivalent quasi-symplectic groupoids, then for every 1-shifted Lagrangian Ł_1 → 𝒢_1 there is a 1-shifted Lagrangian Ł_2 → 𝒢_2 together with a Lagrangian Morita equivalence between them <cit.>.
Moreover, Ł_2 → 𝒢_2 is unique up to weak equivalences.
We call this process of transferring 1-shifted Lagrangians from 𝒢_1 to 𝒢_2 Morita transfer.
The next three results are important for the construction of the 1-shifted Weinstein symplectic category in the next section.
Consider a 2-commutative diagram of Lie groupoid morphisms

ℳ_1 ⟶ ℋ ⟵ ℳ_2
 ↓      ↓      ↓
Ł_1 ⟶ 𝒢 ⟵ Ł_2,

where the vertical arrows are Morita morphisms.
Then Ł_1 ×_𝒢 Ł_2 is transverse if and only if ℳ_1 ×_ℋ ℳ_2 is transverse.
In this case, the induced map ℳ_1 ×_ℋ ℳ_2 ⟶ Ł_1 ×_𝒢 Ł_2 is a Morita morphism.
Introduce the following labels:

ℳ_1 ⟶ν_1 ℋ ⟵ν_2 ℳ_2
ψ_1 ↓    φ ↓    ↓ ψ_2
Ł_1 ⟶μ_1 𝒢 ⟵μ_2 Ł_2,

with 2-cells θ_1 and θ_2, where θ_i is a natural transformation from φν_i to μ_iψ_i, i.e.
s ∘ θ_i = φ ∘ ν_i, t ∘ θ_i = μ_i ∘ ψ_i, and θ_i(t(g_i)) · φ(ν_i(g_i)) = μ_i(ψ_i(g_i)) · θ_i(s(g_i))
for all g_i ∈ ℳ_i and i = 1, 2.
Choose an Ehresmann connection τ on 𝒢 <cit.>; this is a right splitting of the short exact sequence
0 ⟶ t^*A_𝒢 ⟶R T𝒢 ⟶s_* s^*T𝒢_0 ⟶ 0,
where R is right translation.
Let Δ be the corresponding adjoint representation up to homotopy <cit.>; see also <cit.> for an overview in the same notation as this proof. We have the quasi-action of 𝒢 on T𝒢_0 and A_𝒢 given by
Δ_g : T_{s(g)}𝒢_0 ⟶ T_{t(g)}𝒢_0, Δ_g(v) = t_*(τ_g(v))
Δ_g : (A_𝒢)_{s(g)} ⟶ (A_𝒢)_{t(g)}, Δ_g(a) = τ̌(a^L_g)
for all g ∈ 𝒢, where τ̌ is the left splitting of (<ref>) corresponding to τ.
As in <cit.>, we denote the basic curvature of τ by K ∈ Γ(𝒢^{(2)}; Hom(s^*T𝒢_0, t^*A_𝒢)).
We also set
θ̇_i ≔ τ̌ ∘ θ_{i*} : T(ℳ_i)_0 ⟶ ψ_i^*μ_i^*A_𝒢,
for i = 1, 2, so that the θ̇_i provide chain homotopies for the maps of chain complexes from A_{ℳ_i} ⟶ T(ℳ_i)_0 to A_𝒢 ⟶ T𝒢_0 <cit.>.
Suppose that ℳ_1 ×_ℋ ℳ_2 is transverse.
Let (x_1, g, x_2) ∈ (Ł_1 ×_𝒢 Ł_2)_0, i.e. (μ_1(x_1), μ_2(x_2)) = (s(g), t(g)).
Take (u_1, u_2) ∈ T_{s(g)}𝒢_0 × T_{t(g)}𝒢_0.
Let y_i ∈ (ℳ_i)_0 be such that ψ_i(y_i) = x_i.
Then s(g) = μ_1(x_1) = μ_1(ψ_1(y_1)) = t(θ_1(y_1)) and t(g) = μ_2(x_2) = μ_2(ψ_2(y_2)) = t(θ_2(y_2)).
Moreover, s(θ_1(y_1)) = φ(ν_1(y_1)) and s(θ_2(y_2)) = φ(ν_2(y_2)).
It follows that
(ν_1(y_1), θ_2(y_2)^{-1} g θ_1(y_1), ν_2(y_2)) ∈ (φ)^*𝒢.
Since ℋ ≅ (φ)^*𝒢, there is a unique h ∈ ℋ such that s(h) = ν_1(y_1), t(h) = ν_2(y_2), and φ(h) = θ_2(y_2)^{-1} g θ_1(y_1).
Noting that u_1 ∈ T_{s(g)}𝒢_0 = T_{t(θ_1(y_1))}𝒢_0, there exists a unique v_1 ∈ T_{s(θ_1(y_1))}𝒢_0 = T_{s(φ(h))}𝒢_0 such that Δ_{θ_1(y_1)}(v_1) = u_1.
A similar argument reveals the existence of a unique v_2 ∈ T_{s(θ_2(y_2))}𝒢_0 = T_{t(φ(h))}𝒢_0 such that Δ_{θ_2(y_2)}(v_2) = u_2.
We also have φ(ν_i(y_i)) = s(θ_i(y_i)), and know φ to be a submersion. One therefore has w_i ∈ T_{ν_i(y_i)}ℋ_0 such that φ_*(w_i) = v_i.
At the same time, (y_1, h, y_2) ∈ (ℳ_1 ×_ℋ ℳ_2)_0 and (w_1, w_2) ∈ T_{s(h)}ℋ_0 × T_{t(h)}ℋ_0.
Since ℳ_1 ×_ℋ ℳ_2 is transverse, there exist δ_i ∈ T_{y_i}(ℳ_i)_0 and γ ∈ T_hℋ such that
(w_1, w_2) = (ν_{1*}(δ_1), ν_{2*}(δ_2)) + (s_*(γ), t_*(γ)).
Let g̃ ≔ θ_2(y_2)^{-1} g θ_1(y_1).
Using the fact that T_{g̃}𝒢 = R_{g̃}((A_𝒢)_{t(g̃)}) ⊕ τ_{g̃}(T_{s(g̃)}𝒢_0), we can write
φ_*(γ) = a^R_{g̃} + τ_{g̃}(b)
for a ∈ (A_𝒢)_{s(θ_2(y_2))} and b ∈ T_{s(θ_1(y_1))}𝒢_0.
Applying s_* to both sides of (<ref>) yields s_*φ_*(γ) = b.
It follows that
φ_*(γ) = a^R_{g̃} + τ_{g̃}(s_*φ_*(γ)).
Furthermore, applying t_* to both sides of (<ref>) gives
ρ(a) = t_*φ_*(γ) - Δ_{g̃}(s_*φ_*(γ)).
We claim that
(u_1, u_2) = (μ_{1*}(δ̃_1) + s_*(γ̃), μ_{2*}(δ̃_2) + t_*(γ̃)),
where
γ̃ ≔ τ_g(Δ_{θ_1(y_1)}(s_*φ_*(γ))) + (θ̇_1(δ_1))^L_g + (Δ_{θ_2(y_2)}(a) - θ̇_2(δ_2) - K(g, θ_1(y_1))(s_*φ_*(γ)) + K(θ_2(y_2), g̃)(s_*φ_*(γ)))^R_g
δ̃_1 ≔ ψ_{1*}(δ_1)
δ̃_2 ≔ ψ_{2*}(δ_2).
To this end, <cit.> implies that
μ_{1*}(δ̃_1) + s_*(γ̃)
= μ_1ψ_{1*}(δ_1) + Δ_{θ_1(y_1)}(s_*φ_*(γ)) - ρ(θ̇_1(δ_1))
= Δ_{θ_1(y_1)}(φ_*ν_{1*}(δ_1)) + Δ_{θ_1(y_1)}(s_*φ_*(γ))
= Δ_{θ_1(y_1)}(φ_*(w_1))
= u_1.
At the same time, <cit.>, (<ref>), and the equivariance of ρ tell us that
μ_{2*}(δ̃_2) + t_*(γ̃)
= μ_2ψ_{2*}(δ_2) + Δ_g(Δ_{θ_1(y_1)}(s_*φ_*(γ))) + ρ(Δ_{θ_2(y_2)}(a) - θ̇_2(δ_2) - K(g, θ_1(y_1))(s_*φ_*(γ)) + K(θ_2(y_2), g̃)(s_*φ_*(γ)))
= Δ_{θ_2(y_2)}(φ_*ν_{2*}(δ_2)) + Δ_g(Δ_{θ_1(y_1)}(s_*φ_*(γ))) + Δ_{θ_2(y_2)}(t_*φ_*(γ) - Δ_{g̃}(s_*φ_*(γ)))
- ρ(K(g, θ_1(y_1))(s_*φ_*(γ))) + ρ(K(θ_2(y_2), g̃)(s_*φ_*(γ)))
= Δ_{θ_2(y_2)}(φ_*ν_{2*}(δ_2)) + Δ_{θ_2(y_2)}(t_*φ_*(γ))
= Δ_{θ_2(y_2)}(φ_*(w_2))
= u_2.
It follows that Ł_1 ×_𝒢 Ł_2 is transverse.
Suppose now that Ł_1 ×_𝒢 Ł_2 is transverse.
Let (y_1, h, y_2) ∈ (ℳ_1 ×_ℋ ℳ_2)_0, i.e. (ν_1(y_1), ν_2(y_2)) = (s(h), t(h)).
Let (v_1, v_2) ∈ T_{s(h)}ℋ_0 × T_{t(h)}ℋ_0.
Note that for all (u_1, u_2) ∈ (ker φ_*)_{s(h)} × (ker φ_*)_{t(h)}, we have (u_1, u_2) = (s_*(w), t_*(w)), where w = (u_1, 0, u_2) ∈ T_hℋ ≅ T_{s(h)}ℋ_0 ×_{T𝒢_0} T_{φ(h)}𝒢 ×_{T𝒢_0} T_{t(h)}ℋ_0.
It therefore suffices to show that
φ_*(v_1) = φ_*(ν_{1*}(u_1) + s_*(w)) and φ_*(v_2) = φ_*(ν_{2*}(u_2) + t_*(w))
for some u_i ∈ T_{y_i}(ℳ_i)_0 and w ∈ T_hℋ.
In other words, it suffices to show that
φ_*(v_1 - ν_{1*}(u_1)) = s_*(γ) and φ_*(v_2 - ν_{2*}(u_2)) = t_*(γ),
for some u_i ∈ T_{y_i}(ℳ_i)_0 and γ ∈ T_{φ(h)}𝒢.
Note that t(φ(h)) = φ(ν_2(y_2)) = s(θ_2(y_2)) and s(φ(h)) = φ(ν_1(y_1)) = s(θ_1(y_1)), so that g ≔ θ_2(y_2) · φ(h) · θ_1(y_1)^{-1} ∈ 𝒢 is well-defined.
Let x_1 ≔ ψ_1(y_1) and x_2 ≔ ψ_2(y_2), noting that (μ_1(x_1), μ_2(x_2)) = (s(g), t(g)).
We then have (φ_*(v_1), φ_*(v_2)) ∈ T_{s(θ_1(y_1))}𝒢_0 × T_{s(θ_2(y_2))}𝒢_0. It follows that (Δ_{θ_1(y_1)}(φ_*(v_1)), Δ_{θ_2(y_2)}(φ_*(v_2))) ∈ T_{s(g)}𝒢_0 × T_{t(g)}𝒢_0.
Since Ł_1 ×_𝒢 Ł_2 is transverse, there exist ũ_1 ∈ T_{x_1}(Ł_1)_0, ũ_2 ∈ T_{x_2}(Ł_2)_0, and w ∈ T_g𝒢 such that
(Δ_{θ_1(y_1)}(φ_*(v_1)), Δ_{θ_2(y_2)}(φ_*(v_2))) = (μ_{1*}(ũ_1) + s_*(w), μ_{2*}(ũ_2) + t_*(w)).
Choose u_i ∈ T_{y_i}(ℳ_i)_0 such that ψ_{i*}(u_i) = ũ_i.
As in the first part, we can write
w = a^R_g + τ_g(s_*(w)),
where a ∈ (A_𝒢)_{t(g)} and ρ(a) = t_*(w) - Δ_g(s_*(w)).
Consider the element of T_{φ(h)}𝒢 defined by
γ ≔ τ_{φ(h)}(Δ_{θ_1(y_1)^{-1}}(s_*(w)))
+ (Δ_{θ_2(y_2)^{-1}}(a + θ̇_2(u_2)) + K(θ_2(y_2)^{-1}, g)(s_*(w)) - K(φ(h), θ_1(y_1)^{-1})(s_*(w)))^R_{φ(h)}
- (K(θ_2(y_2)^{-1}, θ_2(y_2))(φ_*(v_2 - ν_{2*}(u_2))))^R_{φ(h)}
+ (K(θ_1(y_1)^{-1}, θ_1(y_1))(φ_*(v_1 - ν_{1*}(u_1))) - Δ_{θ_1(y_1)^{-1}}(θ̇_1(u_1)))^L_{φ(h)}.
We have
s_*(γ) = Δ_{θ_1(y_1)^{-1}}(s_*(w)) - ρ(K(θ_1(y_1)^{-1}, θ_1(y_1))(φ_*(v_1 - ν_{1*}(u_1)))) + ρ(Δ_{θ_1(y_1)^{-1}}(θ̇_1(u_1)))
= Δ_{θ_1(y_1)^{-1}}(s_*(w)) - Δ_{θ_1(y_1)^{-1}}Δ_{θ_1(y_1)}(φ_*(v_1 - ν_{1*}(u_1))) + φ_*(v_1 - ν_{1*}(u_1)) + Δ_{θ_1(y_1)^{-1}}(ρ(θ̇_1(u_1)))
= Δ_{θ_1(y_1)^{-1}}(s_*(w)) - Δ_{θ_1(y_1)^{-1}}Δ_{θ_1(y_1)}(φ_*(v_1 - ν_{1*}(u_1))) + φ_*(v_1 - ν_{1*}(u_1)) + Δ_{θ_1(y_1)^{-1}}(μ_{1*}ψ_{1*}(u_1) - Δ_{θ_1(y_1)}(φ_*ν_{1*}(u_1)))
= Δ_{θ_1(y_1)^{-1}}Δ_{θ_1(y_1)}(φ_*(v_1)) - Δ_{θ_1(y_1)^{-1}}Δ_{θ_1(y_1)}(φ_*(v_1 - ν_{1*}(u_1))) + φ_*(v_1 - ν_{1*}(u_1)) - Δ_{θ_1(y_1)^{-1}}Δ_{θ_1(y_1)}(φ_*ν_{1*}(u_1))
= φ_*(v_1 - ν_{1*}(u_1))
and
t_*(γ) = Δ_{φ(h)}(Δ_{θ_1(y_1)^{-1}}(s_*(w))) + Δ_{θ_2(y_2)^{-1}}(t_*(w) - Δ_g(s_*(w)) + μ_{2*}ψ_{2*}(u_2) - Δ_{θ_2(y_2)}(φ_*ν_{2*}(u_2)))
+ Δ_{θ_2(y_2)^{-1}}(Δ_g(s_*(w))) - Δ_{θ_2(y_2)^{-1}g}(s_*(w)) - Δ_{φ(h)}(Δ_{θ_1(y_1)^{-1}}(s_*(w))) + Δ_{φ(h)θ_1(y_1)^{-1}}(s_*(w))
- Δ_{θ_2(y_2)^{-1}}Δ_{θ_2(y_2)}(φ_*(v_2 - ν_{2*}(u_2))) + φ_*(v_2 - ν_{2*}(u_2))
= φ_*(v_2 - ν_{2*}(u_2)).
It follows that ℳ_1 ×_ℋ ℳ_2 is transverse.
In this case, we have a Lie groupoid morphism
ℳ_1 ×_ℋ ℳ_2 ⟶ Ł_1 ×_𝒢 Ł_2
(l_1, h, l_2) ↦ (ψ_1(l_1), θ_2(s(l_2)) · φ(h) · θ_1(s(l_1))^{-1}, ψ_2(l_2)).
To see that this is a Morita morphism, we first show that the map on objects is a surjective submersion. This is accomplished by establishing the existence of local sections. We first note that ℋ ≅ (φ)^*𝒢. It follows that (ℳ_1 ×_ℋ ℳ_2)_0 = (ℳ_1)_0 ×_{φν_1, s} 𝒢 ×_{t, φν_2} (ℳ_2)_0.
The map on objects must therefore take the form
φ̃ : (ℳ_1)_0 ×_{φν_1, s} 𝒢 ×_{t, φν_2} (ℳ_2)_0 ⟶ (Ł_1)_0 ×_{μ_1, s} 𝒢 ×_{t, μ_2} (Ł_2)_0
(y_1, g, y_2) ↦ (ψ_1(y_1), θ_2(y_2) · g · θ_1(y_1)^{-1}, ψ_2(y_2)).
Let σ_i be local sections of ψ_i : (ℳ_i)_0 ⟶ (Ł_i)_0.
The map (x_1, g, x_2) ↦ (σ_1(x_1), θ_2(σ_2(x_2))^{-1} g θ_1(σ_1(x_1)), σ_2(x_2)) is a local section of (<ref>).
A straightforward computation then shows that the induced map ℳ_1 ×_ℋ ℳ_2 ⟶ φ̃^*(Ł_1 ×_𝒢 Ł_2) is a diffeomorphism.
Consider 1-shifted Lagrangian correspondences
𝒢_1 ⟵ Ł_1 ⟶ 𝒢_2, 𝒢_2 ⟵ Ł_2 ⟶ 𝒢_3
and
𝒢_1' ⟵ Ł_1' ⟶ 𝒢_2', 𝒢_2' ⟵ Ł_2' ⟶ 𝒢_3',
where 𝒢_1 ⟵ Ł_1 ⟶ 𝒢_2 is Lagrangially Morita equivalent to 𝒢_1' ⟵ Ł_1' ⟶ 𝒢_2', and 𝒢_2 ⟵ Ł_2 ⟶ 𝒢_3 is Lagrangially Morita equivalent to 𝒢_2' ⟵ Ł_2' ⟶ 𝒢_3'.
Then (<ref>) is transverse if and only if (<ref>) is transverse.
In this case, the compositions
𝒢_1 ⟵ Ł_1 ×_{𝒢_2} Ł_2 ⟶ 𝒢_3 and 𝒢_1' ⟵ Ł_1' ×_{𝒢_2'} Ł_2' ⟶ 𝒢_3'
are Lagrangially Morita equivalent.
Lemma <ref> implies the following: (<ref>) is transverse if and only if (<ref>) is transverse, in which case the homotopy fibre products Ł_1 ×_{𝒢_2} Ł_2 and Ł_1' ×_{𝒢_2'} Ł_2' are Morita equivalent as Lie groupoids.
It remains to check compatibility with the 1-shifted Lagrangian structures.
We have a 2-commutative diagram

(𝒢_1, ω_1, ϕ_1) ⟵μ_11 (Ł_1, γ_1) ⟶μ_12 (𝒢_2, ω_2, ϕ_2) ⟵μ_22 (Ł_2, γ_2) ⟶μ_23 (𝒢_3, ω_3, ϕ_3)
(ℋ_1, δ_1) ⟵ν_11 ℳ_1 ⟶ν_12 (ℋ_2, δ_2) ⟵ν_22 ℳ_2 ⟶ν_23 (ℋ_3, δ_3)
(𝒢_1', ω_1', ϕ_1') ⟵μ_11' (Ł_1', γ_1') ⟶μ_12' (𝒢_2', ω_2', ϕ_2') ⟵μ_22' (Ł_2', γ_2') ⟶μ_23' (𝒢_3', ω_3', ϕ_3')

of Lagrangian Morita equivalences, in which the middle row maps to the top row via the Morita morphisms φ_i : ℋ_i ⟶ 𝒢_i and ψ_i : ℳ_i ⟶ Ł_i with 2-cells θ_11, θ_12, θ_22, θ_23, and to the bottom row via φ_i' : ℋ_i ⟶ 𝒢_i' and ψ_i' : ℳ_i ⟶ Ł_i' with 2-cells θ_11', θ_12', θ_22', θ_23'.
In other words, the vertical arrows are Morita morphisms and the following hold:
φ_i^*ω_i - φ_i'^*ω_i' = s^*δ_i - t^*δ_i for i = 1, 2, 3;
φ_i^*ϕ_i - φ_i'^*ϕ_i' = -dδ_i for i = 1, 2, 3;
ψ_1^*γ_1 - ψ_1'^*γ_1' = ν_11^*δ_1 - ν_12^*δ_2 - θ_11^*ω_1 + θ_12^*ω_2 + θ_11'^*ω_1' - θ_12'^*ω_2';
ψ_2^*γ_2 - ψ_2'^*γ_2' = ν_22^*δ_2 - ν_23^*δ_3 - θ_22^*ω_2 + θ_23^*ω_3 + θ_22'^*ω_2' - θ_23'^*ω_3'.
Taking homotopy fibre products in (<ref>) yields the 2-commutative diagram

(𝒢_1, ω_1, ϕ_1) ⟵ (Ł_1 ×_{𝒢_2} Ł_2, pr_{Ł_1}^*γ_1 + pr_{Ł_2}^*γ_2) ⟶ (𝒢_3, ω_3, ϕ_3)
(ℋ_1, δ_1) ⟵ ℳ_1 ×_{ℋ_2} ℳ_2 ⟶ (ℋ_3, δ_3)
(𝒢_1', ω_1', ϕ_1') ⟵ (Ł_1' ×_{𝒢_2'} Ł_2', pr_{Ł_1'}^*γ_1' + pr_{Ł_2'}^*γ_2') ⟶ (𝒢_3', ω_3', ϕ_3'),

with horizontal legs μ_11 ∘ pr_{Ł_1}, μ_23 ∘ pr_{Ł_2}, ν_11 ∘ pr_{ℳ_1}, ν_23 ∘ pr_{ℳ_2}, μ_11' ∘ pr_{Ł_1'}, μ_23' ∘ pr_{Ł_2'}, vertical maps φ_1, φ_3, φ_1', φ_3', ψ, ψ', and 2-cells θ_11 ∘ pr_{ℳ_1}, θ_23 ∘ pr_{ℳ_2}, θ_11' ∘ pr_{ℳ_1}, θ_23' ∘ pr_{ℳ_2}, where
ψ(m_1, h_2, m_2) = (ψ_1(m_1), θ_22(s(m_2))^{-1} · φ_2(h_2) · θ_12(s(m_1)), ψ_2(m_2))
ψ'(m_1, h_2, m_2) = (ψ_1'(m_1), θ_22'(s(m_2))^{-1} · φ_2'(h_2) · θ_12'(s(m_1)), ψ_2'(m_2))
for all (m_1, h_2, m_2) ∈ ℳ_1 ×_{ℋ_2} ℳ_2.
By Lemma <ref>, ψ and ψ' are Morita morphisms.
Since pr_{Ł_i} ∘ ψ = ψ_i ∘ pr_{ℳ_i} and pr_{Ł_i'} ∘ ψ' = ψ_i' ∘ pr_{ℳ_i} for i = 1, 2, we have
ψ^*(pr_{Ł_1}^*γ_1 + pr_{Ł_2}^*γ_2)
-
ψ'^*(pr_{Ł_1'}^*γ_1' + pr_{Ł_2'}^*γ_2')
=
pr_{ℳ_1}^*(ψ_1^*γ_1 - ψ_1'^*γ_1') + pr_{ℳ_2}^*(ψ_2^*γ_2 - ψ_2'^*γ_2')
=
pr_{ℳ_1}^*(ν_11^*δ_1 - ν_12^*δ_2 - θ_11^*ω_1 + θ_12^*ω_2 + θ_11'^*ω_1' - θ_12'^*ω_2')
+ pr_{ℳ_2}^*(ν_22^*δ_2 - ν_23^*δ_3 - θ_22^*ω_2 + θ_23^*ω_3 + θ_22'^*ω_2' - θ_23'^*ω_3')
=
(ν_11 ∘ pr_{ℳ_1})^*δ_1 - (ν_23 ∘ pr_{ℳ_2})^*δ_3
- (θ_11 ∘ pr_{ℳ_1})^*ω_1
+ (θ_23 ∘ pr_{ℳ_2})^*ω_3
+ (θ_11' ∘ pr_{ℳ_1})^*ω_1'
- (θ_23' ∘ pr_{ℳ_2})^*ω_3';
the second equality follows from (<ref>) and (<ref>), and the last equality follows from the fact that ν_12 ∘ pr_{ℳ_1} = ν_22 ∘ pr_{ℳ_2} on ℳ_1 ×_{ℋ_2} ℳ_2.
We conclude that (<ref>) is a Lagrangian Morita equivalence.
Let
𝒢_1 ⟵μ_1 Ł ⟶μ_2 𝒢_2
be a 1-shifted Lagrangian correspondence. Regard 𝒢_1 ⟵ 𝒢_1 ⟶ 𝒢_1 as a 1-shifted Lagrangian correspondence via the trivial 2-form on (𝒢_1)_0.
Then the pair
(𝒢_1 ⟵ 𝒢_1 ⟶ 𝒢_1, 𝒢_1 ⟵ Ł ⟶ 𝒢_2)
is transverse, and the composition
𝒢_1 ⟵ 𝒢_1 ×_{𝒢_1} Ł ⟶ 𝒢_2
is weakly equivalent to (<ref>).
The analogous statement for composition on the right by 𝒢_2 ⟵ 𝒢_2 ⟶ 𝒢_2 also holds.
Denote the quasi-symplectic structure on 𝒢_i by (ω_i, ϕ_i) for i = 1, 2, and the 1-shifted Lagrangian structure on (<ref>) by γ.
Let us also denote the two morphisms in (<ref>) by μ_1 : Ł ⟶ 𝒢_1 and μ_2 : Ł ⟶ 𝒢_2.
To establish composability in (<ref>), we need to show that the maps
id × μ_1 : (𝒢_1)_0 × Ł_0 ⟶ (𝒢_1)_0 × (𝒢_1)_0 and (s, t) : 𝒢_1 ⟶ (𝒢_1)_0 × (𝒢_1)_0
are transverse.
Let g ∈ 𝒢_1 be such that (s(g), t(g)) = (x, μ_1(y)) for some (x, y) ∈ (𝒢_1)_0 × Ł_0, and let (u, v) ∈ T_{s(g)}(𝒢_1)_0 × T_{t(g)}(𝒢_1)_0.
Since the target map of a Lie groupoid is a submersion, we can write v = t_*(w) for some w ∈ T_g𝒢_1.
It follows that (u, v) = (u - s_*(w), μ_{1*}(0)) + (s_*(w), t_*(w)), proving transversality.
We also get that the projection
pr_Ł : 𝒢_1 ×_{𝒢_1} Ł ⟶ Ł
is a Morita morphism, fitting into a 2-commutative diagram in which 𝒢_1 ×_{𝒢_1} Ł maps to 𝒢_1 and 𝒢_2 via its two legs, pr_Ł maps it to Ł with legs μ_1 and μ_2, and the 2-cell is
pr_{𝒢_1} : (𝒢_1 ×_{𝒢_1} Ł)_0 = (𝒢_1)_0 ×_{s} 𝒢_1 ×_{t, μ_1} Ł_0 ⟶ 𝒢_1,
viewed as a natural transformation pr_{𝒢_1} ⇒ μ_1 ∘ pr_Ł.
In the framework of weak equivalence of 1-shifted Lagrangians as in (<ref>), we can write (<ref>) with ψ_1 the identity on 𝒢_1 ×_{𝒢_1} Ł, ψ_2 = pr_Ł : 𝒢_1 ×_{𝒢_1} Ł ⟶ Ł, both regarded as 1-shifted Lagrangian correspondences to 𝒢_1 × 𝒢_2^-, and 2-cell pr_{𝒢_1} × (u ∘ μ_2 ∘ pr_Ł).
By the definition of composition, the 1-shifted Lagrangian structure on the composition (<ref>) is given by pr_{(𝒢_1)_0}^*0 + pr_Ł^*γ - pr_{𝒢_1}^*ω_1.
The statement that (<ref>) is a Lagrangian Morita equivalence then amounts to the identity
pr_Ł^*γ - (pr_{(𝒢_1)_0}^*0 + pr_Ł^*γ - pr_{𝒢_1}^*ω_1) = (pr_{𝒢_1} × (u ∘ μ_2 ∘ pr_Ł))^*(pr_{𝒢_1}^*ω_1 - pr_{𝒢_2}^*ω_2).
This identity follows from the fact that u^*ω_2 = 0 for a multiplicative form ω_2.
§.§ Strong fibre products and transversality
Consider 1-shifted Lagrangian correspondences
𝒢_1 ⟵ Ł_1 ⟶ 𝒢_2 and 𝒢_2 ⟵ Ł_2 ⟶ 𝒢_3.
It is sometimes useful to consider the strong fibre product Ł_1 ×^str_{𝒢_2} Ł_2, defined as the standard set-theoretical fibre product on both arrows and objects.
We say that the 1-shifted Lagrangian correspondences (<ref>) are strongly transverse if the fibre product on arrows is transverse.
As in <cit.>, this implies that the fibre product on objects is also transverse, and that the vector bundle morphisms
A_{Ł_1} ⟶ A_{𝒢_2} ⟵ A_{Ł_2}
are transverse.
It follows that the strong fibre product is a Lie groupoid endowed with the structure of a 1-shifted Lagrangian correspondence
𝒢_1 ⟵ Ł_1 ×^str_{𝒢_2} Ł_2 ⟶ 𝒢_3
with respect to pr_{Ł_1}^*γ_1 + pr_{Ł_2}^*γ_2, where γ_i are the 1-shifted Lagrangian structures on Ł_1 and Ł_2, respectively <cit.>.
The following proposition shows that the homotopy and strong fibre products are equivalent in many situations.
Suppose that the 1-shifted Lagrangian correspondences (<ref>) are transverse and strongly transverse.
Suppose also that the map
Ł_1 ×_{μ_12∘t, μ_22∘t} Ł_2 ⟶ (Ł_1)_0 ×_{μ_12, s} 𝒢_2 ×_{t, μ_22} (Ł_2)_0
(l_1, l_2) ↦ (s(l_1), μ_22(l_2)^{-1} μ_12(l_1), s(l_2)),
where μ_12 : Ł_1 ⟶ 𝒢_2 and μ_22 : Ł_2 ⟶ 𝒢_2 denote the structure morphisms, is a surjective submersion.
Then the homotopy fibre product Ł_1 ×_{𝒢_2} Ł_2 and the strong fibre product Ł_1 ×^str_{𝒢_2} Ł_2 are weakly equivalent as 1-shifted Lagrangian correspondences from 𝒢_1 to 𝒢_3.
We have a commutative diagram in which ψ : Ł_1 ×^str_{𝒢_2} Ł_2 ⟶ Ł_1 ×_{𝒢_2} Ł_2 intertwines the two structure morphisms to 𝒢_1 × 𝒢_3^-, where
ψ : Ł_1 ×_{μ_12, μ_22} Ł_2 ⟶ Ł_1 ×_{μ_12, s} 𝒢_2 ×_{t, μ_22} Ł_2
(l_1, l_2) ↦ (l_1, u(μ_12(s(l_1))), l_2) on arrows, and
ψ : (x_1, x_2) ↦ (x_1, u(μ_12(x_1)), x_2) on objects.
Note that ψ^*(pr_{Ł_1}^*γ_1 + pr_{Ł_2}^*γ_2 - pr_{𝒢_2}^*ω_2) = pr_{Ł_1}^*γ_1 + pr_{Ł_2}^*γ_2, as u^*ω_2 = 0.
It suffices to check that ψ is an essential equivalence.
This amounts to checking the following.
*
The map
t ∘ pr_{Ł_1 ×_{𝒢_2} Ł_2} :
(Ł_1 ×^str_{𝒢_2} Ł_2)_0 ×_{ψ, s} (Ł_1 ×_{𝒢_2} Ł_2) ⟶ (Ł_1 ×_{𝒢_2} Ł_2)_0
((x_1, x_2), (l_1, g, l_2)) ↦ (t(l_1), μ_22(l_2) · g · μ_12(l_1)^{-1}, t(l_2))
is a surjective submersion.
*
The map (Ł_1 ×^str_{𝒢_2} Ł_2) ⟶ (ψ)^*(Ł_1 ×_{𝒢_2} Ł_2) is a diffeomorphism.
To show <ref>, note that we have a map
Ł_1 ×_{μ_12∘t, μ_22∘t} Ł_2 ⟶ (Ł_1 ×^str_{𝒢_2} Ł_2)_0 ×_{ψ, s} (Ł_1 ×_{𝒢_2} Ł_2)
(l_1, l_2) ↦ ((s(l_1), s(l_2)), (l_1, u(μ_12(s(l_1))), l_2)),
whose composition with (<ref>) is (<ref>).
Since (<ref>) is a surjective submersion, so is (<ref>).
For <ref>, we need to check that the map
Ł_1 ×_{μ_12, μ_22} Ł_2 ⟶ (Ł_1 ×^str_{𝒢_2} Ł_2)_0 ×_{ψ, s} (Ł_1 ×_{𝒢_2} Ł_2) ×_{t, ψ} (Ł_1 ×^str_{𝒢_2} Ł_2)_0
(l_1, l_2) ↦ ((s(l_1), s(l_2)), (l_1, u(μ_12(s(l_1))), l_2), (t(l_1), t(l_2)))
is a diffeomorphism.
But it has an explicit inverse, given by ((x_1, x_2), (l_1, g, l_2), (y_1, y_2)) ↦ (l_1, l_2).
§ THE 1-SHIFTED WEINSTEIN SYMPLECTIC CATEGORY
We begin this section with a rough overview of the Weinstein symplectic “category" and its Wehrheim–Woodward completion. This gives context for our subsequent definition of the 1-shifted Weinstein symplectic “category" 𝐖𝐒. Using the basic approach of Wehrheim–Woodward to the Weinstein symplectic category, we complete 𝐖𝐒 to a symmetric monoidal category 𝐖𝐒^♯. One may work in the smooth or holomorphic categories, as with the previous section.
§.§ The Weinstein symplectic “category"
Two morphisms in a category can be composed if and only if the source of one coincides with the target of the other. By weakening this to a necessary condition for composing morphisms, one obtains the definition of a “category”.[Some authors would instead call this a precategory.] Two morphisms in a “category" are called composable if their composition is defined. A prominent instance of this discussion is Weinstein's symplectic “category”<cit.>; its objects are symplectic manifolds, and its morphisms are Lagrangian correspondences. While one can compose two Lagrangian correspondences as relations between sets, the result need not be a Lagrangian correspondence. These correspondences are called composable if they satisfy transversality conditions sufficient to ensure that their set-theoretic composition is a Lagrangian correspondence.
While the Weinstein symplectic “category” is not a genuine category, Wehrheim–Woodward <cit.> show that it can be completed into one by defining morphisms as sequences of composable morphisms up to a certain equivalence relation. We implement a similar approach for the 1-shifted version of the Weinstein symplectic “category".
§.§ The 1-shifted Weinstein symplectic “category”.
We define a 1-shifted version of Weinstein's symplectic “category”, denoted 𝐖𝐒, as follows.
Quasi-symplectic groupoids constitute the objects of 𝐖𝐒. A morphism from 𝒢_1 to 𝒢_2 in 𝐖𝐒 is a weak equivalence class of 1-shifted Lagrangian correspondences from 𝒢_1 to 𝒢_2.
We say that two morphisms
𝒢_1 ⟵ Ł_1 ⟶ 𝒢_2 and 𝒢_2 ⟵ Ł_2 ⟶ 𝒢_3
in 𝐖𝐒 are composable if their homotopy fibre product Ł_1 ×_{𝒢_2} Ł_2 is transverse.
In this case, we define the composition Ł_2 ∘ Ł_1 as the weak equivalence class of Ł_1 ×_{𝒢_2} Ł_2.
Proposition <ref> implies that this morphism does not depend on the representatives chosen for the weak equivalence classes being composed.
Let 𝒢 be a quasi-symplectic groupoid. The identity morphism id_𝒢 : 𝒢 ⟶ 𝒢 is the canonical 1-shifted Lagrangian correspondence 𝒢 ⟵ 𝒢 ⟶ 𝒢 (Example <ref>). The content of this statement is that every morphism Ł : 𝒢 ⟶ ℋ in 𝐖𝐒 is composable with id_𝒢 and id_ℋ, and Ł ∘ id_𝒢 ≃ Ł ≃ id_ℋ ∘ Ł; see Lemma <ref>.
§.§ Extension of 𝐖𝐒 to a symmetric monoidal category 𝐖𝐒^♯
One may extend 𝐖𝐒 to a symmetric monoidal category 𝐖𝐒^♯ in two ways. The first is a manifold-theoretic extension afforded by Wehrheim–Woodward; see <cit.> and <cit.>. The second, due to Calaque <cit.>, is algebro-geometric; one replaces algebraic quasi-symplectic groupoids by their associated 1-shifted symplectic stacks, and uses derived fibre products to form a symmetric monoidal category.
The precise way in which we extend 𝐖𝐒 to a category 𝐖𝐒^♯ is irrelevant since all computations will be done on composable morphisms in 𝐖𝐒.
We therefore only need to prove the existence of an extension.
We do this in the differential-geometric context by adapting the Wehrheim–Woodward approach, as we now explain.
The Wehrheim–Woodward approach is as follows.
An object of 𝐖𝐒^♯ is a quasi-symplectic groupoid.
A morphism from 𝒢 to 𝒢' is a sequence of quasi-symplectic groupoids 𝒢_0, 𝒢_1, …, 𝒢_r with 𝒢_0 = 𝒢 and 𝒢_r = 𝒢', together with 1-shifted Lagrangian correspondences 𝒢_{i-1} ⟵ Ł_i ⟶ 𝒢_i for i = 1, …, r, up to the equivalence relation generated by
(…, Ł_i, Ł_{i+1}, …) ∼ (…, Ł_i ×_{𝒢_i} Ł_{i+1}, …)
if Ł_i ×_{𝒢_i} Ł_{i+1} is transverse, and
(…, Ł_i, …) ∼ (…, Ł_i', …)
if Ł_i and Ł_i' are weakly equivalent.
As in <cit.>, this forms a category with composition given by the concatenation of sequences.
The category 𝐖𝐒^♯ turns out to carry a symmetric monoidal structure
⊗ : 𝐖𝐒^♯ × 𝐖𝐒^♯ ⟶ 𝐖𝐒^♯.
It is given by the Cartesian product on the level of objects.
For two morphisms
[Ł_1, …, Ł_r] : 𝒢_0 ⟶ 𝒢_r and [ℳ_1, …, ℳ_s] : ℋ_0 ⟶ ℋ_s,
given by 1-shifted Lagrangian correspondences 𝒢_{i-1} ⟵ Ł_i ⟶ 𝒢_i and ℋ_{j-1} ⟵ ℳ_j ⟶ ℋ_j,
one takes the tensor product as follows. Augment the morphism of smallest length with identity morphisms on its right, until its length matches that of the other morphism. Proceed to take Cartesian products of quasi-symplectic groupoids 𝒢_i × ℋ_i and 1-shifted Lagrangians Ł_i × ℳ_i. Using the more concise notation [Ł_1, …, Ł_r] and [ℳ_1, …, ℳ_s] for the morphisms, assuming r ≤ s, and using juxtaposition to indicate Cartesian products, one has
[Ł_1, …, Ł_r] ⊗ [ℳ_1, …, ℳ_s] = [Ł_1ℳ_1, …, Ł_rℳ_r, 𝒢_rℳ_{r+1}, …, 𝒢_rℳ_s],
where 𝒢_r stands for the identity correspondence on 𝒢_r.
The right-hand side of (<ref>) does not depend on the representatives chosen for the two morphisms being composed.
The right-hand side of (<ref>) is clearly invariant under (<ref>).
To establish invariance under (<ref>), let 1 ≤ i < r be such that Ł_i and Ł_{i+1} are transverse. It follows that
[Ł_1, …, Ł_i, Ł_{i+1}, …, Ł_r]
=
[Ł_1, …, Ł_i ∘ Ł_{i+1}, …, Ł_r].
We need to check that both presentations of this morphism yield the same tensor product with [ℳ_1, …, ℳ_s], i.e. that
[Ł_1ℳ_1, …, Ł_iℳ_i, Ł_{i+1}ℳ_{i+1}, …, Ł_rℳ_r, 𝒢_rℳ_{r+1}, …, 𝒢_rℳ_s]
= [Ł_1ℳ_1, …, (Ł_i ∘ Ł_{i+1})ℳ_i, Ł_{i+2}ℳ_{i+1}, …, Ł_rℳ_{r-1}, 𝒢_rℳ_r, …, 𝒢_rℳ_s].
By inserting identities, (<ref>) is equal to
[Ł_1ℳ_1, …, Ł_iℳ_i, Ł_{i+1}ℋ_i, 𝒢_{i+1}ℳ_{i+1}, Ł_{i+2}ℋ_{i+1}, …, 𝒢_{r-1}ℳ_{r-1}, Ł_rℋ_{r-1}, 𝒢_rℳ_r, …, 𝒢_rℳ_s];
composing the consecutive pairs (Ł_iℳ_i, Ł_{i+1}ℋ_i), (𝒢_{i+1}ℳ_{i+1}, Ł_{i+2}ℋ_{i+1}), …, (𝒢_{r-1}ℳ_{r-1}, Ł_rℋ_{r-1}) yields (<ref>).
Subsequently performing composition in the other set of pairs (Ł_{i+1}ℋ_i, 𝒢_{i+1}ℳ_{i+1}), …, (Ł_rℋ_{r-1}, 𝒢_rℳ_r) — giving Ł_{i+1}ℳ_{i+1}, …, Ł_rℳ_r —
we get back (<ref>).
This completes the proof.
The unit object in 𝐖𝐒^♯ is a point ⋆, viewed as a quasi-symplectic groupoid. Given quasi-symplectic groupoids 𝒢, ℋ, and 𝒦, the associator α_{𝒢,ℋ,𝒦} : 𝒢 × (ℋ × 𝒦) ⟶ (𝒢 × ℋ) × 𝒦 is the groupoid 𝒢 × ℋ × 𝒦, regarded as a 1-shifted Lagrangian correspondence from 𝒢 × (ℋ × 𝒦) to (𝒢 × ℋ) × 𝒦.
The left identity λ_𝒢 : ⋆ × 𝒢 ⟶ 𝒢 is 𝒢, as a 1-shifted Lagrangian correspondence. A similar description applies for the right identity ρ_𝒢 : 𝒢 × ⋆ ⟶ 𝒢.
The preceding discussion makes it clear that (<ref>) is a monoidal structure on 𝐖𝐒^♯.
To address the symmetric structure, let 𝒢 and ℋ be quasi-symplectic groupoids. The braiding 𝒢 × ℋ ⟶ ℋ × 𝒢 is the groupoid 𝒢 × ℋ with trivial 2-form on its base and the obvious morphisms to 𝒢 × ℋ and ℋ × 𝒢.
The axioms of a symmetric monoidal category are immediate.
§ TQFTS VALUED IN THE 1-SHIFTED WEINSTEIN SYMPLECTIC CATEGORY
This section is largely concerned with proving Main Theorem <ref>. We begin by recalling the equivalence between 2-dimensional TQFTs in a symmetric monoidal category and commutative Frobenius objects in the same category. This framework allows us to prove that every abelian symplectic groupoid determines a commutative Frobenius object in 𝐖𝐒^♯, whose product is induced by groupoid multiplication. To extend this result to a larger class of quasi-symplectic groupoids, we call a quasi-symplectic groupoid abelianizable if it is Morita equivalent to an abelian symplectic groupoid. We then use a notion of Morita transfer to prove that every abelianizable quasi-symplectic groupoid determines a commutative Frobenius object in 𝐖𝐒^♯. To conclude, we prove that a quasi-symplectic groupoid admitting an admissible global slice is necessarily abelianizable. All constructions work over ℝ or ℂ, as in the previous two sections.
§.§ 2-dimensional TQFTs and commutative Frobenius objects
Let us first recall the equivalence between 2-dimensional TQFTs and commutative Frobenius objects. A standard reference for this material is the book <cit.>; see also <cit.>, <cit.>, and <cit.>.
Let Cob_2 be the category of 2-dimensional cobordisms, i.e. objects are compact 1-dimensional oriented manifolds and morphisms are cobordisms between them.
Suppose that (𝐂, ⊗, I, B) is a symmetric monoidal category, where ⊗ is the monoidal product, I is the unit object, and B is the braiding.
A 2-dimensional topological quantum field theory valued in 𝐂 is a symmetric monoidal functor
Cob_2 ⟶ 𝐂.
Such a functor is more easily described via the notion of a commutative Frobenius object in 𝐂, as we now recall. One finds that Cob_2 is generated by the six morphisms
[Pictures of the six generating cobordisms: the cap η, the reverse pair of pants μ, the cylinder ι, the pair of pants δ, the cup ϵ, and the twist τ,]
read from left to right (e.g. η : ∅ ⟶ S^1), subject to the relations
[Pictures of the defining relations among η, μ, ι, δ, ϵ, τ: the unit and counit laws, commutativity and cocommutativity via the twist τ, associativity and coassociativity, and the Frobenius compatibility between μ and δ,]
and additional relations capturing the fact that the cylinderιis the identity.
A 2-dimensional topological quantum field theory valued in 𝐂 is thereby equivalent to the following data: a choice of object X ∈ 𝐂 and four morphisms
X_η : I → X (unit),
X_μ : X ⊗ X → X (multiplication),
X_δ : X → X ⊗ X (comultiplication),
X_ϵ : X → I (counit),
subject to the relations indicated by (<ref>)–(<ref>), where the cylinder ι is mapped to the identity id_X : X ⟶ X and the twist τ is mapped to the braiding B_X : X ⊗ X ⟶ X ⊗ X <cit.>.
For example, (<ref>) translates to the requirement that X_μ ∘ B_X = X_μ.
A tuple (X, X_η, X_μ, X_δ, X_ϵ) of an object X in 𝐂 and morphisms X_η, X_μ, X_δ, X_ϵ satisfying the analogues of (<ref>)–(<ref>) in 𝐂 is called a commutative Frobenius object in 𝐂.
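For orientation, we record the classical instance of this definition; this worked example is an editorial addition and not part of the original development. In the symmetric monoidal category of finite-dimensional complex vector spaces, a commutative Frobenius object is exactly a commutative Frobenius algebra. A minimal example is the group algebra of a finite abelian group A with identity element e:
X = ℂ[A], X_η(1) = e, X_μ(a ⊗ b) = ab, X_ϵ(a) = δ_{a,e}, X_δ(a) = ∑_{bc = a} b ⊗ c.
Commutativity of A gives X_μ ∘ B_X = X_μ, and the Frobenius compatibility X_δ ∘ X_μ = (X_μ ⊗ id) ∘ (id ⊗ X_δ) is verified directly on basis vectors: both sides send x ⊗ y to ∑_{bc = y} xb ⊗ c.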
§.§ The case of abelian symplectic groupoids
The following notion will be instrumental in constructing commutative Frobenius objects.
A groupoid 𝒢 is abelian if its source and target maps coincide and all isotropy groups are abelian.
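A basic example, stated here editorially for orientation and consistent with the observation below that slice restrictions integrate the zero Poisson structure: for any manifold N, the cotangent bundle T^*N ⇉ N, with source and target both equal to the bundle projection and with multiplication given by fibrewise addition,
(x, ξ) · (x, ξ') = (x, ξ + ξ'),
is an abelian symplectic groupoid with respect to the canonical symplectic form; it is the symplectic groupoid integrating the zero Poisson structure on N.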
Our strategy is to first construct a commutative Frobenius object for each abelian symplectic groupoid, and subsequently use Morita transfer to get a TQFT for every abelianizable quasi-symplectic groupoid.
Let (𝒢, ω) ⇉ N be an abelian symplectic groupoid.
The multiplication morphism 𝒢_μ ∈ Hom(𝒢 × 𝒢, 𝒢) is given by multiplication in 𝒢, i.e. we define 𝒢_μ to be the set 𝒢 * 𝒢 of composable arrows, together with the span
𝒢 × 𝒢 ⟵ 𝒢 * 𝒢 ⟶ 𝒢
given by the inclusion and groupoid multiplication.
Since 𝒢 is abelian, the set 𝒢 * 𝒢 is a Lie groupoid over N.
The multiplicativity of ω implies that 𝒢_μ is a 1-shifted Lagrangian correspondence from 𝒢 × 𝒢 to 𝒢, with respect to the trivial 2-form on N.
Similarly, we define the comultiplication 𝒢_δ ∈ Hom(𝒢, 𝒢 × 𝒢) via the span
𝒢 ⟵ 𝒢 * 𝒢 ⟶ 𝒢 × 𝒢.
The unit 𝒢_η ∈ Hom(⋆, 𝒢) is given by the identity section of 𝒢, i.e. we define 𝒢_η to be the trivial groupoid N ⇉ N, together with the span
⋆ ⟵ N ⟶ 𝒢.
It follows that 𝒢_η is a 1-shifted Lagrangian correspondence with respect to the trivial 2-form on N.
Similarly, the counit 𝒢_ϵ ∈ Hom(𝒢, ⋆) is the trivial groupoid over N, together with the span
𝒢 ⟵ N ⟶ ⋆.
We now verify that 𝒢_η, 𝒢_μ, 𝒢_δ, 𝒢_ϵ satisfy the relations (<ref>)–(<ref>) defining a commutative Frobenius object.
Adopt the notation 𝒢^{*n} ≔ 𝒢 * ⋯ * 𝒢 (n times) for the set of n-composable arrows.
Relations (<ref>) and (<ref>) hold for 𝒢_η, 𝒢_μ, 𝒢_δ, 𝒢_ϵ. More precisely, the first identity in (<ref>) is the statement that the 1-shifted Lagrangian correspondences
𝒢 ⟵ N × 𝒢 ⟶ 𝒢 × 𝒢 and 𝒢 × 𝒢 ⟵ 𝒢 * 𝒢 ⟶ 𝒢
are transverse, and that their composition is weakly equivalent to the identity 𝒢 ⟵ 𝒢 ⟶ 𝒢.
Similar statements hold for the other three identities.
We verify the first identity in (<ref>) only; the others are handled similarly.
Transversality in (<ref>) is the statement that the maps
N^2 × N ⟶ N^2 × N^2 ⟵ 𝒢^2, (x, y, z) ↦ (x, y, z, z) and (a, b) ↦ (𝐬(a), 𝐬(b), 𝐭(a), 𝐭(b)),
are transverse.
We show that the intersection is weakly equivalent to the strong fibre product via Proposition <ref>.
Strong transversality amounts to the maps
N × 𝒢 ⟶ 𝒢^2 ⟵ 𝒢^{*2}, (x, a) ↦ (1_x, a) and (b, c) ↦ (b, c),
being transverse; this is clear.
Moreover, the map (<ref>) is the map
𝒢^{*3} ⟶ 𝒢^{*2}, (a, b, c) ↦ (b^{-1}, c^{-1}a),
which is also clearly a surjective submersion.
The composition in (<ref>) is then weakly equivalent to the strong fibre product (N × 𝒢) ×_{𝒢^2} (𝒢 * 𝒢) ≅ 𝒢.
Relations (<ref>) and (<ref>) hold for 𝒢_η, 𝒢_μ, 𝒢_δ, 𝒢_ϵ.
More precisely, (<ref>) is the statement that the 1-shifted Lagrangian correspondences
𝒢 × 𝒢 ⟵ 𝒢 × 𝒢 ⟶ 𝒢 × 𝒢 (the twist) and 𝒢 × 𝒢 ⟵ 𝒢 * 𝒢 ⟶ 𝒢
are transverse, and their composition is weakly equivalent to (<ref>).
A similar statement holds for (<ref>).
We verify (<ref>) only; the case of (<ref>) is similar.
Transversality in (<ref>) amounts to the fact that the maps in (<ref>) are transverse.
Strong transversality is also clear.
With a view to applying Proposition <ref>, we note that (<ref>) corresponds to the surjective submersion
𝒢^{*4} ⟶ 𝒢^{*2}, (a, b, c, d) ↦ (c^{-1}b, d^{-1}a).
We conclude that the composition in (<ref>) is weakly equivalent to the strong fibre product (𝒢 × 𝒢) ×_{𝒢^2} (𝒢 * 𝒢) = {((a, b), (c, d)) ∈ (𝒢 × 𝒢) × (𝒢 * 𝒢) : (b, a) = (c, d)}.
The latter is isomorphic to 𝒢 * 𝒢, together with the maps 𝒢 × 𝒢 ⟵ 𝒢 * 𝒢 ⟶ 𝒢 given by (a, b) ↤ (a, b) ↦ ba = ab, i.e. to 𝒢_μ.
Relations (<ref>) hold for 𝒢_η, 𝒢_μ, 𝒢_δ, 𝒢_ϵ.
More precisely, the first relation in (<ref>) is the statement that the pairs of 1-shifted Lagrangian correspondences
𝒢 × 𝒢 ⟵ (𝒢 * 𝒢) × 𝒢 ⟶ 𝒢 × 𝒢 × 𝒢, 𝒢 × 𝒢 × 𝒢 ⟵ 𝒢 × (𝒢 * 𝒢) ⟶ 𝒢 × 𝒢
and
𝒢 × 𝒢 ⟵ 𝒢 * 𝒢 ⟶ 𝒢, 𝒢 ⟵ 𝒢 * 𝒢 ⟶ 𝒢 × 𝒢
are both transverse, and that their compositions are weakly equivalent.
A similar statement holds for the second identity.
We show that both of these compositions are weakly equivalent to 𝒢^{2,2} using Proposition <ref>. Let us begin with (<ref>).
Transversality is the statement that the maps
N^2 × N^2 ⟶ N^3 × N^3 ⟵ 𝒢^3, ((x, y), (u, v)) ↦ ((x, x, y), (u, v, v)) and (a, b, c) ↦ ((𝐬(a), 𝐬(b), 𝐬(c)), (𝐭(a), 𝐭(b), 𝐭(c))),
are transverse.
Strong transversality is the statement that the maps
𝒢^{*2} × 𝒢 ⟶ 𝒢^3 ⟵ 𝒢 × 𝒢^{*2}, ((a, b), c) ↦ (a, b, c) and (a, (b, c)) ↦ (a, b, c),
are transverse.
The map (<ref>) in Proposition <ref> corresponds to
𝒢^{*3} * 𝒢^{*3} ⟶ 𝒢^{*3}, ((a_1, b_1, c_1), (a_2, b_2, c_2)) ↦ (a_2^{-1}a_1, b_2^{-1}b_1, c_2^{-1}c_1),
which is a surjective submersion.
It follows that the composition in (<ref>) is weakly equivalent to the strong fibre product ((𝒢 * 𝒢) × 𝒢) ×_{𝒢^3} (𝒢 × (𝒢 * 𝒢)).
The latter can be identified with 𝒢 * 𝒢 * 𝒢, together with the maps
𝒢 × 𝒢 ⟵ 𝒢 * 𝒢 * 𝒢 ⟶ 𝒢 × 𝒢, (ab, c) ↤ (a, b, c) ↦ (a, bc).
We now consider (<ref>).
Transversality amounts to the maps
N × N ⟶ N × N ⟵ 𝒢, the second of which is (𝐬, 𝐭),
being transverse.
Strong transversality amounts to the maps
𝒢^{*2} ⟶ 𝒢 ⟵ 𝒢^{*2}, (a, b) ↦ ab and (c, d) ↦ cd,
being transverse.
The map (<ref>) in Proposition <ref> corresponds to
𝒢^{*4} ⟶ 𝒢, (a, b, c, d) ↦ abc^{-1}d^{-1},
a surjective submersion.
The composition in (<ref>) is therefore weakly equivalent to the strong fibre product (𝒢 * 𝒢) ×_𝒢 (𝒢 * 𝒢) = {(a_1, a_2, b_1, b_2) ∈ 𝒢^{*4} : a_1a_2 = b_1b_2}.
There is an isomorphism from (<ref>) to the latter, given by (a, b, c) ↦ (ab, c, a, bc).
If 𝒢 is an abelian symplectic groupoid, then (𝒢, 𝒢_η, 𝒢_μ, 𝒢_δ, 𝒢_ϵ) is a commutative Frobenius object in the 1-shifted Weinstein symplectic category. It thereby determines a 2-dimensional TQFT on Cob_2.
This is an immediate consequence of Lemmas <ref>, <ref>, and <ref>.
By an induction argument, we see that for all (m, n) ≠ (0, 0), the TQFT in Theorem <ref> maps the genus-0 cobordism from m circles to n circles to 𝒢^{m,n} ≔ {(a, b) ∈ 𝒢^{*m} × 𝒢^{*n} : a_1 ⋯ a_m = b_1 ⋯ b_n}, together with the natural projections to 𝒢^m and 𝒢^n.
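As a consistency check, added editorially: taking (m, n) = (2, 1) in the formula above gives
𝒢^{2,1} = {(a_1, a_2, b) ∈ 𝒢^{*2} × 𝒢 : a_1a_2 = b} ≅ 𝒢 * 𝒢,
with projections (a_1, a_2, b) ↦ (a_1, a_2) and (a_1, a_2, b) ↦ b. This is precisely the multiplication morphism 𝒢_μ constructed above, as expected, since the genus-0 cobordism from two circles to one circle is the reverse pair of pants μ.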
§.§ Abelianizations of quasi-symplectic groupoids
We now consider the following definition.
An abelianization of a quasi-symplectic groupoid 𝒢 is an abelian symplectic groupoid 𝒜, together with a symplectic Morita equivalence from 𝒢 to 𝒜. A quasi-symplectic groupoid is called abelianizable if it admits an abelianization.
Let 𝒢 ⇉ M be a quasi-symplectic groupoid with an abelianization 𝒢 ⟵ ℋ ⟶ 𝒜.
By Morita transfer (see Subsection <ref>), it follows that 𝒜_η, 𝒜_μ, 𝒜_δ, 𝒜_ϵ transfer in a unique way to morphisms of the form 𝒢_η : ⋆ ⟶ 𝒢, 𝒢_μ : 𝒢^2 ⟶ 𝒢, 𝒢_δ : 𝒢 ⟶ 𝒢^2, and 𝒢_ϵ : 𝒢 ⟶ ⋆.
For example, we can realize them as the homotopy fibre products of 𝒜_η, 𝒜_μ, 𝒜_δ, and 𝒜_ϵ with ℋ^-, ℋ × ℋ × ℋ^-, ℋ × ℋ^- × ℋ^-, and ℋ, respectively.
Note that any other choice of abelianization of 𝒢 will give 1-shifted Lagrangians weakly equivalent to 𝒢_η, 𝒢_μ, 𝒢_δ, 𝒢_ϵ.
The corresponding morphisms are therefore independent of the choice of abelianization.
If 𝒢 is an abelianizable quasi-symplectic groupoid, then (𝒢, 𝒢_η, 𝒢_μ, 𝒢_δ, 𝒢_ϵ) is a commutative Frobenius object in the 1-shifted Weinstein symplectic category.
In this way, 𝒢 determines a 2-dimensional TQFT on Cob_2.
§.§ Global slices
We now discuss an application of Theorem <ref>.
A global slice to a Lie groupoid 𝒢 ⇉ M is a submanifold S ⊆ M intersecting every orbit exactly once and transversely. The global slice is admissible if for all x ∈ S, the isotropy group 𝒢_x is abelian.
Let S ⊆ X be an admissible global slice to a symplectic groupoid 𝒢 ⇉ X. The Lie algebra of 𝒢_x is necessarily abelian for all x ∈ S. In this way, asking the groups 𝒢_x to be abelian is a very mild constraint; it holds in many situations.
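A toy example of these notions, added editorially: if (M, σ) is a symplectic manifold, the pair groupoid M × M^- ⇉ M is a symplectic groupoid with a single orbit and trivial isotropy groups, so any singleton S = {x} ⊆ M is an admissible global slice; transversality holds because the anchor (𝐭, 𝐬) : M × M ⟶ M × M is surjective.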
Let (𝒢, ω) ⇉ (M, ϕ) be a quasi-symplectic groupoid. Suppose that S ⊆ M is an admissible global slice for which i^*ϕ = dγ is exact, where i : S ⟶ M is the inclusion. Then the restriction 𝒢|_S ≔ 𝐬^{-1}(S) ∩ 𝐭^{-1}(S) ⇉ S is an abelianization of 𝒢 with respect to j^*ω − 𝐬^*γ + 𝐭^*γ, where j : 𝒢|_S ⟶ 𝒢 is the inclusion map.
Note that 𝒢|_S is the pullback of 𝒢 by i.
It follows that 𝐭 ∘ pr_𝒢 : S ×_{i,𝐬} 𝒢 ⟶ M being a surjective submersion would force 𝒢|_S to be a Lie groupoid. We would therefore like to prove that
T_xS ×_{i_*, 𝐬_*} T_g𝒢 ⟶ T_{𝐭(g)}M, (v, w) ↦ 𝐭_*(w)
is surjective for all (x, g) ∈ S ×_{i,𝐬} 𝒢.
In other words, we would like to show that the nullity of this map is dim(S ×_{i,𝐬} 𝒢) − dim M.
Its kernel is clearly isomorphic to {(v, w) ∈ T_xS × (A_𝒢)_x : v = ρ(w)}.
Since T_xM = T_xS + im ρ_x, it has dimension dim S + dim(A_𝒢)_x − dim M = dim(S ×_{i,𝐬} 𝒢) − dim M.
It follows that 𝒢|_S is a Lie groupoid, and that the inclusion j : 𝒢|_S ⟶ 𝒢 is an essential equivalence.
It follows that 𝒢|_S is a quasi-symplectic groupoid with respect to (j^*ω, i^*ϕ).
Since i^*ϕ = dγ is exact, the gauge transformation <cit.> (𝒢|_S, j^*ω − 𝐬^*γ + 𝐭^*γ, 0) is a quasi-symplectic groupoid with trivial background 3-form, symplectically Morita equivalent to 𝒢.
Moreover, since S is a global slice, we have 𝐬_{𝒢|_S} = 𝐭_{𝒢|_S}.
It follows that ρ_{𝒢|_S} = 0, so that 𝒢|_S integrates the zero Poisson structure. This implies that 𝒢|_S is indeed a symplectic groupoid.
It follows that (𝒢|_S, j^*ω − 𝐬^*γ + 𝐭^*γ) is an abelianization of (𝒢, ω, ϕ).
It follows from Theorem <ref> that 𝒢 together with the slice S determine a TQFT η_𝒢 from Cob_2 to the 1-shifted Weinstein symplectic category.
Recall that the original Moore–Tachikawa TQFT associated to a complex semisimple group G is characterised by the fact that the cup cobordism ϵ : S^1 ⟶ ∅ is sent to the Hamiltonian G-space G × 𝒮, where 𝒮 is a Kostant slice.
This Hamiltonian space can also be described as 𝐬^{-1}(𝒮), where 𝐬 is the source map of the symplectic groupoid T^*G ⇉ 𝔤^*.
Using the correspondence of Example <ref> between Hamiltonian spaces and 1-shifted Lagrangians, the TQFTs (<ref>) generalize the Moore–Tachikawa example in the following way.
The morphism η_𝒢(ϵ) = 𝒢_ϵ assigned to the cup cobordism is the 1-shifted Lagrangian on 𝒢 associated with the Hamiltonian 𝒢-space 𝐬^{-1}(S), where 𝒢 acts by left multiplication.
Let 𝒜 ≔ 𝒢|_S be the abelianization of 𝒢.
Recall that 𝒜_ϵ is the trivial groupoid S ⇉ S, viewed as a 1-shifted Lagrangian in 𝒜.
A straightforward computation shows that S ⇉ S is also a 1-shifted Lagrangian in 𝒢.
Since the inclusion 𝒜 ⟶ 𝒢 is an essential equivalence, it follows that 𝒢_ϵ is the 1-shifted Lagrangian S.
Recall from Example <ref> that the identity morphism 𝒢 ⟵ 𝒢 ⟶ 𝒢 is weakly equivalent to the action groupoid 𝒢 ⟵ (𝒢 × 𝒢) ⋉ 𝒢 ⟶ 𝒢, where 𝒢 × 𝒢 acts on 𝒢 by left and right multiplication.
Hence, by Lemma <ref>, the 1-shifted Lagrangian S is weakly equivalent to its composition with 𝒢 ⟵ (𝒢 × 𝒢) ⋉ 𝒢 ⟶ 𝒢 on the left.
This composition is easily seen to be the 1-shifted Lagrangian 𝒢 ⋉ 𝐬^{-1}(S).
§ THE AFFINIZATION PROCESS
In contrast to the preceding sections, we now work exclusively over ℂ. We first review some pertinent material from our manuscript on scheme-theoretic coisotropic reduction. This leads to a definition and study of Hartogs Morita morphisms. We next discuss a process by which to affinize 0-shifted symplectic stacks and 1-shifted Lagrangians. We then define what it means for an affine symplectic groupoid to be Hartogs abelianizable. By Main Theorem <ref>, such a groupoid determines a TQFT valued in the algebraic Moore–Tachikawa category.
On the other hand, we introduce the algebraic Moore–Tachikawa category itself, as well as an affinization process taking algebraic 1-shifted Lagrangian correspondences to morphisms in this category.
The proof of Main Theorem <ref> is subsequently given. To conclude, we prove that an affine symplectic groupoid admitting an admissible Hartogs slice is necessarily Hartogs abelianizable.
§.§ Preliminaries
We briefly review some conventions from <cit.>. The term algebraic groupoid is used for a groupoid object in the category of complex algebraic varieties.
An algebraic Lie groupoid is an algebraic groupoid with smooth arrow and object varieties, as well as smooth source and target morphisms.
By an affine algebraic Lie groupoid, we mean an algebraic Lie groupoid whose object and arrow varieties are affine. Such a groupoid 𝒢 ⇉ X is called an affine symplectic groupoid if 𝒢 carries an algebraic symplectic form for which the graph of multiplication is coisotropic in 𝒢 × 𝒢 × 𝒢^-.
The affine quotient associated to an algebraic groupoid 𝒢 ⇉ X is the affine scheme
X ⫽ 𝒢 ≔ Spec ℂ[X]^𝒢,
where ℂ[X] is the algebra of regular functions f : X ⟶ ℂ, and ℂ[X]^𝒢 is the subalgebra of those satisfying 𝐬^*f = 𝐭^*f. The pullback of 𝒢 ⇉ X by a morphism μ : Y ⟶ X is the fibre product
μ^*𝒢 ≔ Y ×_{μ,𝐬} 𝒢 ×_{𝐭,μ} Y.
It is an algebraic groupoid over Y.
If 𝒢 ⇉ X is an algebraic Lie groupoid and μ : Y ⟶ X is a smooth morphism, then μ^*𝒢 is also an algebraic Lie groupoid.
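A familiar special case, included as an editorial illustration: for the action groupoid G ⋉ X ⇉ X of an affine algebraic group G acting on an affine variety X, a function satisfies 𝐬^*f = 𝐭^*f precisely when it is G-invariant, so
X ⫽ (G ⋉ X) = Spec ℂ[X]^G
recovers the classical affine quotient — for instance the adjoint quotient Spec ℂ[𝔤^*]^G appearing in the Slodowy-slice discussion below.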
§.§ Morita and Hartogs Morita morphisms
A Morita morphism between algebraic Lie groupoids 𝒢 ⇉ X and ℋ ⇉ Y is a morphism of algebraic groupoids (f, μ) : (ℋ ⇉ Y) ⟶ (𝒢 ⇉ X) for which μ : Y ⟶ X is a surjective smooth morphism, and the induced morphism
ℋ ⟶ μ^*𝒢, h ↦ (𝐬(h), f(h), 𝐭(h)),
is an isomorphism. It is also advantageous to introduce the weaker notion of a Hartogs Morita morphism between affine algebraic Lie groupoids 𝒢 ⇉ X and ℋ ⇉ Y; we define it to be a morphism of algebraic groupoids (f, μ) : (ℋ ⇉ Y) ⟶ (𝒢 ⇉ X) such that μ : Y ⟶ X is a smooth morphism, the open subset U ≔ μ(Y) ⊆ X has a complement of codimension at least two in X, and the induced morphism
ℋ ⟶ μ^*𝒢
is an isomorphism.
The key property of Hartogs Morita morphisms is that they preserve affine quotients, as the next result shows.
If (ℋ ⇉ Y) ⟶ (𝒢 ⇉ X) is a Hartogs Morita morphism, then the induced morphism between the affine quotients X ⫽ 𝒢 and Y ⫽ ℋ is an isomorphism of affine schemes.
It suffices to consider the case in which ℋ = μ^*𝒢, for 𝒢 ⇉ X an affine algebraic Lie groupoid and μ : Y ⟶ X a smooth morphism whose image U ≔ μ(Y) has a complement of codimension at least two in X. Composing with μ then defines an injective algebra morphism ℂ[X]^𝒢 ⟶ ℂ[Y]^{μ^*𝒢}. We are reduced to showing that this morphism is surjective.
Suppose that f ∈ ℂ[Y]^{μ^*𝒢}.
Since μ is smooth, there is an étale covering {π_i : U_i ⟶ U}_{i∈I} with local sections {σ_i : U_i ⟶ Y}_{i∈I} of μ.
We will construct a function F ∈ ℂ[X]^𝒢 mapping to f by considering the functions F_i ≔ σ_i^*f : U_i ⟶ ℂ, and subsequently using étale descent.
To this end, fix i, j ∈ I, let U_{ij} ≔ U_i ×_X U_j, and write p_1 : U_{ij} ⟶ U_i and p_2 : U_{ij} ⟶ U_j for the projections. We have (p_1^*F_i)(x, y) = f(σ_i(x)) = f(σ_j(y)) = (p_2^*F_j)(x, y)
for all (x, y) ∈ U_{ij}; the second equality follows from μ^*𝒢-invariance of f, as ((σ_i(x), σ_j(y)), 1_{π_i(x)}) ∈ μ^*𝒢. We conclude that p_1^*F_i = p_2^*F_j.
Since regular functions form a sheaf in the étale topology, there is a function F : U ⟶ ℂ satisfying π_i^*F = F_i for all i ∈ I.
Our assumption on codimension then implies that F extends uniquely to an element F ∈ ℂ[X].
To see that μ^*F = f, suppose that y ∈ Y.
Since {π_i : U_i ⟶ U}_{i∈I} is a cover, there exists a point x in some U_i with π_i(x) = μ(y).
Note that (μ^*F)(y) = F(μ(y)) = F(π_i(x)) = F_i(x) = f(σ_i(x)) = f(y), where the last equality uses the μ^*𝒢-invariance of f and the fact that μ(σ_i(x)) = π_i(x) = μ(y). This establishes that μ^*F = f.
It remains only to establish that F ∈ ℂ[X]^𝒢. Given g ∈ 𝒢|_U ≔ 𝐬^{-1}(U) ∩ 𝐭^{-1}(U), we have 𝐬(g) = π_i(x) and 𝐭(g) = π_j(y) for some x ∈ U_i and y ∈ U_j. This implies that (σ_i(x), σ_j(y), g) ∈ μ^*𝒢. By the μ^*𝒢-invariance of f, we have F(𝐬(g)) = F(π_i(x)) = f(σ_i(x)) = f(σ_j(y)) = F(π_j(y)) = F(𝐭(g)). Since 𝒢|_U is open in 𝒢, it follows that F ∈ ℂ[X]^𝒢, completing the proof.
By replacing Morita morphisms with Hartogs Morita morphisms, we also obtain notions of Hartogs Morita equivalence, Hartogs symplectic Morita equivalence, and Hartogs Lagrangian Morita equivalence.
§.§ Affine Hamiltonian schemes
An affine Poisson scheme is the data of an affine scheme X and a Poisson bracket on ℂ[X]. Recall that a closed subscheme of X is called coisotropic if its ideal is a Poisson subalgebra of ℂ[X].
Let 𝒢 ⇉ X be an affine symplectic groupoid acting algebraically on an affine Poisson scheme M, via a map μ : M ⟶ X.
We say that the action is Hamiltonian if the graph of the action morphism 𝒢 ×_X M ⟶ M is coisotropic in 𝒢 × M × M^-.
We say that an affine Poisson scheme M is an affine Hamiltonian 𝒢-scheme if it comes equipped with a Hamiltonian action of 𝒢.
An isomorphism of affine Hamiltonian 𝒢-schemes is an equivariant isomorphism of affine Poisson schemes.
§.§ Affinization of 0-shifted symplectic stacks
Let 𝒢 ⇉ X be an affine algebraic Lie groupoid.
A 0-shifted symplectic structure on 𝒢 ⇉ X is a closed 2-form ω of constant rank on X satisfying 𝐬^*ω = 𝐭^*ω and ker ω = im ρ, where ρ is the anchor map.
We show that in this case, the affine quotient X ⫽ 𝒢 is Poisson.
First note that we have a short exact sequence of vector bundles
0 ⟶ ker ω ⟶ TX ⟶ im ω ⟶ 0.
We define the Poisson structure explicitly as follows.
Note that for all f ∈ ℂ[X]^𝒢, df ∈ (im ρ)^∘ = im ω.
Since every short exact sequence of vector bundles on an affine variety splits, we can choose a global vector field X_f on X such that i_{X_f}ω = df, unique up to ker ω.
The Poisson bracket on ℂ[X]^𝒢 is then given by
{f, g} ≔ ω(X_f, X_g).
Equation (<ref>) defines an affine Poisson scheme structure on X ⫽ 𝒢.
Let R be a vector subbundle of TX such that TX = im ρ ⊕ R.
We then have an isomorphism ω : R ⟶ im ω = (im ρ)^∘.
It follows that every 𝒢-invariant morphism f : X ⟶ ℂ determines a unique section X_f of R that satisfies df = ω(X_f).
The Poisson bracket {f, g} = ω(X_f, X_g) is independent of the choice of R, as any other choice gives vector fields differing from X_f and X_g by elements of ker ω.
We need to show that {f, g} is 𝒢-invariant. To this end, fix a ∈ 𝒢 and let X̂_f, X̂_g ∈ T_a𝒢 be mapped by 𝐬_* to X_f and X_g, respectively.
Then 𝐭^*(ω(X_f))_a = (𝐭^*df)_a = (𝐬^*df)_a = (𝐬^*(ω(X_f)))_a = (𝐬^*ω)_a(X̂_f) = (𝐭^*ω)_a(X̂_f) = 𝐭^*(ω(𝐭_*X̂_f))_a.
It follows that X_f − 𝐭_*X̂_f ∈ ker ω.
We therefore have 𝐭^*{f, g}(a) = ω_{𝐭(a)}(X_f, X_g) = ω_{𝐭(a)}(𝐭_*X̂_f, 𝐭_*X̂_g) = (𝐭^*ω)_a(X̂_f, X̂_g) = (𝐬^*ω)_a(X̂_f, X̂_g) = ω_{𝐬(a)}(X_f, X_g) = 𝐬^*{f, g}(a).
The Jacobi identity follows from the closedness of ω.
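An elementary sanity check, not part of the original argument: when 𝒢 is the trivial groupoid X ⇉ X, the anchor ρ vanishes, so a 0-shifted symplectic structure is simply a symplectic form on X; every f ∈ ℂ[X] is invariant, X_f is the usual Hamiltonian vector field, and (<ref>) reduces to the standard symplectic Poisson bracket
{f, g} = ω(X_f, X_g)
on ℂ[X].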
A symplectic Hartogs Morita morphism between 0-shifted symplectic affine algebraic Lie groupoids (𝒢 ⇉ X, ω) and (𝒢̃ ⇉ X̃, ω̃) is a Hartogs Morita morphism (f, μ) : (𝒢̃ ⇉ X̃) ⟶ (𝒢 ⇉ X) of the underlying affine algebraic Lie groupoids, such that μ^*ω = ω̃.
We now observe that the affine Poisson scheme of a 0-shifted symplectic groupoid is invariant under symplectic Hartogs Morita morphisms.
Let 𝒢 ⇉ X and 𝒢̃ ⇉ X̃ be 0-shifted affine symplectic groupoids. If there exists a symplectic Hartogs Morita morphism 𝒢̃ ⟶ 𝒢, then the Poisson schemes X ⫽ 𝒢 and X̃ ⫽ 𝒢̃ are isomorphic.
Let μ : X̃ ⟶ X be the map on objects coming from a Hartogs Morita morphism 𝒢̃ ⟶ 𝒢.
By Proposition <ref>, the map μ^* : ℂ[X]^𝒢 ⟶ ℂ[X̃]^𝒢̃ is an algebra isomorphism. We also have μ^*(i_{μ_*(X_{μ^*f})}ω) = i_{X_{μ^*f}}μ^*ω = i_{X_{μ^*f}}ω̃ = d(μ^*f) = μ^*(df)
for all f ∈ ℂ[X].
Since μ is a submersion, we have i_{μ_*(X_{μ^*f})}ω = df for all f ∈ ℂ[X]. It follows that μ_*(X_{μ^*f}) = X_f for all f ∈ ℂ[X], implying that μ^* is also a Poisson algebra morphism.
Let (𝒢 ⇉ X, ω) be a 0-shifted symplectic affine algebraic Lie groupoid and Y ⊆ X a smooth closed subvariety such that TY^ω = TY + im ρ.
Let I ⊆ ℂ[X] be the ideal corresponding to Y, and set J ≔ I ∩ ℂ[X]^𝒢.
Then J is a Poisson ideal and hence corresponds to a coisotropic subvariety of X ⫽ 𝒢.
The condition TY^ω = TY + im ρ holds, in particular, if Y is isotropic, dim Y = ½ dim X, and rank(ker ω ∩ TY) = ½ rank(ker ω).
Let f, h ∈ J.
We have X_f|_Y ∈ TY^ω = TY + im ρ and dh|_Y ∈ TY^∘ ∩ (im ρ)^∘, so that {f, h}|_Y = ω(X_f, X_h)|_Y = −dh(X_f)|_Y = 0.
For the last statement, first note that TY + im ρ = TY + ker ω ⊆ TY^ω by isotropy of Y.
We show that this inclusion is an equality by a dimension count. Indeed, we have dim(TY^ω) = dim X − dim Y + dim(ker ω ∩ TY) = dim Y + rank(ker ω) − dim(ker ω ∩ TY) = dim(TY + ker ω).
§.§ Affinization of 1-shifted Lagrangians
The following lemma is useful.
Let 𝒢 ⇉ X be an affine symplectic groupoid and ℋ ⇉ Y an affine algebraic Lie groupoid.
Suppose that 𝒢 × ℋ acts on a smooth affine variety M with moment map (μ, ν) : M ⟶ X × Y.
Suppose also that the projection (𝒢 × ℋ) ⋉ M ⟶ 𝒢 has a 1-shifted Lagrangian structure ω ∈ Ω^2_M for which ker ω ⊆ ker μ_* and the infinitesimal ℋ-action on M has constant rank.
Then ω is a 0-shifted symplectic form on ℋ ⋉ M ⇉ M, and the induced Poisson scheme M ⫽ ℋ is a Hamiltonian 𝒢-scheme.
Let ω be the 1-shifted Lagrangian structure on (𝒢 × ℋ) ⋉ M ⟶ 𝒢. Note that ω is a closed 2-form on M with 𝐬^*ω − 𝐭^*ω = pr_𝒢^*Ω, where Ω is the symplectic form on 𝒢 and (𝐬, 𝐭) : (𝒢 × ℋ) ⋉ M ⇉ M are the source and target maps. We also know that the map
μ^*A_𝒢 ⊕ ν^*A_ℋ ⟶ {(v, a) ∈ TM × A_𝒢 : μ_*v = ρ(a) and i_vω = μ^*σ_Ω(a)},
(a, b) ↦ (ψ_*(a, b), a)
is an isomorphism, where ψ_* is the action map.
Let (𝐬, 𝐭) : ℋ ⋉ M ⇉ M be the source and target maps of the ℋ-action.
Then 𝐬 = 𝐬 ∘ i and 𝐭 = 𝐭 ∘ i, where i : ℋ ⋉ M ⟶ (𝒢 × ℋ) ⋉ M is given by i(h, p) = (1_{μ(p)}, h, p).
In particular, 𝐬^*ω − 𝐭^*ω = i^*(𝐬^*ω − 𝐭^*ω) = i^*pr_𝒢^*Ω = 0, since pr_𝒢 ∘ i : ℋ ⋉ M ⟶ 𝒢 has its image in the identity section.
The non-degeneracy condition then shows that
ker ω ∩ ker μ_* = im ρ_ℋ
for the anchor map ρ_ℋ of ℋ ⋉ M.
Since ker ω ⊆ ker μ_* by assumption,
ker ω = im ρ_ℋ.
It follows that ω is a 0-shifted symplectic form, and hence descends to a Poisson bracket on M ⫽ ℋ.
It remains to check that the residual action of 𝒢 on M ⫽ ℋ is Hamiltonian.
By Lemma <ref>, it suffices to show that the graph Γ ⊆ 𝒢 × M × M^- of the action satisfies rank(ker η ∩ TΓ) = ½ rank(ker η), where η ≔ (Ω, ω, −ω).
But ker η = {(0, u, v) : u, v ∈ ker ω} and TΓ ∩ ker η = {(0, v, v) : v ∈ ker ω}.
The following generalization is natural and important.
Let 𝒢_1 ⇉ M_1 and 𝒢_2 ⇉ M_2 be affine symplectic groupoids, and let Ł ⇉ N be an affine 1-shifted Lagrangian correspondence from 𝒢_1 to 𝒢_2, with groupoid morphisms f_1 : Ł ⟶ 𝒢_1 and f_2 : Ł ⟶ 𝒢_2 covering μ_1 : N ⟶ M_1 and μ_2 : N ⟶ M_2.
Then the affine quotient
𝔸(Ł) ≔ (𝒢_1 ×_{μ_1} N ×_{μ_2} 𝒢_2) ⫽ Ł
is a Hamiltonian 𝒢_1 × 𝒢_2^--scheme, where Ł acts via l · (g_1, n, g_2) = (g_1f_1(l)^{-1}, 𝐭(l), f_2(l)g_2) whenever 𝐬(l) = n.
If 𝒢_1 ⟵ Ł' ⟶ 𝒢_2 is another affine 1-shifted Lagrangian correspondence that is Hartogs weakly equivalent to (<ref>), then 𝔸(Ł) and 𝔸(Ł') are isomorphic Hamiltonian 𝒢_1 × 𝒢_2^--schemes.
Consider the 1-shifted Lagrangian correspondences
𝒢_1 ⟵ (𝒢_1 × 𝒢_1) ⋉ 𝒢_1 ⟶ 𝒢_1, 𝒢_1 ⟵ Ł ⟶ 𝒢_2, and 𝒢_2 ⟵ (𝒢_2 × 𝒢_2) ⋉ 𝒢_2 ⟶ 𝒢_2;
see Example <ref>.
Their composition is the action groupoid
𝒢_1 ⟵ (𝒢_1 × Ł × 𝒢_2) ⋉ (𝒢_1 ×_{μ_1} N ×_{μ_2} 𝒢_2) ⟶ 𝒢_2,
together with the 1-shifted Lagrangian structure given by the 2-form
ω ≔ pr_{𝒢_1}^*ω_1 + 2 pr_N^*γ + pr_{𝒢_2}^*ω_2
on 𝒢_1 ×_{μ_1} N ×_{μ_2} 𝒢_2, where γ is the 1-shifted Lagrangian structure on Ł.
To show that the affine quotient (<ref>) is a Hamiltonian 𝒢_1 × 𝒢_2^--scheme, we apply Lemma <ref>. A first step is to check that ker ω ⊆ ker μ_*, where
μ : 𝒢_1 ×_{μ_1} N ×_{μ_2} 𝒢_2 ⟶ M_1 × M_2, μ(g_1, n, g_2) = (𝐬(g_1), 𝐬(g_2)),
is the moment map for the action of 𝒢_1 × 𝒢_2 on 𝒢_1 ×_{μ_1} N ×_{μ_2} 𝒢_2.
Let (v_1, u, v_2) ∈ T_{(g_1, n, g_2)}(𝒢_1 ×_{μ_1} N ×_{μ_2} 𝒢_2) be such that (v_1, u, v_2) ∈ ker ω.
It follows that v_1 ∈ (ker 𝐭_*)^{ω_1} = ker 𝐬_* and v_2 ∈ (ker 𝐭_*)^{ω_2} = ker 𝐬_*, so that μ_*(v_1, u, v_2) = 0.
A second step is to check that the infinitesimal Ł-action on 𝒢_1 ×_{μ_1} N ×_{μ_2} 𝒢_2 has constant rank.
The kernel of this infinitesimal action is given by ker f_{1*} ∩ ker ρ_Ł ∩ ker f_{2*}; it is trivial by the definition of 1-shifted Lagrangians, i.e. the injectivity part of Definition <ref><ref>.
By Lemma <ref>, (<ref>) is a Hamiltonian 𝒢_1 × 𝒢_2^--scheme.
Consider another affine 1-shifted Lagrangian correspondence 𝒢_1 ⟵ Ł' ⟶ 𝒢_2, with morphisms f_1' and f_2' covering μ_1' : N' ⟶ M_1 and μ_2' : N' ⟶ M_2, that is Hartogs Morita equivalent to (<ref>).
To show that our two correspondences yield isomorphic Hamiltonian schemes, it suffices to consider the following case: there is a Hartogs Morita morphism ψ : Ł' ⟶ Ł fitting into a 2-commutative diagram over 𝒢_1 and 𝒢_2, with respect to some natural transformations θ_1 : f_1ψ ⟹ f_1' and θ_2 : f_2ψ ⟹ f_2'.
In this case, ψ induces a Hartogs Morita morphism
Ł' ⋉ (𝒢_1 ×_{μ_1'} N' ×_{μ_2'} 𝒢_2) ⟶ Ł ⋉ (𝒢_1 ×_{μ_1} N ×_{μ_2} 𝒢_2),
(l, (g_1, n, g_2)) ↦ (ψ(l), (g_1θ_1(n), ψ(n), θ_2(n)^{-1}g_2)).
Proposition <ref> implies that the induced Poisson schemes are isomorphic.
Since the isomorphism is 𝒢_1 × 𝒢_2^--equivariant, this provides an isomorphism of Hamiltonian 𝒢_1 × 𝒢_2^--schemes.
§.§ The algebraic Moore–Tachikawa category
We now define a category in which the aforementioned affinizations of TQFTs take values; we call it the algebraic Moore–Tachikawa category. It turns out to enlarge Moore and Tachikawa's category of holomorphic symplectic varieties; see Section <ref>. These considerations explain our choice of notation.
We begin with precise definitions. The objects are affine symplectic groupoids. To define morphisms, suppose that 𝒢 and ℐ are affine symplectic groupoids. Suppose also that M and N are affine Poisson schemes, equipped with commuting Hamiltonian actions of 𝒢 and ℐ^-. Declare M and N to be isomorphic if there exists an affine Poisson scheme isomorphism M ⟶ N that intertwines the actions of 𝒢 and ℐ^-. Morphisms from 𝒢 to ℐ are then defined to be isomorphism classes of affine Poisson schemes carrying commuting Hamiltonian actions of 𝒢 and ℐ^-. To define morphism composition, consider affine symplectic groupoids 𝒢 ⇉ X, ℐ ⇉ Y, and 𝒦 ⇉ Z. Let us also consider [M] ∈ Hom(𝒢, ℐ) and [N] ∈ Hom(ℐ, 𝒦).
It follows that ℐ ⇉ Y acts diagonally on the fibre product M ×_Y N.
By <cit.>, the affine quotient (M ×_Y N) ⫽ ℐ is a Poisson scheme with commuting Hamiltonian actions of 𝒢 and 𝒦^-.
The Poisson structure is obtained by reduction, i.e. it is characterized by the relation {j^*f_1, j^*f_2} = j^*{f_1, f_2} for all f_1, f_2 ∈ ℂ[M × N] such that j^*f_1, j^*f_2 ∈ ℂ[M ×_Y N]^ℐ, where j : M ×_Y N ⟶ M × N is the inclusion.
The previous paragraph and a routine exercise reveal that this is a category. It also carries a symmetric monoidal structure. One defines the tensor product of affine symplectic groupoids 𝒢 and ℐ to be the affine symplectic groupoid 𝒢 ⊗ ℐ ≔ 𝒢 × ℐ. Given affine symplectic groupoids 𝒢, ℐ, 𝒦, and ℒ, and morphisms [M] ∈ Hom(𝒢, ℐ) and [N] ∈ Hom(𝒦, ℒ), note that M × N is an affine Hamiltonian (𝒢 × 𝒦) × (ℐ^- × ℒ^-)-scheme. One thereby has [M] ⊗ [N] ≔ [M × N] ∈ Hom(𝒢 × 𝒦, ℐ × ℒ). The unit object is the singleton groupoid {e} ⟶ {*}. The associator, left unitor, right unitor, and braiding are then exactly as one would expect. A routine exercise reveals that this is indeed a symmetric monoidal category.
§.§ A “functor"
Consider the sub-“category” consisting of affine symplectic groupoids and affine algebraic 1-shifted Lagrangian correspondences.
We construct a “functor” 𝔸 from it to the algebraic Moore–Tachikawa category
as follows.
Consider a morphism
𝒢_1 ⟵ Ł ⟶ 𝒢_2
in this sub-“category”.
Theorem <ref> implies that 𝔸(Ł) is a morphism from 𝒢_1 to 𝒢_2 in the algebraic Moore–Tachikawa category.
The next proposition shows that Ł ↦ 𝔸(Ł) behaves like a functor.
The association Ł ↦ 𝔸(Ł) has the following properties.
*
It is well-defined: if Ł_1 and Ł_2 are weakly equivalent 1-shifted Lagrangian correspondences from 𝒢_1 to 𝒢_2, then 𝔸(Ł_1) and 𝔸(Ł_2) are isomorphic Hamiltonian 𝒢_1 × 𝒢_2^--schemes.
*
It preserves composition: if 𝒢_1 ⟵ Ł_1 ⟶ 𝒢_2 and 𝒢_2 ⟵ Ł_2 ⟶ 𝒢_3 are composable 1-shifted Lagrangian correspondences, then 𝔸(Ł_2 ∘ Ł_1) and 𝔸(Ł_2) ∘ 𝔸(Ł_1) are isomorphic Hamiltonian 𝒢_1 × 𝒢_3^--schemes.
*
It preserves identities: for every affine symplectic groupoid 𝒢, the affinization of the identity 𝒢 ⟵ 𝒢 ⟶ 𝒢 is isomorphic to the identity of 𝒢 in the algebraic Moore–Tachikawa category.
Part <ref> follows from the last part of Theorem <ref>.
For Part <ref>, let M_i and N_i be the objects of 𝒢_i and Ł_i, respectively.
Let μ_{ij} : N_i ⟶ M_j be the base maps, and let μ_1 = (μ_{11}, μ_{12}) and μ_2 = (μ_{22}, μ_{23}).
We have a diagram of Lie groupoid morphisms in which
Ł_2 ∘ Ł_1 = Ł_1 ×_{μ_12} 𝒢_2 ×_{μ_22} Ł_2
projects to Ł_1 and Ł_2, which in turn map to 𝒢_1, 𝒢_2, and 𝒢_3; on objects, N_1 ×_{μ_12} 𝒢_2 ×_{μ_22} N_2 projects to N_1 and N_2, which map to M_1, M_2, and M_3 via μ_{11}, μ_{12}, μ_{22}, and μ_{23}.
Consider the map
𝔸(Ł_2) ∘ 𝔸(Ł_1) = (((𝒢_1 ×_{μ_11} N_1 ×_{μ_12} 𝒢_2) ⫽ Ł_1) ×_{𝐬∘pr_{𝒢_2}} ((𝒢_2 ×_{μ_22} N_2 ×_{μ_23} 𝒢_3) ⫽ Ł_2)) ⫽ 𝒢_2
⟶ 𝔸(Ł_2 ∘ Ł_1) = (𝒢_1 ×_{μ_11∘pr_{N_1}} (N_1 ×_{μ_12} 𝒢_2 ×_{μ_22} N_2) ×_{μ_23∘pr_{N_2}} 𝒢_3) ⫽ (Ł_1 ×_{μ_12} 𝒢_2 ×_{μ_22} Ł_2),
which descends from the map
(g_1, n_1, g_2, g_2', n_2, g_3) ↦ (g_1, n_1, (g_2g_2')^{-1}, n_2, g_3).
Note that this map descends from the morphism of action groupoids
(Ł_1 × 𝒢_2 × Ł_2) ⋉ ((𝒢_1 ×_{μ_11} N_1 ×_{μ_12} 𝒢_2) ×_{𝐬∘pr_{𝒢_2}} (𝒢_2 ×_{μ_22} N_2 ×_{μ_23} 𝒢_3))
⟶ (Ł_1 ×_{μ_12} 𝒢_2 ×_{μ_22} Ł_2) ⋉ (𝒢_1 ×_{μ_11∘pr_{N_1}} (N_1 ×_{μ_12} 𝒢_2 ×_{μ_22} N_2) ×_{μ_23∘pr_{N_2}} 𝒢_3)
given by
((l_1, a, l_2), (g_1, n_1, g_2, g_2', n_2, g_3)) ↦ ((l_1, (g_2g_2')^{-1}, l_2), (g_1, n_1, (g_2g_2')^{-1}, n_2, g_3)),
where 𝐬(l_1) = n_1, 𝐬(l_2) = n_2, and 𝐬(a) = 𝐬(g_2) = 𝐬(g_2').
One sees that this morphism of action groupoids is a Morita morphism, i.e. a pullback by a surjective smooth morphism. Moreover, the multiplicativity of the symplectic form on 𝒢_2 implies that the base map preserves the 0-shifted symplectic 2-forms on both sides.
Proposition <ref> then implies that this morphism descends to an isomorphism of Poisson schemes 𝔸(Ł_2) ∘ 𝔸(Ł_1) ⟶ 𝔸(Ł_2 ∘ Ł_1).
It is also clearly 𝒢_1 × 𝒢_3-equivariant, and hence an isomorphism of Hamiltonian 𝒢_1 × 𝒢_3^--schemes.
We now show Part <ref>. Note that the affinization of the identity of 𝒢 is the affine quotient (𝒢 * 𝒢) ⫽ 𝒢, where 𝒢 acts by g · (a, b) = (ag^{-1}, gb); its Poisson structure is induced by the 0-shifted symplectic form on 𝒢 ⋉ (𝒢 * 𝒢), i.e. the 2-form pr_1^*ω + pr_2^*ω on 𝒢 * 𝒢.
At the same time, we have a Morita morphism from 𝒢 ⋉ (𝒢 * 𝒢) to the trivial groupoid 𝒢 ⇉ 𝒢; it is given by (g, (a, b)) ↦ ab on arrows and by multiplication 𝒢 * 𝒢 ⟶ 𝒢 on objects.
By the multiplicativity of ω, this is a Morita morphism of 0-shifted symplectic affine algebraic Lie groupoids.
Proposition <ref> then shows that (𝒢 * 𝒢) ⫽ 𝒢 is isomorphic to 𝒢 as a Hamiltonian 𝒢 × 𝒢^--scheme.
§.§ Hartogs abelianizations of affine symplectic groupoids
The following definition is useful.
A Hartogs abelianization of an affine symplectic groupoid 𝒢 ⇉ X is an abelian affine symplectic groupoid (𝒜, ω_𝒜) ⇉ Y, together with a Hartogs symplectic Morita equivalence from 𝒢 to 𝒜. We call 𝒢 Hartogs abelianizable if it admits a Hartogs abelianization.
The following is a more explicit statement of the definition. Let (𝒢, ω_𝒢) ⇉ X be an affine symplectic groupoid. A Hartogs abelianization of 𝒢 consists of an abelian affine symplectic groupoid (𝒜, ω_𝒜) ⇉ Y, an affine Lie groupoid ℋ endowed with an algebraic closed 2-form γ on its base, and Hartogs Morita morphisms
𝒢 ⟵ ℋ ⟶ 𝒜 (denoted φ and ψ, respectively)
satisfying φ^*ω_𝒢 − ψ^*ω_𝒜 = 𝐬^*γ − 𝐭^*γ.
As in Subsection <ref>, 𝒜_η, 𝒜_μ, 𝒜_δ, and 𝒜_ϵ transfer to affine 1-shifted Lagrangian correspondences
⋆ ⟵ 𝒢_η ⟶ 𝒢, 𝒢^2 ⟵ 𝒢_μ ⟶ 𝒢, 𝒢 ⟵ 𝒢_δ ⟶ 𝒢^2, and 𝒢 ⟵ 𝒢_ϵ ⟶ ⋆,
by taking the homotopy fibre products of 𝒜_η, 𝒜_μ, 𝒜_δ, and 𝒜_ϵ with ℋ^-, ℋ × ℋ × ℋ^-, ℋ × ℋ^- × ℋ^-, and ℋ, respectively.
Consider the corresponding morphisms in the algebraic Moore–Tachikawa category obtained via affinization, again denoted 𝒢_η, 𝒢_μ, 𝒢_δ, and 𝒢_ϵ.
If an affine symplectic groupoid 𝒢 is Hartogs abelianizable, then (𝒢, 𝒢_η, 𝒢_μ, 𝒢_δ, 𝒢_ϵ) is a commutative Frobenius object in the algebraic Moore–Tachikawa category. In this way, 𝒢 determines a TQFT on Cob_2 valued in this category.
Let U ⊆ X be the image of the base map of the Hartogs Morita morphism from ℋ to 𝒢. By assumption, U is an open subset with a complement of codimension at least two in X. We also know that the correspondences (<ref>) restrict to 1-shifted Lagrangian correspondences on powers of 𝒢|_U ≔ 𝐬^{-1}(U) ∩ 𝐭^{-1}(U).
Proposition <ref> and Lemmas <ref>, <ref>, and <ref> now tell us that 𝒢|_U is a commutative Frobenius object with respect to these restricted morphisms.
The identity of 𝒢|_U can be identified with ℋ, and the braiding B_{𝒢|_U} with ℋ × ℋ.
Let 𝒢_ι ≔ ℋ, viewed as a 1-shifted Lagrangian correspondence from 𝒢 to 𝒢.
We similarly let 𝒢_τ ≔ ℋ × ℋ, viewed as a 1-shifted Lagrangian correspondence from 𝒢 × 𝒢 to 𝒢 × 𝒢 via (ψ(a), ψ(b)) ↤ (a, b) ↦ (ψ(b), ψ(a)).
It follows that 𝒢_η, 𝒢_μ, 𝒢_ι, 𝒢_δ, 𝒢_ϵ, 𝒢_τ satisfy the identities indicated by (<ref>)–(<ref>), viewed as 1-shifted Lagrangian correspondences on powers of 𝒢.
Proposition <ref> then implies that their affinizations satisfy the same relations in the algebraic Moore–Tachikawa category.
It therefore suffices to show that the affinization of 𝒢_ι is the identity morphism from 𝒢 to 𝒢 and, similarly, that the affinization of 𝒢_τ is the braiding on 𝒢.
This follows from Proposition <ref><ref> and the last part of Theorem <ref>, as we have a Hartogs weak equivalence between ℋ and the identity correspondence 𝒢 ⟵ 𝒢 ⟶ 𝒢, as 1-shifted Lagrangian correspondences from 𝒢 to 𝒢.
§.§ Hartogs slices
In analogy with Subsection <ref>, one might expect certain slices to induce Hartogs abelianizations of affine symplectic groupoids. This turns out to be true via the following algebro-geometric counterpart to Definition <ref>.
A Hartogs slice to an affine algebraic Lie groupoid 𝒢 ⇉ X is a smooth closed affine subvariety S ⊆ X that is a global slice to 𝒢|_U for some open subset U ⊆ X whose complement has codimension at least 2.
The Hartogs slice is admissible if the isotropy group 𝒢_x is abelian for all x ∈ S.
Let 𝒢 ⇉ X be an affine symplectic groupoid together with an admissible Hartogs slice S ⊆ X.
Then the restriction 𝒢|_S is a Hartogs abelianization of 𝒢.
The proof is essentially the same as that of Proposition <ref>.
Consider a Hartogs slice S ⊆ X to an affine symplectic groupoid 𝒢 ⇉ X. Note that S is a Poisson transversal in X. Since the source map 𝐬 : 𝒢 ⟶ X is anti-Poisson, 𝐬^{-1}(S) is a symplectic subvariety of 𝒢. The action of 𝒢 on 𝐬^{-1}(S) by left multiplication is Hamiltonian. We may thereby view 𝐬^{-1}(S) as a morphism 𝒢 ⟶ ⋆ in the algebraic Moore–Tachikawa category.
Let 𝒢 ⇉ X be an affine symplectic groupoid with an admissible Hartogs slice S ⊆ X.
Write F_𝒢 for the TQFT valued in the algebraic Moore–Tachikawa category induced from Theorem <ref> by the Hartogs abelianization 𝒢|_S.
Then F_𝒢 sends the cup cobordism ϵ : S^1 ⟶ ∅ to the Hamiltonian 𝒢-scheme 𝐬^{-1}(S).
By the proof of Proposition <ref>, the image of the cup cobordism under F_𝒢 is the affinization of the 1-shifted Lagrangian (S ⇉ S) ⟶ (𝒢 ⇉ X).
It then follows directly from (<ref>) that this affinization is the Hamiltonian 𝒢-scheme 𝐬^{-1}(S).
Consider a pair of integers (m, n) ≠ (0, 0).
One can describe the images of the genus-0 cobordisms from m circles to n circles under the TQFT F_𝒢 of Theorem <ref>. To this end, let 𝒜 ≔ 𝒢|_S.
As in Remark <ref>, the corresponding morphism from 𝒜^m to 𝒜^n is 𝒜^{m,n} ≔ {(a, b) ∈ 𝒜^{*m} × 𝒜^{*n} : a_1 ⋯ a_m = b_1 ⋯ b_n}, together with the projections to 𝒜^m and 𝒜^n.
Since the inclusion 𝒜 ⟶ 𝒢|_U is an essential equivalence, the corresponding 1-shifted Lagrangian correspondences on powers of 𝒢 are also given by 𝒜^{m,n} and the natural maps
𝒢^m ⟵ 𝒜^{m,n} ⟶ 𝒢^n.
Taking the affinizations of these morphisms, we get that F_𝒢 maps the genus-0 cobordism from m circles to n circles to the affine quotient
{(g, h) ∈ 𝒢^m × 𝒢^n : 𝐬(g_1) = ⋯ = 𝐬(g_m) = 𝐬(h_1) = ⋯ = 𝐬(h_n) ∈ S} ⫽ 𝒜^{m,n},
where 𝒜^{m,n} acts by (a, b) · (g, h) = (ga^{-1}, bh).
The action of 𝒢^m × (𝒢^-)^n giving it the structure of a Hamiltonian scheme descends from the action (a, b) · (g, h) = (ag, hb^{-1}).
§ THE SPECIAL CASE OF THE MOORE–TACHIKAWA CONJECTURE
This section is devoted to the implications of Main Theorem <ref> for constructing the Moore–Tachikawa TQFT. As with the previous section, we work exclusively over ℂ. We begin by recalling Moore and Tachikawa's category of holomorphic symplectic varieties with Hamiltonian actions. We then state the Moore–Tachikawa conjecture regarding the existence of a TQFT in that category. This conjecture turns out to have a natural cousin in the algebraic Moore–Tachikawa category, i.e. a conjecture about the existence of a TQFT there. We deduce the latter conjecture as an immediate corollary of Main Theorem <ref>.
§.§ The Moore–Tachikawa category
Moore and Tachikawa's conjectural TQFT would take values in the so-called category of holomorphic symplectic varieties with Hamiltonian actions <cit.>. We briefly recall its construction.
The objects are complex semisimple affine algebraic groups. A precise description of morphisms hinges on the exact meaning of holomorphic symplectic variety. In this context, one means an affine Poisson variety for which the Poisson structure is non-degenerate on an open dense subset of the smooth locus. A morphism from G to I is an isomorphism class of holomorphic symplectic varieties carrying algebraic G × I-actions, in such a way that the actions of G = G × {e} ⊆ G × I (resp. I = {e} × I ⊆ G × I) are Poisson (resp. anti-Poisson). To define morphism composition, suppose that [M] ∈ Hom(G, I) and [N] ∈ Hom(I, K) for complex semisimple affine algebraic groups G, I, and K. One defines [N] ∘ [M] ≔ [(M × N^-) ⫽_0 I] ∈ Hom(G, K). By setting G ⊗ I ≔ G × I and [M] ⊗ [N] = [M × N], one can realize this as a symmetric monoidal category.
We define a functor ℱ from this category to the algebraic Moore–Tachikawa category as follows. Given a complex semisimple affine algebraic group G, let ℱ(G) be the cotangent groupoid T^*G ⇉ 𝔤^*. Now suppose that [M] ∈ Hom(G, I) for complex semisimple affine algebraic groups G and I. It follows that M is an affine Hamiltonian T^*G × (T^*I)^--scheme. This fact allows us to let ℱ([M]) = [M], where the right-hand side is the isomorphism class of M as an affine Hamiltonian T^*G × (T^*I)^--scheme. It follows that ℱ includes the Moore–Tachikawa category as a symmetric monoidal subcategory of the algebraic Moore–Tachikawa category.
§.§ The Moore–Tachikawa conjecture
Let G be a connected complex semisimple linear algebraic group with Lie algebra 𝔤 and rank ℓ. The Killing form determines an isomorphism between the adjoint and coadjoint representations of G; we denote it by (·)^∨ : 𝔤 ⟶ 𝔤^*, x ↦ x^∨. Write G_x ⊆ G and G_ξ ⊆ G for the G-centralizers of x ∈ 𝔤 and ξ ∈ 𝔤^*, respectively. Their respective Lie algebras are denoted 𝔤_x and 𝔤_ξ. It is known that dim 𝔤_x ≥ ℓ (resp. dim 𝔤_ξ ≥ ℓ) for all x ∈ 𝔤 (resp. ξ ∈ 𝔤^*). The regular loci in 𝔤 and 𝔤^* are the G-invariant open subvarieties given by
𝔤_reg ≔ {x ∈ 𝔤 : dim 𝔤_x = ℓ} and 𝔤^*_reg ≔ {ξ ∈ 𝔤^* : dim 𝔤_ξ = ℓ},
respectively. Using the G-equivariance of (·)^∨ : 𝔤 ⟶ 𝔤^*, one deduces that 𝔤^*_reg = (𝔤_reg)^∨.
Let G × G act on G via (g_1, g_2) · h = g_1hg_2^{-1}. The cotangent lift of this action is a Hamiltonian G × G-variety structure on T^*G. If we use the left trivialization to identify T^*G with G × 𝔤^*, then this lifted action admits
(μ_1, μ_2) : T^*G ⟶ 𝔤^* ⊕ 𝔤^*, (g, ξ) ↦ (−Ad_g^*(ξ), ξ)
as a moment map.
Let (e, h, f) ∈ 𝔤^{×3} be an 𝔰𝔩_2-triple with e, h, f ∈ 𝔤_reg, i.e. a principal 𝔰𝔩_2-triple. This triple determines a Slodowy slice 𝒮 ≔ e + 𝔤_f. One knows that 𝒮^∨ = (e + 𝔤_f)^∨ ⊆ 𝔤^* is a Poisson transversal in 𝔤^* <cit.>. It follows that G × 𝒮^∨ = μ_2^{-1}(𝒮^∨) ⊆ T^*G is a symplectic subvariety. We also observe that the Hamiltonian action of G = G × {e} ⊆ G × G on T^*G preserves G × 𝒮^∨. In this way, G × 𝒮^∨ is a holomorphic symplectic variety with a Hamiltonian action of G. The Moore–Tachikawa conjecture is then stated as follows.
Let G be a connected semisimple affine algebraic group with Lie algebra 𝔤. Consider a principal 𝔰𝔩_2-triple (e, h, f) ∈ 𝔤^{×3}, and set 𝒮 ≔ e + 𝔤_f. There exists a TQFT η_G from Cob_2 to the Moore–Tachikawa category satisfying η_G(S^1) = G and η_G(ϵ) = G × 𝒮^∨, where ϵ denotes the cup cobordism.
We discuss the status of this conjecture in the next subsection.
§.§ The Moore–Tachikawa conjecture in the algebraic Moore–Tachikawa category
Let G be a connected complex semisimple affine algebraic group with Lie algebra 𝔤. On the other hand, recall the functor ℱ defined above. Composing the conjectural TQFT η_G of Conjecture <ref> with ℱ yields the following conjecture.
Let G be a connected semisimple affine algebraic group with Lie algebra 𝔤. Consider a principal 𝔰𝔩_2-triple (e, h, f) ∈ 𝔤^{×3}, and set 𝒮 ≔ e + 𝔤_f. There exists a TQFT η_{T^*G} from Cob_2 to the algebraic Moore–Tachikawa category satisfying η_{T^*G}(S^1) = (T^*G ⇉ 𝔤^*) and η_{T^*G}(ϵ) = G × 𝒮^∨, where ϵ denotes the cup cobordism.
With some effort, unpublished results of Ginzburg–Kazhdan <cit.> can be understood as implying Conjecture <ref>. Conjecture <ref> then reduces to whether η_{T^*G} takes values in the Moore–Tachikawa subcategory, i.e. whether certain algebras are finitely generated. In Lie type A, results of Braverman–Finkelberg–Nakajima <cit.> imply that the relevant algebras are indeed finitely generated.
§.§ Proof of the Moore–Tachikawa conjecture in the algebraic Moore–Tachikawa category
We now show Conjecture <ref> to be a straightforward consequence of our shifted symplecto-geometric approach. Retain the notation and objects used in the previous subsection.
The subvariety 𝒮^∨ ⊆ 𝔤^* is an admissible Hartogs slice to T^*G ⇉ 𝔤^*.
Write G · Y and G · Z for the G-saturations of subsets Y ⊆ 𝔤 and Z ⊆ 𝔤^*, respectively. Our task is to verify the following:
G · 𝒮^∨ is open and has a complement of codimension at least two in 𝔤^*;
the restriction of G · 𝒮^∨ ⟶ (G · 𝒮^∨)/G to 𝒮^∨ is a bijection;
𝔤^* = T_ξ(𝒮^∨) ⊕ T_ξ(G · ξ) for all ξ ∈ 𝒮^∨;
G_ξ is abelian for all ξ ∈ 𝒮^∨.
At the same time, recall that the isomorphism (·)^∨ : 𝔤 ⟶ 𝔤^* is G-equivariant. Parts (i), (ii), and (iii) may therefore be rephrased as follows:
G · 𝒮 is open and has a complement of codimension at least 2 in 𝔤;
the restriction of G · 𝒮 ⟶ (G · 𝒮)/G to 𝒮 is a bijection;
𝔤 = T_x𝒮 ⊕ T_x(G · x) for all x ∈ 𝒮;
G_x is abelian for all x ∈ 𝒮.
It is known that G · 𝒮 = 𝔤_reg, and that 𝔤 ∖ 𝔤_reg has codimension three in 𝔤 <cit.>. Parts (ii), (iii), and (iv) are then immediate and well-known implications of Kostant's work <cit.>.
Conjecture <ref> is true.
Recall that the target morphism 𝐭 : T^*G ⟶ 𝔤^* is given by 𝐭(g, ξ) = ξ. It follows that 𝐭^{-1}(𝒮^∨) = G × 𝒮^∨. Theorem <ref> and Lemma <ref> then imply the desired result.
There is a multiplicative counterpart to Theorem <ref>. To obtain it, one replaces T^*G and 𝒮^∨ ⊆ 𝔤^* with the quasi-Hamiltonian G-variety D(G) ≔ G × G and a Steinberg slice in G, respectively. This perspective is explored in the second named author's joint work with Bălibanu <cit.>.
§ EXAMPLES INVOLVING SLODOWY SLICES
As with the previous two sections, we work exclusively over ℂ. It is tempting to wonder if Main Theorem <ref> gives examples of TQFTs that are fundamentally different from those conjectured by Moore–Tachikawa. Perhaps reassuringly, such examples exist. The current section is devoted to examples of this sort that involve Slodowy slices. We begin by addressing some Poisson-geometric properties of Slodowy slices. Particular attention is paid to the degeneracy locus of a Slodowy slice, i.e. the locus on which the Poisson bivector has non-maximal rank. The discussion subsequently turns to specific Slodowy slices. We restrict T^*SL_n ⇉ 𝔰𝔩_n^* to a Slodowy slice to the minimal nilpotent orbit in 𝔰𝔩_n^*. This restriction is shown to be Hartogs abelianizable. By Main Theorem <ref>, it determines a TQFT.
§.§ Some general comments
Recall that the rank of a smooth Poisson variety X is the supremum of the dimensions of its symplectic leaves. Write rk X for this quantity, and X_reg ⊂ X for the union of the rk X-dimensional symplectic leaves of X. It turns out that X∖ X_reg is the vanishing locus of π^∧ (rk X)/2, where π is the Poisson bivector field. This implies that X_reg is an open subvariety of X. Let us call a symplectic leaf of X regular if it is contained in X_reg.
§.§ Slodowy slices
Let G be a connected semisimple affine algebraic group with Lie algebra 𝔤 and rank ℓ. Suppose that (e,h,f)∈𝔤^× 3 is an 𝔰𝔩_2-triple with Slodowy slice 𝒮 ≔ e+𝔤_f ⊂ 𝔤. Recall the definitions of 𝔤_reg and 𝔤^*_reg from Subsection <ref>. Note that the definition of 𝔤^*_reg coincides with the Poisson-theoretic one, obtained by setting X=𝔤^* in Subsection <ref>.
Let us call an adjoint (resp. coadjoint) orbit of G regular if it is contained in 𝔤_reg (resp. 𝔤^*_reg).
If 𝒪 ⊂ 𝔤^* is a regular coadjoint orbit, then 𝒪∩𝒮^∨≠∅.
Consider the adjoint quotient π:𝔤^*⟶Spec(ℂ[𝔤^*]^G) ≕ 𝔠. The closure 𝒪̄ ⊂ 𝔤^* is a fiber of π <cit.>. On the other hand, the restriction π|_𝒮^∨:𝒮^∨⟶𝔠 is known to be faithfully flat with irreducible fibers of dimension dim𝒮-ℓ <cit.>. It follows that 𝒮^∨∩𝒪̄ is a non-empty, (dim𝒮-ℓ)-dimensional fiber of π|_𝒮^∨. We also know that 𝒪̄∖𝒪 is a union of finitely many coadjoint orbits <cit.>, all of dimension strictly less than dim𝒪. The intersection of each such orbit with 𝒮^∨ must therefore have dimension strictly less than dim𝒪+dim𝒮-dim𝔤=dim𝒮-ℓ. Since 𝒮^∨∩𝒪̄ is non-empty and (dim𝒮-ℓ)-dimensional, we must have 𝒮^∨∩𝒪≠∅.
Recall the G-module isomorphism (·)^∨:𝔤⟶𝔤^* discussed in Subsection <ref>. The subvariety 𝒮^∨=(e+𝔤_f)^∨ is a Poisson transversal in 𝔤^* <cit.>. As such, 𝒮^∨ is a Poisson variety. We may therefore consider the number rank 𝒮^∨ and the locus (𝒮^∨)_reg ⊂ 𝒮^∨.
We have rank 𝒮^∨=dim𝒮-ℓ and (𝒮^∨)_reg=𝒮^∨∩𝔤^*_reg.
Suppose that ξ∈𝒮^∨. Write L_ξ ⊂ 𝒮^∨ for the symplectic leaf of 𝒮^∨ containing ξ. It follows that dim T_ξL_ξ= dim(G·ξ)+dim𝒮-dim𝔤. This implies that dim T_ξL_ξ≥dim T_ηL_η for all η∈𝒮^∨ if and only if dim(G·ξ)≥dim(G·η) for all η∈𝒮^∨. A rephrased version is that ξ∈(𝒮^∨)_reg if and only if dim𝔤_ξ≤dim𝔤_η for all η∈𝒮^∨. Lemma <ref> tells us that the latter condition holds if and only if ξ∈𝔤^*_reg. We conclude that (𝒮^∨)_reg=𝒮^∨∩𝔤^*_reg.
Now suppose that ξ∈ (𝒮^∨)_reg=𝒮^∨∩𝔤^*_reg. We have dim T_ξL_ξ= dim(G·ξ)+dim𝒮-dim𝔤=(dim𝔤-ℓ)+dim𝒮-dim𝔤=dim𝒮-ℓ. It follows that rank 𝒮^∨=dim𝒮-ℓ.
The following statements are true.
The regular symplectic leaves of 𝒮^∨ are the intersections of 𝒮^∨ with the regular coadjoint orbits.
The codimension of 𝒮^∨∖(𝒮^∨)_reg in 𝒮^∨ is at least three.
We begin by proving (i). Consider the adjoint quotient π:𝔤^*⟶Spec(ℂ[𝔤^*]^G) ≕ 𝔠. The restriction π|_𝒮^∨:𝒮^∨⟶𝔠 is known to be faithfully flat with irreducible fibers of dimension dim𝒮-ℓ <cit.>. Since each fiber of π is a finite union of coadjoint orbits <cit.>, each fiber of π|_𝒮^∨ must be a finite union of symplectic leaves of 𝒮^∨. It also follows that each fiber of π|_𝒮^∨ is the closure of a (dim𝒮-ℓ)-dimensional symplectic leaf of 𝒮^∨.
Suppose that α∈𝔠. There exists a unique regular coadjoint orbit 𝒪_α ⊂ 𝔤^*_reg with the property that 𝒪̄_α=π^-1(α) <cit.>. Each irreducible component of 𝒮^∨∩𝒪_α is a (dim𝒮-ℓ)-dimensional symplectic leaf of 𝒮^∨ contained in 𝒮^∨∩π^-1(α). Since 𝒮^∨∩π^-1(α) is the closure of a (dim𝒮-ℓ)-dimensional symplectic leaf of 𝒮^∨, this leaf must be 𝒮^∨∩𝒪_α. We also know that α↦𝒪_α defines a bijection from 𝔠 to the set of regular coadjoint orbits <cit.>. It follows that each regular coadjoint orbit intersects 𝒮^∨ in a symplectic leaf. It is also clear that the regular symplectic leaves of 𝒮^∨ are the irreducible components of the intersections of 𝒮^∨ with the regular coadjoint orbits. These last two sentences combine to imply (i).
We now verify (ii). Recall that a coadjoint orbit 𝒪 ⊂ 𝔤^* is called semisimple if it corresponds to a semisimple adjoint orbit under (·)^∨:𝔤⟶𝔤^*. Consider the locus 𝔠^∘ ≔ {α∈𝔠:𝒪_α is semisimple} and its complement 𝔡 ≔ 𝔠∖𝔠^∘. One knows that 𝒪_α is closed for all α∈𝔠^∘. Note also that (𝒮^∨∖(𝒮^∨)_reg)∩π^-1(α)=𝒮^∨∩(𝒪̄_α∖𝒪_α) for all α∈𝔠, as follows from the second paragraph of this proof. We conclude that
𝒮^∨∖(𝒮^∨)_reg =⋃_α∈𝔠((𝒮^∨∖(𝒮^∨)_reg)∩π^-1(α))
=⋃_α∈𝔠(𝒮^∨∩(𝒪̄_α∖𝒪_α))
=⋃_α∈𝔡(𝒮^∨∩(𝒪̄_α∖𝒪_α))
=⋃_α∈𝔡((𝒮^∨∖(𝒮^∨)_reg)∩π^-1(α)).
Let us also observe that (𝒮^∨∖(𝒮^∨)_reg)∩π^-1(α) has codimension at least two in 𝒮^∨∩π^-1(α) for all α∈𝔠; this is implied by the first paragraph of the proof. The desired result now follows from 𝔡 having codimension one in 𝔠.
§.§ Slodowy slices to the minimal nilpotent orbit in 𝔰𝔩_n
Let us specialize to the case 𝔤=𝔰𝔩_n. Consider the 𝔰𝔩_2-triple (e,h,f)∈𝔰𝔩_n^× 3 given by
e=[ 0 1 0 ⋯ 0; 0 0 0 ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ⋯ 0 ], h=[ 1 0 0 ⋯ 0; 0 -1 0 ⋯ 0; 0 0 0 ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ⋯ 0 ], and f=[ 0 0 ⋯ 0; 1 0 ⋯ 0; 0 0 ⋯ 0; ⋮ ⋮ ⋱ ⋮; 0 0 ⋯ 0 ].
Let us also consider the closed subvariety
𝒯 ≔ {[ 0 1 0 0 ⋯ 0; a_n-2 0 1 0 ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋱ ⋮; a_2 0 0 0 ⋱ 0; a_1 0 0 0 ⋯ 1; a_0 0 0 0 ⋯ 0 ]:a_0,…,a_n-2∈ℂ}
of 𝔰𝔩_n; it consists of the transposes of the trace-free n× n companion matrices. A straightforward exercise reveals that 𝒯 ⊂ 𝒮 ≔ e+(𝔰𝔩_n)_f. It follows that 𝒯^∨ ⊂ 𝒮^∨.
The subvariety 𝒯^∨ is a Hartogs slice to 𝒮^∨.
One may use <cit.> to conclude that 𝒯⊂(𝔰𝔩_n)_reg. In other words, 𝒯 ⊂ 𝒮∩(𝔰𝔩_n)_reg. It follows that 𝒯^∨ ⊂ 𝒮^∨∩(𝔰𝔩_n^*)_reg=(𝒮^∨)_reg, where the last instance of equality comes from Lemma <ref>.
It remains only to prove that each regular symplectic leaf L⊂𝒮^∨ intersects 𝒯^∨ in a single point. Proposition <ref>(i) makes this the task of proving that each regular coadjoint orbit intersects 𝒯^∨ in a single point. This is equivalent to each regular adjoint orbit intersecting 𝒯 in a single point. In other words, it suffices to prove that 𝒯 is a section of the adjoint quotient π:𝔰𝔩_n⟶Spec(ℂ[𝔰𝔩_n]^SL_n).
Write det(tI_n-x)=t^n+f_n-2(x)t^n-2+⋯+f_1(x)t+f_0(x) for the characteristic polynomial of x∈𝔰𝔩_n. The polynomials f_0,…,f_n-2 are algebraically independent generators of ℂ[𝔰𝔩_n]^SL_n. The adjoint quotient of 𝔰𝔩_n is thereby the map f=(f_0,…,f_n-2):𝔰𝔩_n⟶ℂ^n-1. It is also straightforward to check that f([ 0 1 0 0 ⋯ 0; a_n-2 0 1 0 ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋱ ⋮; a_2 0 0 0 ⋱ 0; a_1 0 0 0 ⋯ 1; a_0 0 0 0 ⋯ 0 ])=(-a_0,…,-a_n-2) for all a_0,…,a_n-2∈ℂ. This makes it clear that the restriction of the adjoint quotient π:𝔰𝔩_n⟶𝔠 to 𝒯 is an isomorphism of varieties. The proof is therefore complete.
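This last computation can be checked mechanically. The following sympy sketch, given purely as an illustration (variable names are ad hoc), verifies the claim for n = 4:

import sympy as sp

n = 4
t = sp.symbols('t')
a = sp.symbols(f'a0:{n - 1}')        # a_0, ..., a_{n-2}

# Transpose of a trace-free companion matrix: ones on the superdiagonal,
# a_{n-2}, ..., a_1, a_0 down the first column.
x = sp.zeros(n, n)
for i in range(n - 1):
    x[i, i + 1] = 1
for i in range(1, n):
    x[i, 0] = a[n - 1 - i]

char_poly = sp.expand((t * sp.eye(n) - x).det())
assert char_poly == sp.expand(t**n - sum(a[k] * t**k for k in range(n - 1)))
print(char_poly)  # t**4 - a2*t**2 - a1*t - a0, i.e. f(x) = (-a_0, ..., -a_{n-2})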
One symplectic groupoid integrating 𝒮^∨ is the restriction of T^*SL_n ⇉ 𝔰𝔩_n^* to 𝒮^∨. It is also clear that 𝒯^∨⊂(𝔰𝔩_n)^*_reg <cit.>, so that (𝔰𝔩_n)_ξ is abelian for all ξ∈𝒯^∨. These considerations combine with Proposition <ref> to imply that 𝒯^∨ is an admissible Hartogs slice to 𝒮^∨. In light of Theorem <ref> and Proposition <ref>, 𝒮^∨ determines a TQFT.
§ EXAMPLES ARISING FROM NON-REDUCTIVE GROUPS
We continue to work exclusively over ℂ. In this section, we begin by defining the notion of a Moore–Tachikawa group G. The definition provides sufficient conditions for T^*G ⇉ 𝔤^* to be Hartogs abelianizable, and includes all reductive groups as examples. In this way, Moore–Tachikawa groups give rise to TQFTs. In order to obtain TQFTs beyond those conjectured by Moore–Tachikawa, we must give examples of non-reductive Moore–Tachikawa groups. Most of this section is devoted to the construction of such groups.
§.§ Moore–Tachikawa groups
Let G be an affine algebraic group with Lie algebra 𝔤. Given ξ∈𝔤^*, let G_ξ ⊂ G and 𝔤_ξ ⊂ 𝔤 denote the centralizers of ξ under the coadjoint representations of G and 𝔤, respectively. We define
𝔤^*_reg ≔ {ξ∈𝔤^*:dim𝔤_ξ≤dim𝔤_η for all η∈𝔤^*}.
A Moore–Tachikawa group is an affine algebraic group G with the following properties:
𝔤^*_reg has a complement of codimension at least two in 𝔤^*;
G_ξ is abelian for all ξ∈𝔤^*_reg;
the pullback of the cotangent groupoid T^*G ⇉ 𝔤^* to 𝔤^*_reg is abelianizable, i.e. Morita equivalent to an abelian symplectic groupoid.
A sufficient condition for (ii) and (iii) to hold is the existence of a smooth, closed subvariety S ⊂ 𝔤^* with the following properties:
* S ⊂ 𝔤^*_reg;
* S intersects every coadjoint orbit in 𝔤^*_reg transversely in a singleton;
* G_ξ is abelian for all ξ∈ S.
One shows this condition to be sufficient in a manner analogous to the proof of Proposition <ref>.
Definition <ref> provides sufficient conditions for T^*G ⇉ 𝔤^* to be Hartogs abelianizable. By combining this observation with Theorem <ref>, one concludes that every Moore–Tachikawa group induces a TQFT. On the other hand, results of Kostant <cit.> imply that every reductive group is Moore–Tachikawa. The TQFTs induced by reductive groups are essentially those conjectured by Moore–Tachikawa. This motivates us to find examples of non-reductive Moore–Tachikawa groups. We devote the next two subsections to this task.
§.§ The semidirect product of SL_2 and its standard representation
Let G be an affine algebraic group and ρ : G⟶GL(V) a finite-dimensional, algebraic representation.
Consider the group semidirect product H ≔ G⋉_ρ V, i.e. we consider V as a group with addition, so that multiplication in H is given by
(g_1, v_1) · (g_2, v_2) = (g_1 g_2, v_1 + ρ_g_1(v_2)).
The Lie algebra of H is then 𝔤⋉_ρ V, with bracket
[(x_1, v_1), (x_2, v_2)] = ([x_1, x_2], ρ_x_1(v_2) - ρ_x_2(v_1)).
If ρ is the standard representation of SL_2, then the non-reductive group H ≔ SL_2⋉_ρℂ^2 is Moore–Tachikawa. In particular, H induces a TQFT.
Let V = ℂ^2 be the standard representation of SL_2.
We identify V^* with V = ℂ^2 in an SL(2, ℂ)-equivariant way via the standard symplectic form, i.e.
ℂ^2 ≅ V^*, (η_1, η_2) ↦ω((η_1, η_2), ·),
where ω((η_1, η_2), (v_1, v_2)) = η_1 v_2 - η_2 v_1.
We also identify 𝔰𝔩_2^* with 𝔰𝔩_2 via the invariant bilinear form (x, y) ↦tr(xy).
A straightforward computation then reveals that the coadjoint representation of 𝔥 ≔ 𝔰𝔩_2⋉_ρℂ^2 on 𝔥^* ≅𝔰𝔩_2 ×ℂ^2 is
ad_(x, u)^*(ξ, η) =
(
[ (x_2 ξ_3 - x_3 ξ_2) - 1/2(η_1 u_2 + η_2 u_1)    2 (x_1 ξ_2 - x_2 ξ_1) + η_1 u_1; 2 (x_3 ξ_1 - x_1 ξ_3) - η_2 u_2    (x_3 ξ_2 - x_2 ξ_3) + 1/2(η_1 u_2 + η_2 u_1) ],
[ x_1 η_1 + x_2 η_2; x_3 η_1 - x_1 η_2 ]),
where x = ([ x_1 x_2; x_3 -x_1 ]), u = ([ u_1; u_2 ]), ξ = ([ ξ_1 ξ_2; ξ_3 -ξ_1 ]), and η = ([ η_1; η_2 ]). If η ≠ 0, then ad_(x, u)^*(ξ, η) = 0 if and only if there exists a constant c ∈ℂ such that
x = c[ η_1 η_2  -η_1^2; η_2^2  -η_1 η_2 ] and
u = -2c[ η_1 ξ_1 + η_2 ξ_2; η_1 ξ_3 - η_2 ξ_1 ].
It follows that the centralizer 𝔥_(ξ,η) is one-dimensional in this case. If η = 0, then ad_(0, u)^*(ξ, 0) = 0 for all u∈ℂ^2. The centralizer 𝔥_(ξ,η) must therefore have dimension at least two in this case. We conclude that
𝔥^*_reg = {(ξ, η) ∈𝔰𝔩(2, ℂ) ×ℂ^2 : η ≠ 0}.
This locus clearly has a complement of codimension two in 𝔥^*.
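Since the condition ad_(x,u)^*(ξ,η)=0 is linear in (x,u) for fixed (ξ,η), the computation above admits a quick machine check. A sympy sketch, included only as an illustration:

import sympy as sp

x1, x2, x3, u1, u2 = sp.symbols('x1 x2 x3 u1 u2')
xi1, xi2, xi3, eta1, eta2 = sp.symbols('xi1 xi2 xi3 eta1 eta2')

# Components of ad*_{(x,u)}(xi, eta) exactly as displayed above.
eqs = [
    (x2*xi3 - x3*xi2) - sp.Rational(1, 2)*(eta1*u2 + eta2*u1),  # (1,1) entry
    2*(x1*xi2 - x2*xi1) + eta1*u1,                              # (1,2) entry
    2*(x3*xi1 - x1*xi3) - eta2*u2,                              # (2,1) entry
    x1*eta1 + x2*eta2,                                          # vector, 1st entry
    x3*eta1 - x1*eta2,                                          # vector, 2nd entry
]
A, _ = sp.linear_eq_to_matrix(eqs, [x1, x2, x3, u1, u2])
null = A.subs({xi1: 1, xi2: 2, xi3: 3, eta1: 1, eta2: -1}).nullspace()
print(len(null))  # 1: the centralizer is one-dimensional when eta != 0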
It remains to verify (ii) and (iii) in Definition <ref>. We accomplish this by verifying the sufficient condition mentioned immediately after that definition. To this end, let S ⊂ 𝔥^*_reg be the image of
σ : ℂ⟶𝔥^*,
z ↦ (
[ 0 0; z 0 ]
,
[ 1; 0 ]).
Suppose that (ξ, η) ∈𝔥^*_reg.
Since η ≠ 0, there exists g ∈SL_2 such that ρ_g^*(η) = (1, 0).
We may therefore assume that η = (1, 0).
Note that the stabilizer of (1, 0) is the set of g ∈SL_2 of the form
g = [ 1 a; 0 1 ], a∈ℂ.
For such g and η = (1, 0), we have
Ad_(g, u)^*(ξ, η) =
(
[ ξ_1 + a ξ_3 - u_2/2    -2a ξ_1 + ξ_2 - a^2 ξ_3 + u_1; ξ_3    -ξ_1 - a ξ_3 + u_2/2 ],
[ 1; 0 ]).
It follows that there is a unique u = (u_1, u_2) such that Ad_(g, u)^*(ξ, η) lies in S.
In other words, S intersects every regular coadjoint orbit exactly once.
Moreover, the intersection is transverse.
Equation (<ref>) also shows the following: for all z ∈ℂ, the stabilizer of σ(z) ∈𝔥^* is the set of elements of H of the form (([ 1 a; 0 1 ]),([ a^2z; 2az ])), a ∈ℂ; this is abelian.
Our proof is therefore complete.
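A quick symbolic check, again included only as an illustration, confirms that this family is closed under the semidirect-product multiplication recalled above and is abelian:

import sympy as sp

a, b, z = sp.symbols('a b z')

def elem(a):  # stabilizer element of sigma(z), as computed above
    return sp.Matrix([[1, a], [0, 1]]), sp.Matrix([a**2 * z, 2*a*z])

def mult(p, q):  # (g1, v1)(g2, v2) = (g1 g2, v1 + g1 v2)
    return p[0] * q[0], sp.expand(p[1] + p[0] * q[1])

g, v = mult(elem(a), elem(b))
g_target, v_target = elem(a + b)
assert g == g_target and v == sp.expand(v_target)
print("closed under multiplication and abelian: (a, b) -> a + b")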
§.§ The centralizer of a minimal nilpotent element in SL_3
One may weaken Definition <ref> by replacing 𝔤^*_reg with an arbitrary G-invariant open subset U ⊂ 𝔤^*. This open subset would be required to satisfy the following properties:
U has a complement of codimension at least two in 𝔤^*;
G_ξ is abelian for all ξ∈ U;
the pullback of the cotangent groupoid T^*G ⇉ 𝔤^* to U is abelianizable, i.e. Morita equivalent to an abelian symplectic groupoid.
A sufficient condition for (ii) and (iii) to hold would be the existence of a smooth, closed subvariety S ⊂ 𝔤^* with the following properties:
* S ⊂ U;
* S intersects every coadjoint orbit in U transversely in a singleton;
* G_ξ is abelian for all ξ∈ S.
Suppose that an affine algebraic group G admits a G-invariant open subset U ⊂ 𝔤^* satisfying (i)–(iii). The cotangent groupoid T^*G ⇉ 𝔤^* is then clearly Hartogs abelianizable. As such, it determines a TQFT.
The term Moore–Tachikawa group could have been reserved for this generalization of Definition <ref>, i.e. for an affine algebraic group G admitting a G-invariant open subset U ⊂ 𝔤^* satisfying (i)–(iii) above. One of the reasons for preserving Definition <ref> is the bridge it provides to pure Lie theory. A Lie theorist could find meaningful examples satisfying Definition <ref> without reading our manuscript in detail.
If G ⊂SL_3 is the stabilizer of
e ≔ [ 0 0 1; 0 0 0; 0 0 0 ]∈𝔰𝔩(3, ℂ),
then G satisfies Conditions (i)–(iii) above. It thereby determines a TQFT.
The group G can be written explicitly as
G = {[ r a c; 0 r^-2 b; 0 0 r ] : (r, a, b, c) ∈ℂ^××ℂ^3},
with Lie algebra
𝔤 = {[ t x z; 0 -2t y; 0 0 t ] : (t, x, y, z) ∈ℂ^4 }.
Identify 𝔤^* with ℂ^4 via the coordinates dual to those in (<ref>).
For g = (r, a, b, c) ∈ G and (s, u, v, w) ∈𝔤^*, we have
Ad_g^*(s, u, v, w) =
(
s + 3 (a/r u - b r^2 v + abr w),
u/r^3 + wb/r,
v r^3 - war^2,
w
).
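The displayed action can be re-derived mechanically. The following sympy sketch, an illustration with ad hoc names, pairs 𝔤^* with 𝔤 via the coordinates (s, u, v, w) dual to (t, x, y, z) and evaluates ξ(g^-1ζ g) on the basis vectors:

import sympy as sp

r, a, b, c = sp.symbols('r a b c', nonzero=True)
s, u, v, w = sp.symbols('s u v w')

g = sp.Matrix([[r, a, c], [0, r**-2, b], [0, 0, r]])
basis = [  # directions t, x, y, z of the Lie algebra displayed above
    sp.Matrix([[1, 0, 0], [0, -2, 0], [0, 0, 1]]),
    sp.Matrix([[0, 1, 0], [0, 0, 0], [0, 0, 0]]),
    sp.Matrix([[0, 0, 0], [0, 0, 1], [0, 0, 0]]),
    sp.Matrix([[0, 0, 1], [0, 0, 0], [0, 0, 0]]),
]

def pair(zeta):  # <(s,u,v,w), zeta> in the dual coordinates
    return s*zeta[0, 0] + u*zeta[0, 1] + v*zeta[1, 2] + w*zeta[0, 2]

coad = [sp.simplify(pair(g.inv() * B * g)) for B in basis]
expected = [s + 3*(a*u/r - b*r**2*v + a*b*r*w), u/r**3 + w*b/r, v*r**3 - w*a*r**2, w]
assert all(sp.simplify(got - exp) == 0 for got, exp in zip(coad, expected))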
One then sees that we have two transverse slices, given by
σ_1 : ℂ^2⟶𝔤^*, (v, w) ↦ (0, 1, v, w),
σ_2 : ℂ^2⟶𝔤^*, (u, w) ↦ (0, u, 1, w).
They are slices for the sets of regular elements with (u, w) ≠ (0, 0) and (v, w) ≠ (0, 0), respectively, which both have complements of codimension two.
If w ≠ 0, the stabilizer of σ_1(v, w) = (0, 1, v, w) ∈𝔤^* is the set of g = (r, a, b, c) such that a = (v/w)(r - 1/r^2) and b = (1/w)(r - 1/r^2), which is abelian.
If w = 0, the stabilizer is the set of g = (r, a, b, c) such that r^3 = 1 and a = bv, which is also abelian.
It follows that σ_1 is an admissible Hartogs slice.
A similar argument shows that σ_2 has the same property.
More generally, let 𝔤 be a simple Lie algebra of type A or C. Consider the centralizers G_e ⊂ G and 𝔤_e ⊂ 𝔤 of a nilpotent element e∈𝔤 under the adjoint representations of G and 𝔤, respectively. By <cit.>, (𝔤_e)^*_reg has a complement of codimension at least two in (𝔤_e)^*. It is therefore reasonable to expect G_e to satisfy Conditions (i)–(iii) from the beginning of this subsection, and for a more general version of Proposition <ref> to hold.
|
http://arxiv.org/abs/2409.03259v1 | 20240905053621 | Transmit Beamforming Design for ISAC with Stacked Intelligent Metasurfaces | [
"Shunyu Li",
"Fan Zhang",
"Tianqi Mao",
"Rui Na",
"Zhaocheng Wang",
"George K. Karagiannidis"
] | eess.SP | [
"eess.SP"
] |
Transmit Beamforming Design for ISAC with Stacked Intelligent Metasurfaces
Shunyu Li, Student Member, IEEE,
Fan Zhang, Student Member, IEEE,
Tianqi Mao, Member, IEEE, Rui Na, Zhaocheng Wang, Fellow, IEEE, and George K. Karagiannidis, Fellow, IEEE
This work was supported by National Natural Science Foundation of China under Grant No. 62088101. (Corresponding authors: Tianqi Mao, Rui Na.)
S. Li, T. Mao and R. Na are with State Key Laboratory of CNS/ATM, Beijing Institute of Technology, Beijing 100081, China. T. Mao is also with Beijing Institute of Technology (Zhuhai), Zhuhai 519088, China. R. Na is also with Yangtze Delta Region Academy of Beijing Institute of Technology (Jiaxing), Jiaxing 314019, China (e-mails: [email protected], [email protected], [email protected]).
F. Zhang and Z. Wang are with Department of Electronic Engineering, Tsinghua University, Beijing 100084, China (e-mails: [email protected], [email protected]).
G. K. Karagiannidis is with Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Greece and also with Artificial Intelligence & Cyber Systems Research Center, Lebanese American University (LAU), Lebanon ([email protected]).
September 9, 2024
=====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
This paper proposes a transmit beamforming strategy for the integrated sensing and communication (ISAC) systems enabled by the novel stacked intelligent metasurface (SIM) architecture, where the base station (BS) simultaneously performs downlink communication and radar target detection via different beams.
To ensure superior dual-function performance simultaneously, we design the multi-layer cascading beamformer by maximizing the sum rate of the users while optimally shaping the normalized beam pattern for detection.
A dual-normalized differential gradient descent (D^3) algorithm is further proposed to solve the resulting non-convex multi-objective problem (MOP), where gradient differences and dual normalization are employed to ensure a fair trade-off between communication and sensing objectives.
Numerical results demonstrate the superiority of the proposed beamforming design in terms of balancing communication and sensing performance.
Stacked intelligent metasurfaces (SIM), reconfigurable intelligent surface (RIS), integrated sensing and communication (ISAC), beamforming.
§ INTRODUCTION
Integrated Sensing and Communication (ISAC) is considered one of the most promising technologies for next-generation wireless networks <cit.>. Such philosophy aims at realizing the convergence of communication and sensing systems by sharing hardware platforms, spectrum resources, and even waveforms, which can alleviate spectrum congestion and hardware costs <cit.>. Therefore, this integrated implementation can support many emerging applications such as augmented reality, autonomous driving, and the Industrial Internet of Everything (IoE), where communication and sensing devices with compact deployment are highly demanded <cit.>.
To further enhance the communication and sensing capabilities, the ISAC framework tends to incorporate multi-antenna technology for the additional spatial degrees of freedom (DoF) provided by large-scale antenna arrays <cit.>.
Despite the fascinating capabilities of digital/hybrid beamforming, classical array-based ISAC systems inevitably suffer from excessive power consumption and hardware cost, resulting from the numerous radio frequency (RF) chains or complex feeding networks with phase shifters and microstrip lines <cit.>.
To solve this problem, the programmable metasurface, also known as reconfigurable intelligent surface (RIS), can be implemented at the transceiver by replacing the classical phased array antenna, where the RF chains and feeding networks are no longer required <cit.>.
In <cit.>, a reconfigurable distributed antenna and reflecting surface aided ISAC system was proposed, combining the distributed gain of the distributed antenna system with the passive beamforming gain of RIS to significantly improve system performance while reducing hardware cost.
Besides, <cit.> developed a RIS-enabled integrated sensing, communication, and computing system where RF chain-free transmissions were realized by using RIS as passive information carriers and modulators.
The aforementioned literature mainly focused on the single-layer metasurface structure, whose ability to control electromagnetic waves may be limited.
To overcome this limitation, the stacked intelligent metasurface (SIM) has recently been proposed as a promising approach that cascades multiple layers of transmissive metasurfaces. This SIM technology can further enhance the DoF in manipulating the electromagnetic environment and exhibits desirable communication performance in terms of multi-user beamforming <cit.>.
Although SIM has been deployed between the base station and the users/targets for ISAC performance enhancement in <cit.>, research on ISAC system design with a transmit SIM architecture is still in its infancy.
In this context, we propose a transmit beamforming design for ISAC systems equipped with SIM at the BS to perform downlink communication and sensing simultaneously.
Specifically, we consider dedicated streams for communication and sensing, respectively. To achieve the desired dual-function performance tradeoff, we optimize the phase shifts of the SIM meta-atoms to form the corresponding sensing beam patterns toward the desired target in the 2D angular domain, while maximizing the total communication data rate.
Furthermore, we propose the Dual-normalized Differential Gradient Descent (D^3) algorithm to tackle this non-convex multi-objective problem (MOP) with coupled variables induced by the cascaded structure of SIM.
The proposed D^3 algorithm uses gradient differences to balance the communication and sensing objectives, while incorporating two levels of normalization. The first normalization equalizes the magnitudes of the communication and sensing gradients, ensuring effective gradient differences, while the second normalization balances the phase shifts between SIM elements for convergence.
Simulation results validate the superiority of our proposed transmit beamforming design and elucidate the impact of SIM parameters on dual-function performance, providing useful guidelines for SIM-enabled ISAC implementations.
§ SYSTEM MODEL FOR SIM-ENABLED ISAC
As shown in Fig. <ref>, we consider a SIM-enabled ISAC system serving N_C single-antenna users and N_S sensing targets.
The transmitter (BS) consists of a SIM planar array illuminated by a uniform linear array (ULA) of N_BS antennas.
For simplicity and to focus on the SIM-enabled beamforming capabilities, we assume uniform power distribution over the ULA at the base station.
Apart from the N_C parallel communication data streams, we assume N_S independent streams for sensing to extend the degrees of freedom (DoF) for the transmit beamforming design, i.e., N_BS= N_C + N_S, where the numbers of communication and sensing signal streams are equal to the numbers of users and targets, respectively <cit.>.
For clarity, the SIM is assumed to consist of L layers of metasurfaces of size M = M_r× M_c, where M_r and M_c denote the number of meta-atoms in the rows and columns of each metasurface layer, respectively.
According to Rayleigh-Sommerfeld diffraction theory, which is widely employed in SIM-related literature <cit.>, the inter-layer channel coefficient w_m, m^'^l from the m^'-th meta-atom on the (l-1)-th metasurface layer, i.e., Layer (l-1), to the m-th meta-atom on Layer l, can be expressed by
w_m, m^'^l=A_t cosχ_m, m^'^l/r_m, m^'^l(1/2 π r_m, m^'^l-j 1/λ) e^j 2 πr_m, m^'^l/λ,
where λ is the wavelength, A_t represents the size of each meta-atom, r_m, m^'^l denotes the transmission distance, while χ_m, m^'^l specifies the angle between the propagation direction and the normal direction of the Layer (l-1).
By applying (<ref>) to each meta-atom, we obtain the inter-layer diffraction matrix, denoted as 𝐖^l ∈ℂ^M × M for l =2,3,⋯,L.
Besides, we have 𝐖^1 ∈ℂ^M × N_BS for l=1, which represents the diffraction matrix from the feeding antenna array to the Layer 1 of the SIM.
Additionally, the diagonal phase shift matrix Φ^l of the Layer l of the SIM can be formulated as
Φ^l=diag(e^j θ_1^l, e^j θ_2^l, ⋯,e^j θ_m^l,⋯, e^j θ_M^l),
where e^j θ_m^l denotes the phase shift applied by the m-th meta-atom on Layer l with θ_m^l ∈[0,2 π), for l =1,2,⋯,L and m =1,2,⋯,M.
Afterwards, the SIM-enabled beamforming matrix 𝐅_SIM∈ℂ^M × N_BS can be expressed as
𝐅_SIM=Φ^L 𝐖^L Φ^L-1𝐖^L-1⋯Φ^1 𝐖^1 .
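As an illustrative sketch (not the implementation used for the simulations), this cascade can be coded directly in numpy; the element positions, wavelength, and meta-atom area below are placeholders to be supplied by the user:

import numpy as np

def diffraction_matrix(src_xyz, dst_xyz, lam, A_t):
    """W[m, m'] built from the Rayleigh-Sommerfeld coefficient above;
    src_xyz, dst_xyz are (N, 3) element positions, layers stacked along +z."""
    d = dst_xyz[:, None, :] - src_xyz[None, :, :]
    r = np.linalg.norm(d, axis=-1)
    cos_chi = d[..., 2] / r
    return (A_t * cos_chi / r) * (1.0 / (2 * np.pi * r) - 1j / lam) \
        * np.exp(2j * np.pi * r / lam)

def sim_beamformer(feed_xyz, layers_xyz, thetas, lam, A_t):
    """F_SIM = Phi^L W^L ... Phi^1 W^1 for phase shifts thetas of shape (L, M)."""
    F = np.eye(feed_xyz.shape[0], dtype=complex)
    src = feed_xyz
    for theta_l, dst in zip(thetas, layers_xyz):
        F = np.diag(np.exp(1j * theta_l)) @ diffraction_matrix(src, dst, lam, A_t) @ F
        src = dst
    return F  # shape (M, N_BS)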
By adopting the Saleh-Valenzuela channel model for MIMO systems <cit.>, the channel between the SIM and the n-th communication user, i.e., user n, for n=1,2,…,N_C can be characterized as
𝐡_n=√(M/Q_n)∑_q=1^Q_n g_q^(n)α^H( θ_q^(n),φ_q^(n)),
where Q_n is the number of resolvable channel paths, while g_q^(n) represents the channel gain for q=1,2,⋯,Q_n.
Specifically, for the LOS path, the channel gain is distributed as g_q=1^(n)∼𝒞𝒩(0, β), while for NLOS paths, the channel gain is distributed as g_q>1^(n)∼𝒞𝒩(0, 0.01β), for n = 1, …, N_C. Here, β denotes the distance-dependent path loss modeled as β = C_0 d_n^-α, where C_0 is the free space path loss, α is the path loss exponent, and d_n is the distance from the SIM to the user n.
Moreover, θ_q^(n) and φ_q^(n) represent the elevation and azimuth angles of departure (AoD) of the q-th channel path relative to those of the LoS path component.
α( θ, φ) ∈ℂ^M × 1 denotes the channel steering vector, which is a function of elevation and azimuth angles expressed by
α(θ, φ)=[1, ⋯, e^-j 2 π(M_r-1) sin (θ) cos (φ) d_x/λ] ⊗[1, ⋯, e^-j 2 π(M_c-1) sin (θ) sin (φ) d_y/λ],
where ⊗ stands for the Kronecker product, and d_x and d_y represent the horizontal and perpendicular spacings between adjacent meta-atoms, respectively <cit.>.
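In code, the steering vector reduces to the Kronecker product of two phase ramps; a minimal sketch with the arguments defined above:

import numpy as np

def steering_vector(theta, phi, Mr, Mc, dx, dy, lam):
    """alpha(theta, phi) as the Kronecker product above; shape (Mr*Mc,)."""
    ax = np.exp(-2j * np.pi * np.arange(Mr) * np.sin(theta) * np.cos(phi) * dx / lam)
    ay = np.exp(-2j * np.pi * np.arange(Mc) * np.sin(theta) * np.sin(phi) * dy / lam)
    return np.kron(ax, ay)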
The transmit signal at the BS before passing through the SIM is denoted as x∈ℂ^N_BS, where E{x}=0 and E{xx^H}=I_N_BS. The first N_C elements of x, i.e., x_1, x_2, ⋯, x_N_C, represent the information symbols for the N_C communication users, while the remaining N_S elements, i.e., x_N_C+1, ⋯, x_N_BS, correspond to the sensing waveforms for the N_S sensing targets.
Hence, the received signals by different users, denoted as y∈ℂ^N_C× 1, can be expressed as
y=𝐇𝐅_SIMx+n,
where 𝐇=[𝐡_1, 𝐡_2, ⋯, 𝐡_N_C]^T ∈ℂ^N_C× M denotes the total channel matrix, and n∈ℂ^N_C× 1 is the additive white Gaussian noise (AWGN) vector with n∼𝒞𝒩(0, σ^2 𝐈_N_C). Here, σ^2 is the noise power at the receivers, and 𝐈_N_C is the N_C-by-N_C identity matrix.
For user n, both the signals from the other users and sensing targets are regarded as interference. Therefore, the signal-to-interference-plus-noise ratio (SINR) of user n can be expressed as
γ_n=|[𝐇 𝐅_SIM]_n, n|^2/∑_i=1, i ≠ n^N_BS|[𝐇 𝐅_SIM]_n, i|^2+σ^2,
where [𝐇 𝐅_SIM]_n, i denotes the element in the n-th row and i-th column of 𝐇 𝐅_SIM. Then, according to the Shannon–Hartley theorem, the sum rate of the N_C users can be written as
R_sum = ∑_n=1^N_Clog_2 (1 + γ_n).
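Both quantities follow directly from the effective matrix 𝐇𝐅_SIM; a minimal numpy sketch:

import numpy as np

def sinr_and_sum_rate(H, F_sim, sigma2):
    G = H @ F_sim                                  # (N_C, N_BS) effective gains
    signal = np.abs(np.diag(G)) ** 2               # |[HF]_{n,n}|^2
    interference = np.sum(np.abs(G) ** 2, axis=1) - signal
    gamma = signal / (interference + sigma2)
    return gamma, np.sum(np.log2(1.0 + gamma))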
In terms of sensing, communication signals are regarded as supplementary to enhance the sensing signals. Let {ψ_1, ψ_2, ⋯, ψ_N_D} and {ϕ_1, ϕ_2, ⋯, ϕ_N_D} denote the sampling points evenly distributed in the elevation and azimuth angle domains, respectively. Then the beam pattern gain 𝐏_S∈ℝ^N_D× N_D of the SIM in the direction {ψ_j, ϕ_k} for j, k = 1,2, ⋯, N_D can be expressed as
[𝐏_S]_j, k=α^H(ψ_j, ϕ_k) 𝐅_SIM𝐅_SIM^H α(ψ_j, ϕ_k),
where α(ψ_j, ϕ_k) is the channel steering vector obtained by (<ref>). Then the normalized beam pattern 𝐏̅_S is calculated as
𝐏̅_S = 𝐏_S/‖𝐏_S‖_1.
Given the desired beam pattern 𝐏_D∈ℝ^N_D× N_D, we define the mean square error (MSE) between 𝐏_D and 𝐏̅_S as the beam-matching error <cit.>, calculated as
J_MSE = (1/N_D^2)‖𝐏̅_S - 𝐏_D‖_2^2.
Here, ‖·‖_1 and ‖·‖_2 denote the ℓ_1 and ℓ_2 norms, respectively.
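Both sensing metrics admit a direct vectorized evaluation; in the sketch below, the grid of steering vectors α(ψ_j, ϕ_k) is assumed precomputed:

import numpy as np

def normalized_beam_pattern(F_sim, alpha_grid):
    """alpha_grid: (N_D, N_D, M) steering vectors alpha(psi_j, phi_k)."""
    R = F_sim @ F_sim.conj().T
    P = np.einsum('jkm,mn,jkn->jk', alpha_grid.conj(), R, alpha_grid).real
    return P / np.sum(np.abs(P))                   # l1-normalization

def beam_matching_error(P_bar, P_des):
    return np.mean((P_bar - P_des) ** 2)           # (1/N_D^2) * ||P_bar - P_D||_2^2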
§ SIM-BASED TRANSMIT BEAMFORMING DESIGN
§.§ Problem Fomulation
In order to facilitate downlink communication and sensing with a desirable performance trade-off, an MOP is formulated to maximize the sum rate of the users and concurrently minimize the beam-matching error under uniform transmit power allocation. This can be realized by properly configuring the phase shifts imposed by each meta-atom in the SIM.
Denoting ϑ={θ^1, θ^2, ⋯, θ^L} as the set of optimization variables with θ^l =[θ_1^l, θ_2^l, ⋯, θ_M^l]^T, the MOP is shown as
P1:
max_ϑ R_sum
min_ϑ J_MSE
s.t.
θ_m^l ∈[0, 2π),
∀l = 1,2,⋯,L,
∀m = 1,2,⋯,M.
Following <cit.>, we apply the weighted-sum method to problem (P1), transforming the MOP into a single objective problem (SOP) written as
P2:
min_ϑ J_MSE - R_sum
s.t.
θ_m^l ∈[0, 2π),
∀l = 1,2,⋯,L,
∀m = 1,2,⋯,M.
The problem (P2) is inherently non-convex due to the form of the objective function in (<ref>). Furthermore, the strong coupling among the multiple phase shift matrices across the L layers of the SIM makes the problem even more intractable.
§.§ D^3 Algorithm
To address the aforementioned problem, we propose an efficient Dual-Normalized Differential Gradient Descent (D^3) algorithm to achieve a quasi-optimal solution. More specifically, our approach addresses an MOP that includes both communication and sensing tasks. It includes gradient differences and additional normalization steps that adjust the gradient components to similar scales across the different tasks. This dual normalization ensures that the optimization process achieves a balanced trade-off between communication and sensing tasks, and avoids over-promoting any single objective.
First, the phase shifts θ_m^l ∈ [0, 2π) for l = 1, 2, ⋯, L and m = 1, 2, ⋯, M are initialized as, e.g., a uniform random distribution.
Next, the partial derivatives of J_MSE in (<ref>) with respect to θ_m^l are derived as
∂ J_MSE/∂θ_m^l = 1/N_D^2∑_j=1^N_D∑_k=1^N_D 2([𝐏̅_S]_j, k-[𝐏_D]_j, k) · [𝐄̅]_j, k,
where 𝐄̅ represents the partial derivatives of the normalized beam pattern 𝐏̅_S with respect to the phase shifts θ_m^l, expressed as
[𝐄̅]_j, k = (‖𝐏_S‖_1 [𝐄]_j, k - [𝐏_S]_j, k‖𝐄‖_1)/‖𝐏_S‖_1^2.
Furthermore, 𝐄 denotes the partial derivatives of 𝐏_S with respect to the phase shifts θ_m^l, which can be calculated as
[𝐄]_j, k = 2Im{ e^j θ_m^lα(ψ_j, ϕ_k)^H 𝐕_:, m^l
𝐔_m,:^l 𝐅_SIM^H α(ψ_j, ϕ_k) }.
Im{·} denotes the imaginary part, and 𝐕_:, m^l and 𝐔_m,:^l denote the m-th column of 𝐕^l and the m-th row of 𝐔^l, respectively, which are defined by
𝐔^l =𝐖^l Φ^l-1𝐖^l-1⋯Φ^2 𝐖^2 Φ^1 𝐖^1, if l ≠ 1,
𝐖^1, if l=1,
𝐕^l =Φ^L 𝐖^L Φ^L-1𝐖^L-1⋯Φ^l+1𝐖^l+1, if l ≠ L,
𝐈_M, if l=L.
Besides, the partial derivatives of R_sum in (<ref>) with respect to θ_m^l are derived as
∂ R_sum/∂θ_m^l = 2 log_2 e ∑_p=1^N_Cδ_p(η_p, p-γ_p ∑_q=1, q ≠ p^N_Cη_p, q),
where δ_p and η_p, q in (<ref>) are given by <cit.>
δ_p = 1/∑_q=1^N_C|[𝐇𝐅_SIM]_p, q|^2 + σ^2,
η_p, q = Im{[𝐇 𝐕^l]_p, m[𝐔^l]_m, q[𝐇 𝐅_SIM]_p, q^* e^j θ_m^l}.
Inspired by the gradient updating strategy in <cit.>, the partial derivatives obtained from (<ref>) and (<ref>) are then normalized in an element-wise manner. Afterwards, the corresponding differential gradient 𝐆∈ℝ^M × L can be expressed as
[𝐆]_m, l=w_1∂ J_MSE/∂θ_m^l/√((∂ J_MSE/∂θ_m^l)^2+ϵ)-w_2∂ R_sum/∂θ_m^l/√((∂ R_sum/∂θ_m^l)^2+ϵ),
where w_1 and w_2 denote the weights of the sensing and communication metrics, respectively. ϵ is a small smoothing constant, typically set to 10^-8 <cit.>.
Additionally, in order to mitigate gradient explosion and vanishing issues during optimization <cit.>, a global normalization is applied to the differential gradient. Thus, the dual-normalized differential gradient 𝐆̅ can be expressed as
[𝐆̅]_m, l = (π/max (𝐆)) · [𝐆]_m, l,
where m = 1, 2, ⋯, M, l = 1, 2, ⋯, L, and max(·) denotes the operation of extracting the largest element.
Ultimately, the phase shifts θ_m^l ∈ϑ across each meta-atom in the SIM can be updated via 𝐆̅ at each iteration, expressed as
θ_m^l ←θ_m^l-μ·[𝐆̅]_m, l,
where μ is the step size. Specifically, we adopt an exponentially decreasing learning rate schedule as the iteration proceeds, which is updated by <cit.>
μ←μβ,
where β is a hyperparameter determining the decay rate, satisfying 0 < β < 1.
By iteratively applying (<ref>)-(<ref>), the objective function (<ref>) converges when its decrease becomes smaller than a preset threshold or when the maximum number of iterations is reached.
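A compact numpy sketch of the resulting loop is given below for illustration. It replaces the analytic gradients (<ref>) and (<ref>) by finite differences for brevity, and assumes a user-supplied objective(theta) returning the pair (J_MSE, R_sum) for a phase tensor theta of shape (L, M):

import numpy as np

def d3_optimize(objective, theta, w1=1.0, w2=1.0, mu=1.0, beta=0.5,
                iters=60, eps=1e-8, h=1e-4, tol=1e-6):
    """Dual-normalized differential gradient descent on theta in [0, 2*pi)."""
    prev = None
    for _ in range(iters):
        J0, R0 = objective(theta)
        gJ, gR = np.zeros_like(theta), np.zeros_like(theta)
        for idx in np.ndindex(theta.shape):          # finite-difference gradients
            t = theta.copy(); t[idx] += h
            J1, R1 = objective(t)
            gJ[idx], gR[idx] = (J1 - J0) / h, (R1 - R0) / h
        # first (element-wise) normalization and gradient difference
        G = w1 * gJ / np.sqrt(gJ**2 + eps) - w2 * gR / np.sqrt(gR**2 + eps)
        # second (global) normalization, update, and learning-rate decay
        theta = (theta - mu * np.pi * G / np.max(np.abs(G))) % (2 * np.pi)
        mu *= beta
        if prev is not None and abs((J0 - R0) - prev) < tol:
            break
        prev = J0 - R0
    return theta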
§ NUMERICAL RESULTS
This section validates the effectiveness of the transmit beamforming design with the proposed D^3 algorithm. The simulation parameters are summarized in Table <ref>.
The desired beam pattern for sensing is assumed as
[𝐏_D]_j,k =
1 if (j,k) ∈{(9, 27), (27, 9)},
0 otherwise,
where j, k = 1,2, ⋯, N_D correspond to specific elevation and azimuth angles. Specifically, indices 9 and 27 correspond to the angle ranges [-45^∘, -40^∘] and [45^∘, 50^∘], respectively.
To prevent premature convergence to a local optimum due to inappropriate initialization, we generate five sets of initial phase shifts and run the D^3 algorithm in parallel for each set. The algorithm iterates until either the relative change in the objective function falls below a threshold of 10^-6 or the maximum number of 60 iterations is reached. Unless otherwise specified, we set the initial learning rate to μ = 1 and the decay parameter to β = 0.5. After parallel execution, we choose the solution that minimizes J_MSE - R_sum as the quasi-optimal solution for the problems (P1) and (P2) <cit.>.
Figure <ref> shows the beam pattern obtained by the proposed beamforming design using the D^3 algorithm, where M = 100, L = 7, and w_1 = w_2 = 1.
It can be observed that the beam pattern gain reaches its maximum (about 2.5 × 10^-6) in the directions towards the sensing targets within the regions of [-45^∘, -40^∘] in elevation angle space and [45^∘, 50^∘] in azimuth angle space, as well as [45^∘, 50^∘] in elevation angle space and [-45^∘, -40^∘] in azimuth angle space, where the beam-matching error is calculated as J_MSE = 0.0526. Meanwhile, the beam peaks for the four communication users are located around (-60^∘, -45^∘) and (-60^∘, -35^∘). Although the average strength of the communication beams is not comparable to that of the sensing beams, i.e., below the gain of 0.75 × 10^-6, superior data throughput with R_sum = 15 bit/s/Hz is achieved with the proposed D^3 algorithm. Therefore, the proposed beamforming design successfully realizes a desirable balance between sensing and communication functions.
Figure <ref> illustrates the performance comparison of different weighting coefficients w_1 and w_2 with varying numbers of meta-atoms M. The weighting coefficients w_1 and w_2 dictate the trade-off between sensing and communication performance. When w_1=1 and w_2=0, the D^3 algorithm functions as a sensing-only scheme, while w_1=0 and w_2=1 conversely represents a communication-only scheme. For cases where both w_1 and w_2 are non-zero, the system functions as an ISAC scheme, with their ratio determining the priority of sensing or communication in the system. It can be observed that the sensing-only scheme (w_1=1, w_2=0) exhibits poor communication performance, with sum rates that are almost nil. In contrast, the ISAC scheme with w_1=w_2=1 achieves a sum rate approaching that of the communication-only scheme (w_1=0, w_2=1), while still maintaining satisfactory sensing performance. This illustrates that the proposed D^3 method enables the SIM-enabled ISAC system to achieve high-performance communication and sensing capabilities concurrently.
Furthermore, Fig. <ref> shows that the sum rate of users increases with the number of meta-atoms M and the number of layers L. This improvement is due to the additional flexibility in shaping the electromagnetic wavefront provided by the multi-layer massive metasurface array, which can enable more sophisticated beamforming patterns to better accommodate the dual-functional requirements.
Figure <ref> depicts the convergence of the proposed D^3 algorithm. The solid lines represent the beam-matching error (J_MSE, left y-axis), while the dashed lines show the sum rate of the users (R_sum, right y-axis) over 60 iterations. Each color corresponds to a different channel realization. Notably, the D^3 algorithm converges rapidly in all cases, reaching its optimal value after approximately 15 iterations. This demonstrates that the proposed algorithm can effectively adapt to different channel conditions, confirming its efficacy and reliability in practical ISAC systems.
§ CONCLUSION
This paper proposed a novel beamforming design for ISAC applications under the framework of a transmit SIM-based antenna array. To realize a desirable dual-function performance trade-off, a non-convex MOP was established by simultaneously maximizing the communication sum rate and optimally shaping the sensing beam pattern by adjusting the phase shifts of the meta-atoms. To solve this problem, the D^3 algorithm was proposed, which effectively balances the trade-off between communication and sensing objectives through gradient differences and dual normalization. Numerical results confirmed the superiority of our proposed ISAC beamforming design under various channel conditions.
|
http://arxiv.org/abs/2409.02671v1 | 20240904130026 | Compression of high-power laser pulse leads to increase of electron acceleration efficiency | [
"O. E. Vais",
"M. G. Lobok",
"V. Yu. Bychenkov"
] | physics.plasm-ph | [
"physics.plasm-ph"
] |
P. N. Lebedev Physics Institute,
Russian Academy of Science, Leninskii Prospect 53, Moscow 119991,
Russia
Center for Fundamental and Applied Research,
Dukhov Research Institute of Automatics (VNIIA), Moscow 127055, Russia
P. N. Lebedev Physics Institute,
Russian Academy of Science, Leninskii Prospect 53, Moscow 119991,
Russia
Center for Fundamental and Applied Research,
Dukhov Research Institute of Automatics (VNIIA), Moscow 127055, Russia
P. N. Lebedev Physics Institute,
Russian Academy of Science, Leninskii Prospect 53, Moscow 119991,
Russia
Center for Fundamental and Applied Research,
Dukhov Research Institute of Automatics (VNIIA), Moscow 127055, Russia
§ ABSTRACT
Propagation of an ultrarelativistically intense laser pulse in the self-trapping mode in a near-critical density plasma makes it possible to produce electron bunches of extreme parameters appropriate for various state-of-the-art applications. Based on 3D PIC simulations, it has been demonstrated how the best efficiency of electron acceleration in terms of the total charge of high-energy electrons and the laser-to-electrons conversion rate can be achieved. For a given laser pulse energy, the universal way is a proper matching of the laser hot spot size and the electron plasma density to the laser pulse duration. The recommendation for achieving the highest yield of high-energy electrons is to compress the laser pulse as much as possible. As an example, compression of a few-tens-fs pulse to a ∼ 10 fs pulse leads to generation of a high-energy electron bunch with the highest total charge, exhibiting a conversion efficiency exceeding 50% for Joule-level laser pulse energies.
Compression of high-power laser pulse leads to increase of electron acceleration efficiency
V. Yu. Bychenkov
September 4, 2024
===========================================================================================
§ INTRODUCTION
Laser-plasma accelerators are of high interest due to their extremely large accelerating field gradients, which allow obtaining the high-energy electron beams demanded by applications on a scale of hundreds of microns. Depending on the laser-plasma parameters, there are a number of underlying mechanisms, which lead to different space-energy distributions of the accelerated particles. Some of them, such as the laser wakefield accelerator (LWFA), the plasma beat wave accelerator (PBWA), the self-modulated LWFA and so on, rely on electron acceleration by the longitudinal field of plasma waves excited by laser pulses in a low-density plasma <cit.>. In the highly nonlinear regime, a solitary laser-plasma structure, the "bubble" <cit.>, is formed, allowing, in particular, the use of a higher plasma density. In a rarefied plasma, the spectra of accelerated electrons have a monoenergetic feature with the peak at the highest energies, up to several GeV <cit.>. The total charge of such particles is typically at the pC level. Another mechanism, known as direct laser acceleration (DLA), is associated with the acceleration of electrons that are in betatron resonance with the laser frequency <cit.>. It appears when a picosecond high-energy laser pulse propagates in a denser plasma by forming a plasma channel. In this case, the high-energy part of the electron spectrum has an exponential form but demonstrates a high total charge at the level of a few microcoulombs <cit.>. Although the spectra of electrons accelerated in the "bubble" and DLA regimes are different, both of them are very effective, showing that the conversion rate of the laser energy to high-energy electrons can be as high as ≈ 20% <cit.>.
For ultrarelativistic laser intensities, recent studies identified and substantiated a stable regime of laser pulse propagation and electron acceleration in near-critical density plasma <cit.>. The balance of diffraction divergence and relativistic nonlinearity (relativistic mass increase and cavitation) in a plasma provides the soliton laser-plasma structure <cit.> – the "laser bullet", which keeps its shape with only weak swelling for many Rayleigh lengths. This regime is physically similar to the self-trapping of weak electromagnetic waves, which is described by the Schrödinger equation with a cubic nonlinearity <cit.>; this is why this regime was named the relativistic self-trapping (RST) regime <cit.>. The laser pulse forms a plasma cavity, which is fully filled by the trapped light. The RST regime in the form of a "laser bullet" complements the previously observed "bubble" regime of RST <cit.>, which occurs when the pulse length L is shorter than its transverse size D. Thus, the "bubble" and "laser bullet" regimes clearly appear under two different characteristic conditions, when the pulse duration is shorter than the cavity collapse time (D/c), i.e. D≫ L, or comparable to the latter, L≃ D, respectively.
Accelerated particles are affected by the laser pulse field as well as by the plasma cavity field in the RST "laser bullet" regime. It has been shown <cit.> that the laser pulse affects the injection of electrons and the angular particle distribution, while the electrostatic cavity field mainly contributes to the electron energy gain. The RST regime results in a substantial proportion of high-energy particles, which for near-critical density plasma yields a record total charge of the generated electron bunch as compared to other acceleration mechanisms driven by lasers of the same energy. The advantage of such a high electron charge has already been used to predict highly efficient bright synchrotron radiation <cit.>. Here, a further development of the methods for controlling the RST mode is presented, with the aim of optimizing it with respect to the pulse duration.
Previously, we have studied the RST regime for high-power laser pulses of 30 fs duration, in excess of 100 TW, and demonstrated a record charge of the generated sub-GeV electron bunches at the multi-nC level <cit.>. At the same time, the question of to what extent laser pulse shortening may improve electron characteristics is still open. Moreover, this question has become very relevant in light of recent achievements in shortening high-power laser pulses. State-of-the-art technology makes it possible to shorten multi-Joule femtosecond laser pulses almost without loss of energy with the so-called CafCA (compression after compressor approach) <cit.>. We have already discussed how pulse shortening affects the RST regime for moderate (multi-TW) laser pulse power <cit.>. At the same time, this question requires broader consideration over a wider range of laser energies, in particular, to cover the laser-plasma parameters most interesting for applications. Here we consider the ultrarelativistic case, a_0≫ 1, and compare electron acceleration with the basic pulse (40 fs) and that after its 4-fold shortening, up to PW-level laser pulse power. For the long (40 fs) laser pulse, we consider different regimes of its propagation to analyze the features of each of them in detail and to make sure there is no way to gain an advantage over the shortest pulse. Our aim is to quantify by using 3D PIC simulations how the total charge of high-energy electrons, the conversion efficiency and the characteristic particle energy change (1) with pulse duration at a given laser energy and (2) with laser pulse energy at different pulse durations.
This paper is organized as follows. Section <ref> summarizes rough estimates for the characteristics of the laser-plasma structure and the accelerated electron bunch. In Sections <ref> and <ref>, we consider the simulations of the RST regime of laser propagation and of laser self-modulation, respectively. Then we discuss the energy spectra of the accelerated electrons and the corresponding characteristics of the electron bunch, which were obtained in the previous sections. After that, in Section <ref> we analyze the results for other laser energies to generalize our findings to a wider range of laser parameters.
§ ROUGH DIRECTIONS
It has been shown before <cit.> that for relativistic laser intensities the RST regime is able to provide stable laser pulse propagation in a near-critical density plasma over many Rayleigh lengths in the form of a plasma cavity filled by the laser field (see Fig. <ref> for an illustration). As a result, a maximum total charge of the high-energy electron bunch and, correspondingly, a maximum conversion efficiency are achieved. This occurs under the matching condition for the cavity diameter D, the electron plasma density n_e, and the laser field amplitude,
D ≈λ_p ≃ 2.6 c/ω_l√(a_0n_c/n_e) , a_0≫ 1 ,
where ω_l and ω_p are the laser light and electron plasma frequencies, respectively, λ_p=(2π c/ω_p)√(γ) is the plasma wavelength, γ=√(1+a_0^2/2)≃ a_0/√(2) is the electron relativistic factor, a_0=e E_L/m_eω_l c is the standard dimensionless amplitude of the laser field E_L, e and m_e are the electron charge and mass, and n_c is the critical electron plasma density. The cavity diameter is of the order of the laser focal spot and slowly evolves during propagation. To ensure a stable pulse evolution toward the RST regime, the laser focal spot diameter D_L should be somewhat less than the steady-state one given by Eq. (<ref>) <cit.>.
The condition Eq. (<ref>) has been deduced from the qualitative geometric-optical treatment of RST <cit.> and nonlinear Schrödinger equation approach <cit.>.
As demonstrated in Ref. <cit.>, Eq. (<ref>) can be derived from an ad hoc replacement of the electron mass by the relativistic electron mass in the weak-field analogue of the matching condition for laser beam self-trapping in a nonlinear medium with cubic nonlinearity, corresponding to a_0≪1 <cit.>. The requirement for the matched transversal cavity size, Eq. (<ref>), also follows from the balance of the radial ponderomotive force of the laser pulse and the Coulomb force of the ion channel <cit.>. On the other hand, numerical simulations confirm this matching with only some change in the numeric factor in Eq. (<ref>) due to different initial conditions for the initiation of RST, i.e. D ≃ 2.24c√(a_0n_c/n_e)/ω_l <cit.>, D ≃ 4c√(a_0n_c/n_e)/ω_l <cit.>, 2c√(a_0n_c/n_e)/ω_l <cit.>, etc. For a short laser pulse, with length considerably shorter than the focal spot size, the RST cavity is an electron-empty sphere with a laser "snow plow" ahead. This RST regime was first observed in 3D PIC simulations <cit.> and later widely referenced as the "bubble" regime. The study <cit.> showed that the number of electrons accelerated in this mode is proportional to a_0, which is also the case in the discussed "laser bullet" regime (see Eq. (<ref>)).
In the context of stable propagation of a laser pulse, it is important to take into account that when its longitudinal or transverse size exceeds the relativistic plasma wavelength, the pulse is subject to such instabilities as self-modulation or filamentation, respectively <cit.>. The condition Eq. (<ref>) prevents filamentation of the pulse; to provide longitudinal stability, the pulse length L=cτ should be limited as follows
L ≲λ_p, or √(n_e/n_c)≲ 5 √(a_0)/ω_l τ ,
where τ is the laser pulse duration and λ_p is the relativistic plasma wavelength introduced above. At the same time, the denser the plasma, the greater the total charge of accelerated electrons. Because of that, we take in Eq. (<ref>) the upper limit for the electron density, replace the inequality by an equality, and use
n_e/n_c ≈ 25 a_0/( ω_l τ)^2 .
This choice of density makes the laser pulse length approximately equal to the laser-plasma cavity diameter, i.e. Eq. (<ref>) reads
cτ≈ D .
Below we consider the order-of-magnitude scalings of the acceleration characteristics that rough parametric estimates make possible.
While a reduction of the laser beam waist size can be achieved with larger apertures, the laser pulse duration can be diminished by spectrum broadening in nonlinear crystals <cit.>. Thus, for laser pulses with the same energy W_L, it is worth considering different pulse durations and the case of a near-spherical cavity, with the aim of reaching the most efficient electron acceleration under the condition
W_L ∼ a_0^2n_c m_ec^5τ^3=const .
Then, for this optimum case of near-spherical laser pulses, Eqs. (<ref>)–(<ref>) result in the following scaling
n_e ∝√(W_L/τ^7) ,
which shows a rather sharp increase of the density as the pulse duration decreases.
The spectrum, the electron bunch total charge and the average and maximum particle energies are usually considered as indicative characteristics, which demonstrate the efficiency and application potential of the considered laser-plasma accelerator. It is natural to assume that, similar to Refs. <cit.>, the total charge of the accelerated particles Q_0 in the RST regime is proportional to the number of electrons self-injected into the cavity, which, in turn, is expected to be proportional to the cavity Coulomb charge, i.e.
Q_0 ∼ e n_e D^3 ∝ a_0D ∝√(W_L/τ) .
Similar scaling of the number of accelerated electrons derived for a_0 ≫ 1 has also been presented in Ref. <cit.> for the quasimonoenergetic electrons in the bubble.
The characteristic electron energy gain ε_e is given by eEl_acc, where E ∼ (m_ec^2/e)× (ω_p/c)^2 D ∝ n_e D is the maximum plasma cavity Coulomb field and l_acc is the acceleration length <cit.>. In the RST regime, when the laser propagation distance far exceeds the Rayleigh length, the value l_acc is determined by the lesser of the dephasing length l_dph and the pump depletion length l_dpl. The dephasing length corresponds to the distance which trapped electrons travel until they enter the decelerating cavity field phase, and the pump depletion length is the distance over which a laser pulse loses its energy as a result of electron raking by light pressure and producing the cavity electrostatic field. In the case of modulation-stable laser pulse propagation, when the condition Eq. (<ref>) is satisfied, one gets l_dpl∼ l_dph, i.e. the acceleration distance is l_acc∼ l_dpl, where <cit.>:
l_dpl∝ a_0 cτ(n_c/n_e) .
Correspondingly, the electron characteristic energy scaling can be presented in the form:
ε_e ∝ a_0 D τ∝ (W_L τ)^1/2 .
These scalings can also be applied to the "bubble" regime of a wide laser pulse, cτ≪ D, where only a small front part of the cavity is filled with laser light during the entire propagation distance (cf. formula (1) from Ref. <cit.>). Note that for the optimum near-spherical laser bullet the last two scalings transform to
l_dpl∝ D^3
and
ε_e ∝ (W_L D)^1/2 ,
correspondingly.
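As a back-of-the-envelope illustration of these relations, the short script below evaluates them for the 10 fs case considered in the next section; all numbers are order-of-magnitude estimates only:

import numpy as np

c, m_e, q_e, eps0 = 3e8, 9.11e-31, 1.6e-19, 8.85e-12

lam = 1e-6                                   # laser wavelength, m
omega = 2 * np.pi * c / lam
n_c = eps0 * m_e * omega**2 / q_e**2         # critical density, m^-3

a0, ne_over_nc, tau = 42.0, 0.15, 10e-15     # parameters of the 10 fs run

D = 2.6 * (c / omega) * np.sqrt(a0 / ne_over_nc)        # matched cavity diameter
gamma = a0 / np.sqrt(2)
lam_p = lam * np.sqrt(gamma / ne_over_nc)               # relativistic plasma wavelength
print(f"n_e ~ {ne_over_nc * n_c:.1e} m^-3, D ~ {D * 1e6:.1f} um")
print(f"c*tau = {c * tau * 1e6:.1f} um << lambda_p ~ {lam_p * 1e6:.1f} um")
print("Q0(10 fs)/Q0(40 fs) ~", np.sqrt(40 / 10))        # from Q0 ∝ (W_L/τ)^(1/2)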
Applying these qualitative estimates and scalings to a laser pulse with given energy, one can conclude from (<ref>) that a shorter pulse duration is preferred to reach a higher electron bunch charge in the case of a near-spherical laser bullet. As concerns the conversion efficiency of the laser energy to electrons, η, its estimate can be insufficient because of the very simplified guess η∝ Q_0×ε_e ∝ W_L, which does not show a τ-dependence, and also because it ignores a possible impact of the pulse duration on the electron spectrum shape. For this reason, we performed 3D PIC simulations, the results of which are presented below, to quantify the introduced electron characteristics and find to what extent the above rough directions hold.
§ SIMULATIONS OF THE RST "LASER BULLET" REGIME
By using a high-performance electromagnetic 3D PIC code <cit.>, we have studied the laser pulse propagation in the RST regime and the corresponding electron acceleration for different pulse durations at the given laser energy W_L ≃ 2.2 J. The linearly polarized laser pulse with the wavelength λ = 1 μm propagated in the x-direction. It was focused on the front side of a fully ionized homogeneous dense gas plasma target consisting of He ions and electrons. The plasma model is justified by the relativistically intense laser pulses considered, which easily ionize the target either by a natural prepulse or by the pulse itself at its very leading edge. To analyze the impact of the pulse duration on the electron acceleration, the Gaussian laser pulse FWHM duration τ was varied in the range of 10 – 40 fs. Note that PW laser pulses of ∼ 10 fs duration are now available with CafCA shortening of standard femtosecond multi-Joule lasers <cit.>. The simulations with a moving window used the spatial grid steps 0.02λ× 0.1λ× 0.1λ. The simulation window size was chosen to be adequate to the laser pulse size within a plasma, X× Y × Z = 58λ× 60λ× 60λ.
First, we considered propagation of the 10 fs laser pulse with a_0 ≈ 42 in a near-critical density plasma, n_e = 0.15 n_c. The laser-plasma dynamics is illustrated in Fig. <ref> by the evolution of the electron density, the E_z-component of the laser pulse and the longitudinal electric field E_x. The FWHM focal spot D_L was 2.8 μm, i.e. the laser pulse had a spherical form satisfying Eq. (<ref>). At the same time, for a_0 ≫ 1 it is approximately half the cavity diameter D <cit.>. The simulation shows that the cavity diameter quickly sets in and then changes only in a quasi-stationary manner, slowly increasing during pulse propagation from D≃ 11.5 λ (Fig. <ref>, middle frames) to D≃ 13 λ (Fig. <ref>, end frames).
The simulation demonstrates that the conditions of Eqs. (<ref>), (<ref>), (<ref>) are reasonably suited to the capture of the laser pulse in a single plasma cavity fully filled by light (the "laser bullet").
Such a strongly nonlinear laser-plasma structure propagates through the target over approximately 80 μm without plasma wave formation. During propagation, the laser pulse continuously depletes due to strong etching at the leading edge, and because of that the "laser bullet" regime finally transforms into the "bubble" regime <cit.>. This is clearly seen at the top of Fig. <ref>. Electrons are injected into the cavity at its rear side throughout the entire time. Initially, they are accelerated by the laser field and the longitudinal cavity field, but as the laser pulse shortens, its role in acceleration disappears. The dynamics of the electron bunch (in black) acceleration and the E_x-field evolution are illustrated at the bottom of Fig. <ref>. The bunch of accelerated electrons is modulated in the xz-plane. The explanation could be related to carrier-envelope phase (CEP) effects, which can manifest themselves in the form of electron bunch modulation <cit.>.
For the 40 fs laser pulse of the same energy and a_0 ≈ 10, we chose the maximum plasma density, n_e = 0.005n_c, for which the laser pulse self-modulation is still absent, and increased the laser focal spot size to 5.5 μm. However, this increase was not enough for the considered electron concentration. While propagating through the plasma, the laser beam expanded further, to 8 μm with a_0 ≈ 7, until the diffraction divergence was balanced by relativistic self-focusing, i.e. the laser-plasma structure was attracted by the RST regime.
In the steady state, the plasma cavity diameter was about 24 μm (Fig. <ref>, middle frames), which also slowly increased to 35 μm during the pulse propagation (Fig. <ref> right).
Although the laser pulse length is greater than its steady-state diameter, this was not enough to seed laser modulation. This is in accordance with the prediction of <cit.> that, for the self-modulation effect to develop, a laser pulse should have a duration of several plasma-wave periods (namely, L/λ_p > 3).
In the optimum (near-spherical) RST regime, for a given laser pulse energy, the depletion length substantially depends on the pulse duration and significantly increases with the latter. From the estimate Eq. (<ref>), the depletion length ratio for the long (2) and short (1) pulses is l_dpl^(2)/l_dpl^(1)≈ (D^(2)/D^(1))^3≈ 23.3. Thus, the depletion length increases from ≈ 80 μm to ≈ 2 mm, which was observed in the PIC simulations.
In the case of the 40 fs pulse duration, a plasma wave is excited behind the light-carrying soliton cavity (see Fig. <ref>), in contrast to the single soliton cavity for the 10 fs pulse. Nevertheless, the plasma wave acceleration of electrons is negligible compared to the electron acceleration in the first light-carrying cavity.
We attribute the plasma wave excitation for the 40 fs pulse to the insufficiently high a_0, which should significantly exceed 1 (a_0≳ 10) for the ideal "laser bullet" regime. In accordance with Eq. (<ref>), the requirement for a high laser field amplitude limits the pulse duration as follows
(cτ)^3 ≪ W_L/(n_c m_e c^2),
i.e., the higher the pulse energy, the longer the pulse duration acceptable for the "laser bullet" RST regime. Nevertheless, for the considered laser energy, proper laser-plasma matching makes it possible to avoid the self-modulation instability for the 40 fs pulse and to provide self-injection of electrons <cit.>, which could not be realized for laser pulses with lower energies <cit.>. Note also that in this regime the injected electrons interact with the laser field, which leads to particle oscillations in the polarization plane with a period approximately equal to the laser wavelength <cit.>.
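As a rough numerical illustration of this inequality (a hedged estimate, assuming a Ti:sapphire-like wavelength λ ≈ 0.8 μm, and hence n_c ≈ 1.7×10^21 cm^-3, neither of which is specified above), the 2.2 J, 40 fs pulse gives

```latex
(c\tau)^3 = \left(3\times10^{10}\,\mathrm{cm/s}\times 40\,\mathrm{fs}\right)^3
\approx 1.7\times10^{-9}\,\mathrm{cm}^3,
\qquad
\frac{W_L}{n_c m_e c^2} \approx \frac{2.2\,\mathrm{J}}{1.4\times10^{8}\,\mathrm{J/cm}^3}
\approx 1.6\times10^{-8}\,\mathrm{cm}^3,
```

so the inequality is satisfied only by about one order of magnitude, consistent with the marginal character of the 40 fs case discussed above.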
For a given laser power, the easiest way to increase a_0 is to decrease D_L, which can be achieved with shorter-focal-length optics. In this case the laser length can be noticeably greater than the beam diameter, cτ > D. However, this is the case of laser self-modulation, and the question naturally arises whether or not the corresponding mismatched regime can lead to sufficiently effective electron acceleration, comparable to that of the "laser bullet" RST regime under the matched condition Eq. (<ref>) considered above. Another interesting question is whether or not such a mismatched regime could evolve into the laser bullet one, since the soliton nature of the latter might behave as an attractor for certain laser-plasma parameters. The answers to these questions are given in the next section.
§ LASER SELF-MODULATION REGIME
Here, we again consider the 40-fs 2.2 J laser pulses, but with a_0 increased by tighter focusing. To analyze the nonlinear evolution of the self-modulation process when the condition Eq. (<ref>) is violated, we chose D_L = 4.2 μm and D_L = 2.8 μm. In both cases the normalized laser-field amplitude exceeds 10 (about 14 and 21, respectively). The densities of the plasma target were taken as 0.02n_c and 0.065n_c, respectively. Figure <ref> shows the dynamics of these laser pulses during their propagation through the plasma. The laser pulse lengths exceeded the beam diameters by factors of about 3 and 4, which leads to self-modulation of the laser pulse. This is clearly seen in the middle frames of Fig. <ref>. The observed self-modulation is related to a transverse redistribution of the pulse energy. Such transverse laser energy loss is in accordance with the prediction of the linear theory by Andreev et al. <cit.>. The greater cτ/D_L, the greater the transverse energy transport. We therefore expect a decrease in the efficiency of electron acceleration compared to the cases discussed in Sec. <ref>.
After the transverse energy release has occurred, near-spherical cavities filled by light (laser bullets) are formed (Fig. <ref>).
The laser pulses completely deplete at the distances l_dpl^(1)≃ 750 μm and l_dpl^(2)≃ 340 μm in the plasma target with the densities of 0.02n_c and 0.065n_c, respectively. It agrees with the scaling Eq. (<ref>) for non-spherical laser pulses, which gives for the depletion length ratio l_dpl^(1)/l_dpl^(2)≃ (D^(1)/D^(2))^2 ≃ 2.37, matching the PIC result well: l_dpl^(1)/l_dpl^(2)≃ 750/340 ≈ 2.2.
Simulations show that the main electron acceleration occurs after the laser bullet regime is established.
We have discussed two regimes of laser propagation through the plasma target. Both of them were followed by the acceleration of a self-injected electron bunch. To analyze the efficiency of this process, we consider below the electron spectra and their characteristics. Moreover, this analysis can answer the question of whether shortening the laser pulse leads to a growth of the acceleration efficiency.
§ ELECTRON BUNCH CHARACTERISTICS
In all considered cases (Secs. <ref> and <ref>), the electron energy distributions demonstrate a plateau on a logarithmic scale, with some quasi-monoenergeticity on a linear scale in the high-energy part of the spectrum near the cutoff, as displayed in Fig. <ref>. The characteristic energy of the accelerated electrons in the optimum near-spherical laser bullet, Eq. (<ref>), depends on the steady-state diameter, see Eq. (<ref>). So for near-spherical laser pulses of the same energy, a decrease of the cavity diameter results in a decrease of the average electron energy. This also happens for an increased pulse length subject to self-modulation. From Tab. <ref> it is clearly seen that for a 40-fs laser pulse a decrease of D leads to a drop of the average electron energy from 250 MeV (0.005n_c) to 95 MeV (0.065n_c). Here the average electron energy ε_> 30 MeV, electron bunch charge Q_> 30 MeV and conversion rate η_> 30 MeV are calculated for the particles with energies exceeding 30 MeV.
Unlike the average energy gain of accelerated electrons, the total charge of the electron bunches decreases with D, although the conversion rate η_> 30 MeV slowly increases. In general, the bunch charge is higher for a shorter pulse (cf. the top and bottom rows in Tab. <ref>). However, for the 40 fs pulse, a plasma density increase leads to a growth of the total electron bunch charge from 3 nC for 0.005n_c to 6.7 nC for 0.065n_c, which is not captured by the estimate Eq. (<ref>). This is because Eq. (<ref>) is derived for the RST condition Eq. (<ref>), and its accuracy is not sufficient to describe the total charge increase for the mismatched conditions promoting the self-modulation instability. It is worth noting that a further increase of the plasma density (>0.065n_c) in our simulations results in a decrease of the total charge with n_e. We attribute the latter to more significant laser pulse energy losses due to laser self-modulation and confirm the existence of an optimum (over the accelerated charge) regime mismatched with RST (cf. <cit.>). As concerns the comparison of the results for the spherical "laser bullets" with different pulse durations (the top and bottom rows in Tab. <ref>), the higher total charge of the electron bunch is achieved for smaller spatio-temporal sizes of the laser pulse, in agreement with Eq. (<ref>), Q_0 ∝τ^-1/2. However, the charge ratio Q_10 fs/Q_40 fs for the RST regime is somewhat higher than predicted by the rough estimate (3 instead of 2). Although such 33%-50% accuracy is quite reasonable for a simple estimate, one can assume that the difference between the numerical and theoretical results can be overcome with a more accurate model accounting for the effect of the electrons already trapped in the cavity on the entire injection dynamics, the laser depletion length and the cavity gamma-factor <cit.>.
Conversion of the laser energy into the energy of accelerated electrons is roughly proportional to the total number of accelerated particles and their average energy. The shortest and most tightly focused laser pulse under RST conditions provides the highest conversion rate, as clearly demonstrated by our simulations and estimates. For the 10 fs pulse, a large value of a_0 makes possible a strongly nonlinear regime of self-focusing in the form of a laser bullet, which gives an unprecedented conversion rate of 53%. For longer pulses, which cannot ensure the absence of self-modulation at the initial stage of pulse propagation, the conversion rate is lower even though the RST mode is eventually established. For the relevant examples with a 40 fs laser pulse and plasmas of different densities, higher lateral energy transfer losses are realized for smaller cavities, which form in denser plasma targets. In a plasma with density n_e=0.065n_c the conversion rate is 26%, while for n_e = 0.02n_c it is 35% (cf. the second and third rows in Tab. <ref>). On the other hand, for a larger cavity and lower plasma density, when the laser pulse does not undergo self-modulation, the conversion rate stops increasing since the laser field reaches its marginal magnitude for RST (a_0 ∼ 10).
Thus, for a given laser energy, pulse compression makes it possible to maximize both the total charge of the accelerated electron bunch and the conversion rate. We also found that a pulse that is initially modulationally unstable in a rather dense plasma may evolve into the RST steady-state laser bullet with a high enough total bunch charge and conversion efficiency. As a final study, in the next section we relax the condition of equal energy but consider the same laser focal spot to learn more about RST and the electron energy gain.
§ ELECTRON ACCELERATION BY PULSES OF DIFFERENT ENERGIES
A series of simulations was performed for laser pulses with different energies and the same focal spot size (2.8 μm). The target densities were chosen to provide plasma cavities with approximately the same diameters. Figure <ref> demonstrates the electron spectra for the parameters shown at right. It is seen how the characteristic particle energy grows with the laser pulse energy.
Calculations of the total bunch charge and the conversion efficiency were done for the high-energy electrons with ε>ε_min by using the energy cutoff, ε_min, corresponding to the beginning of the plateau in the electron spectra: approximately 15, 30 and 70 MeV for the laser energies of 0.55, 2.2 and 20 J, respectively. Table <ref> summarizes the data for these charges and conversion rates.
For any laser energy considered, the maximum bunch charge is achieved for the shortest laser pulse. The same, or an even more pronounced, trend is observed for the conversion rate. Note the good universality of the conversion rate with respect to energy: it turns out to be approximately at the same level for the same durations.
As a final point, we check the scalings discussed in Sec. <ref>, i.e. whether the simulation data really follow the dependencies on the laser energy proposed there, using the examples of the 10 fs laser pulses.
For the options discussed, the pulse energies were in the ratio 1:2^2:6^2. The ratio of the total charges from the simulations, Q_> ε_min, was 1:2:6 (see the fifth column of Tab. <ref>). This follows well the square-root dependence, Eq. (<ref>), of the total charge on the laser energy.
The average electron energies for the range ε_> ε_min were found from the simulations to be 74 MeV, 150 MeV and 415 MeV for the 0.55, 2.2 and 20 J laser energies, correspondingly. This ratio reads 1:2:5.6 and also corresponds well to the square-root estimate Eq. (<ref>). Thus, we conclude that the scalings from Sec. <ref> can be used for rough predictions of the electron bunch performance in experiments with various laser installations. Similar scalability of laser-plasma accelerators has also been demonstrated for the bubble regime <cit.>.
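A compact worked check of these two scalings, using only the numbers quoted above:

```latex
\sqrt{1:2^2:6^2} = 1:2:6 \quad\longleftrightarrow\quad Q_{>\varepsilon_{\min}} = 1:2:6,
\qquad
74:150:415\ \mathrm{MeV} \approx 1:2.0:5.6 \quad\text{vs.}\quad \sqrt{1:4:36} = 1:2:6.
```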
§ CONCLUSION
We have demonstrated that, to provide the most efficient conversion of laser energy into electron bunch energy, the pulse duration should be as short as possible. To ensure formation of the laser bullet, when the conversion rate is maximum, ultra-relativistic intensities are required, a_0 > 10. This can be achieved by laser pulse compression, e.g. by using the CafCA approach <cit.>, and makes electron acceleration possible in denser plasmas. In a denser medium the generated particle bunch gains a higher total charge, e.g. 3 nC and 10 nC for laser pulses of the same energy, 2.2 J, and of 40 fs and 10 fs durations, correspondingly. In rather dense plasma the longer laser pulses undergo self-modulation, which results in additional energy loss. However, there may be cases where the total accelerated charge can be higher than in the low-density plasma.
It has been shown that the characteristics of the electron bunch are scalable with laser pulse energy and duration according to simple estimates, namely Q_0 ∝√(W_L/τ) and ε_max∝ (W_L τ)^1/2.
Whereas a compressed laser pulse interacting with a high-density plasma is preferable for providing the highest conversion rate and total electron bunch charge at reasonably high but not extreme energies, the production of a monoenergetic particle beam with the highest energy requires low-density plasmas.
The research performed could be of interest as a basis for radiation-nuclear applications, such as betatron and Bremsstrahlung x-ray/gamma sources, photo-nuclear neutron and isotope production, meson factories, radiotherapy electron sources, etc. Note that the high efficiency of electron production in the RST regime already opens the way for such applications with currently available commercial lasers.
This work was supported in part by the Ministry of Science and Higher Education of the Russian Federation (agreement no. 075-15-2021-1361) and the Theoretical Physics and Mathematics Advancement Foundation "BASIS" (grant no. 22-1-3-28-1).
99
Esarey_2009 E. Esarey, C. B. Schroeder, and W. P. Leemans, Rev. Mod. Phys. 81, 1229 (2009).
Pukhov_2002 A. Pukhov and J. Meyer-ter-Vehn, Appl. Phys. B: Lasers Opt. 74, 355 (2002).
Wang_2013 X. Wang, R. Zgadzaj, N. Fazel, Zh. Li, S. A. Yi, Xi Zhang, W. Henderson, Y.-Y. Chang, R. Korzekwa, H.-E. Tsai, C.-H. Pai, H. Quevedo, G. Dyer, E. Gaul, M. Martinez, A. C. Bernstein, T. Borger, M. Spinks, M. Donovan, V. Khudik, G. Shvets, T. Ditmire, and M. C. Downer, Nat. Commun. 4, 1988 (2013).
Clayton_2010 C. E. Clayton, J. E. Ralph, F. Albert, R. A. Fonseca, S. H. Glenzer, C. Joshi, W. Lu, K. A. Marsh, S. F. Martins, W. B. Mori, A. Pak, F. S. Tsung, B. B. Pollock, J. S. Ross, L. O. Silva, D. H. Froula, Phys. Rev. Lett. 105, 105003 (2010).
Pukhov_1999 A. Pukhov, Z.-M. Sheng, and J. Meyer-ter-Vehn, Phys. Plasmas 6, 2847 (1999).
Gahn_1999 C. Gahn, G. D. Tsakiris, A. Pukhov, J. Meyer-ter-Vehn, G. Pretzler, P. Thirolf, D. Habs, K. J. Witte, Phys. Rev. Lett. 83, 4772 (1999).
Mangles_2005 S. P. D. Mangles, B. R. Walton, M. Tzoufras, Z. Najmudin, R. J. Clarke, A. E. Dangor, R. G. Evans, S. Fritzler, A. Gopal, C. Hernandez-Gomez, W. B. Mori, W. Rozmus, M. Tatarakis, A. G. R. Thomas, F. S. Tsung, M. S. Wei, K. Krushelnick, Phys. Rev. Lett. 94, 245001 (2005).
Rosmej_2020 O. N. Rosmej, M. Gyrdymov, M. M. Günther, N. E. Andreev, P. Tavana, P. Neumayer, S. Zähter, N. Zahn, V. S. Popov, N. G. Borisenko, A. Kantsyrev, A. Skobliakov, V. Panyushkin, A. Bogdanov, F. Consoli, X. F. Shen and A. Pukhov, Plasma Phys. Control. Fusion 62, 115024 (2020).
Gordienko_2005 S. Gordienko and A. Pukhov, Phys. Plasmas 12, 043109 (2005).
Bychenkov_2019 V. Yu. Bychenkov, M. G. Lobok, V. F. Kovalev, and A. V. Brantov, Plasma Phys. Control. Fusion 61, 124004 (2019).
Lobok_2019 M. G. Lobok, A. V. Brantov, and V. Yu. Bychenkov, Phys. Plasmas 26, 123107 (2019).
Kovalev_2020 V. F. Kovalev and V. Yu. Bychenkov Phys. Rev. E 99, 043201 (2019); V. Yu. Bychenkov and V. F. Kovalev, Radiophysics and Quantum Electronics 63, 742 (2021).
Talanov_1964 V. I. Talanov, Izv. Vysshikh Uchebn. Zavedenii, Radiofiz. 7, 564 (1964).
Chiao_1964 R. Y. Chiao, E. Garmire, C. Townes, Phys. Rev. Lett. 13, 479 (1964).
Ahmanov_1966 S. A. Akhmanov, A. P. Sukhorukov, R. V. Khokhlov, Soviet Phys. JETP 23, 1025 (1966).
Lobok_2021 M. G. Lobok, I. A. Andriyash, O. E. Vais, V. Malka, V. Yu. Bychenkov, Phys. Rev. E 104, L053201 (2021).
Khazanov_2019 E. A. Khazanov, S. Yu. Mironov, G. Mourou, Phys.-Usp. 62, 1096 (2019).
Ginzburg_2020 V. Ginzburg, I. Yakovlev, A. Zuev, A. Korobeynikova, A. Kochetkov, A. Kuzmin, S. Mironov, A. Shaykin, I. Shaikin, E. Khazanov, G. Mourou, Phys. Rev. A 101, 013829 (2020).
our_JETPhLetters O. E. Vais, M. G. Lobok, A.A. Soloviev, S.Yu. Mironov, E. A. Khazanov, V. Yu. Bychenkov, Jetp. Lett. 118, 875-880 (2023).
Lobok_2018 M. G. Lobok, A. V. Brantov, D. A. Gozhev, and V. Yu. Bychenkov, Plasma Phys. Control. Fusion 60, 084010 (2018).
Sun_1987 G.-Zh. Sun, E. Ott, Y. C. Lee, and P. Guzdar, Phys. Fluids 30, 526 (1987).
Lu_2006 W. Lu, C. Huang, M. Zhou, W. B. Mori, and T. Katsouleas, Phys. Rev. Lett. 96, 165002 (2006).
Kostyukov_2004 I. Kostyukov, A. Pukhov, S. Kiselev, Physics of Plasmas. 11, 5256 (2004).
Lu_2007 W. Lu, M. Tzoufras, C. Joshi, F. S. Tsung, W. B. Mori, J. Vieira, R. A. Fonseca, and L. O. Silva, Phys. Rev. ST Accel. Beams 10, 061301 (2007).
Poder_2024 K. Põder, J.C. Wood, N.C. Lopes, J.M. Cole, S. Alatabi et al., Phys. Rev. Lett. 132, 195001 (2024).
Katsouleas_1987 T. Katsouleas, S. Wilks, P. Chen, J. M. Dawson and J. J. Su, Particle Accelerators 22, 81 (1987).
Jansen_2014 O. Jansen, T. Tückmantel, and A. Pukhov, Eur. Phys. J. Spec. Top. 223, 1017 (2014).
Decker_1996 C. D. Decker, W. B. Mori, K.C. Tzeng, and T. Katsouleas, Phys. Plasmas 3, 2047 (1996).
VORPAL C. Nieter and J. R. Cary, J. Comput. Phys. 196, 448 (2004).
Kovalev_2024 V. Yu. Bychenkov and V. F. Kovalev, submitted to JETP Lett.
Nerush_2009 E. N. Nerush and I. Yu. Kostyukov, Phys. Rev. Lett. 103, 035001 (2009).
Andreev_1995 N.E. Andreev, V.I. Kirsanov, L.M. Gorbunov, Phys. Plasmas 2, 2573 (1995).
Mangles_2012 S. P. D. Mangles, G. Genoud, M. S. Bloom, M. Burza, Z. Najmudin, A. Persson, K. Svensson, A. G. R. Thomas, and C.-G. Wahlström, Phys. Rev. ST Accel. Beams 15, 011302 (2012).
Faure_2004 J. Faure, Y. Glinec, A. Pukhov, S. Kiselev, S. Gordienko, E. Lefebvre, J.-P. Rousseau, F. Burgy, V. Malka, Nature 431, 541 (2004).
Nemeth_2008 K. Németh, B. Shen, Yu. Li, H. Shang, R. Crowell, K. C. Harkay, J.R. Cary, Phys. Rev. Lett. 100, 095002 (2008).
IEEE_TRANS._PLASMA_SCIENCE-1996 N.E. Andreev, V.I. Kirsanov, L.M. Gorbunov, A.S. Sakharov, IEEE Trans. Plasma Sci. 24, 363 (1996).
Perevalov_2020 S. E. Perevalov, K. F. Burdonov, A. V. Kotov, D. S. Romanovskiy, A. A. Soloviev, M. V. Starodubtsev, A. A. Golovanov, V. N. Ginzburg et al. Plasma Phys. Control. Fusion 62, 094004 (2020).
Kostyukov_2009 I. Kostyukov, E. Nerush, A. Pukhov, V. Seredov, Phys. Rev. Lett. 103, 175003 (2009).
Pukhov_2006 A. Pukhov and S. Gordienko, Phil. Trans. R. Soc. A 364, 623 (2006).
|
http://arxiv.org/abs/2409.03468v1 | 20240905122336 | Dynamics of Small Solid Particles on Substrates of Arbitrary Topography | [
"Quan Zhao",
"Wei Jiang",
"Yan Wang",
"David J. Srolovitz",
"Tiezheng Qian",
"Weizhu Bao"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci"
] |
1]Quan Zhao
[1]School of Mathematical Sciences, University of Science and Technology of China, Hefei, China
2]Wei Jiang8
[2]School of Mathematics and Statistics, Wuhan University, Wuhan, China
[email protected]
[8]Corresponding author.
3]Yan Wang
[3] School of Mathematics and Statistics, and Key Lab NAA–MOE Central China Normal University, Wuhan, China
4]David J. Srolovitz
[4]Department of Mechanical Engineering, The University of Hong Kong, Hong Kong SAR, China
5]Tiezheng Qian
[5]Department of Mathematics, Hong Kong University of Science and Technology, Hong Kong, China
6]Weizhu Bao
[6]Department of Mathematics, National University of Singapore, Singapore
§ ABSTRACT
We study the dynamics of a small solid particle arising from the dewetting of a thin film on a curved substrate driven by capillarity, where mass transport is controlled by surface diffusion.
We consider the case when the size of the deposited particle is much smaller than the local radius of curvature of the substrate surface.
The application of the Onsager variational principle leads to a reduced-order model for the dynamic behaviour of particles on arbitrarily curved substrates.
We demonstrate that particles move toward regions of the substrate surface with lower mean curvature with a well-defined velocity.
In particular, the velocity is proportional to the substrate curvature gradient and inversely proportional to the size of the particle, with a coefficient that depends on material properties that include the surface energy, surface diffusivity, density, and Young's (wetting) angle.
The reduced model is validated by comparing with numerical results for the full, sharp-interface model in both two and three dimensions.
Surface diffusion, Onsager principle, Solid-state dewetting, Substrate curvature gradient, Wasserstein distance
§ INTRODUCTION
The large and increasing diversity of thin film-based technological applications has led to growing interest in how thin films dewet from or islands forms on substrates (e.g., see <cit.>). Initially continuous thin films are often unstable and dewet or agglomerate to form isolated particles/island in order to minimize the total interfacial energies.
These observed morphology changes are mainly due to capillary effects, which most commonly occur via diffusional mass transport along the surface <cit.>.
In the case of isotropic surface energy, the normal velocity of the evolving interface is proportional to the surface Laplacian of the local mean curvature <cit.>.
The dynamics of the contact line, where the thin film/vapor interface meets the substrate, is an additional, important kinetic feature in the evolution of the morphology.
In particular, at the contact line, the force or line tension balance along the substrate implies an equilibrium contact angle (i.e., Young's law <cit.>).
A growing body of experimental and theoretical efforts has focused on the solid state dewetting mechanisms (see e.g., <cit.>).
In general, the film surface morphology evolution is influenced by many parameters including film thickness, film microstructure, surface tension anisotropy, relative surface and interface energies, stress in the film, and the elastic constants of the film and substrate.
More recently, templated dewetting has drawn significant attention; this refers to patterning the shape of the substrate or the film to produce the desired dewetted island microstructures.
Indeed, experiments have shown how topographically patterned substrates can be employed to produce particles of near-uniform size in patterned arrays <cit.>. Moreover, recent experiments on deposited thin films of platinum on a sinusoidally modulated alumina substrate have provided clear evidence of the migration of particles from convex to concave sites of the substrate <cit.>. While most theoretical and simulation studies focus on flat substrates, relatively little attention has been devoted to dewetting on non-flat substrates (despite the growing interest in experimental studies).
In <cit.>, a continuum simulation approach was used to study dewetting on a sinusoidal substrate leading to a periodic array of islands; some related research can also be found in <cit.>. A mathematical understanding of island dynamics on topographically patterned substrates would be valuable to guide the precise control of the dewetting process to produce desired self-assembled islands through templating <cit.>.
Recent years have seen a great deal of experimental and theoretical activity focused on the dynamics of liquid droplets on curved substrates, see e.g., <cit.>.
In such a case, the driving force for evolution is related to the local substrate curvature gradient which drives rapid droplet motion.
Chen and Xu <cit.> derived a reduced-order model for this type of dynamical system and demonstrated quantitative agreement with experiment.
Their model was based on the Onsager variational principle <cit.>, which provides a useful approximate framework for describing irreversible thermodynamic processes.
This approach is based on the minimization of the Rayleighian, which has contributions from the free energy change rate and dissipation function which are described in terms of a small number of suitable state variables.
This leads to a set of ordinary differential equations for these state variables and forms a reduced-order model (see below Section <ref> for a brief description of this variational principle).
In this paper, we apply the Onsager principle to provide a reduced-order model for the motion of solid particles on curved substrates, where the dynamics is controlled via surface diffusion (unlike fluid droplets evolving under viscous momentum transport).
For ease of analysis, we assume that the interface energies are isotropic, the elastic effects of the thin films are negligible and no chemical reactions or phase transformations occur.
We focus on the case of a small particle on a topographically-patterned substrate, in which the length scale of the particle is much smaller than the radius of curvature of any point on the substrate surface.
We show how the total free energy of the system can be related to the substrate curvature <cit.> and show how the dissipation function is uniquely determined by the surface normal velocity in two dimensions (2D).
In three dimensions (3D), we develop an alternative approach to compute the dissipation function by connecting it with the Wasserstein metric <cit.>, which leads to a constrained minimization problem.
We demonstrate that in both 2D and 3D that the new derived model is both simple and quantitative via several examples.
The remainder of the paper is organized as follows.
In Section <ref>, we introduce the full sharp-interface model for the dewetting of solid thin films on a curved substrate and include a brief review of the Onsager variational principle.
Next, in Section <ref>, we apply this principle to derive a reduced-order model for the dynamics of a 2D particle and validate the model against numerical results from solving the full model.
In Section <ref>, we generalize this approach to three dimensions. In Section <ref>, we provide some generalizations of the proposed approach to the cases
of chemically inhomogeneous flat substrates and anisotropic solid particles.
Finally, we draw some conclusions in Section <ref>.
§ PRELIMINARIES
In this section, we first introduce the full sharp-interface model for the dewetting system and then give a short review of the Onsager principle.
§.§ The full sharp-interface model
Consider a solid thin film deposited on a curved substrate, as shown in Fig. <ref>.
The film/vapor interface is represented by an open hypersurface S(t) in ℝ^d, d∈{2,3}.
We assume a parameterization of the film surface S(t) over the reference domain 𝒪⊂ℝ^d-1 (one dimension lower than the ambient space), given by
X⃗(ρ⃗, t): 𝒪×[0,T]↦ℝ^d.
The velocity of the interface S(t) is
V⃗(X⃗(ρ⃗,t), t) = ∂_tX⃗(ρ⃗,t) for X⃗∈ S(t).
Let n⃗ and ℋ be the unit normal and the mean curvature of the interface S(t), respectively.
Assuming isotropic diffusion and surface tension, the interface dynamics can be expressed as <cit.>
V_n = -Ω_0∇_s·j⃗,
j⃗ = -D_sν/k_b T∇_sμ,
μ = Ω_0γ_0ℋ,
where V_n=V⃗·n⃗ is the velocity of the interface in the direction of n⃗, j⃗ is the flux of surface atoms, μ is the chemical potential of a film atom on the surface, and ∇_s is the surface gradient operator.
The physical parameters/constants are: Ω_0, the volume per atom of the film material; D_s, the surface diffusivity; k_b T, the thermal energy; ν, the number of diffusing atoms per unit area (in the direction normal to the surface flux vector); and γ_0, the isotropic surface energy density.
Combining terms, (<ref>) can be rewritten as
V_n = Bγ_0 Δ_sℋ,
where B=D_sνΩ_0^2/k_b T is a material constant and Δ_s is the Laplace-Beltrami (Laplacian) operator.
At the contact line Γ(t) where the film/varpor interface meets the substrate, we impose the following boundary conditions:
(i)attachment condition
V⃗·n⃗_w = 0;
(ii)contact angle condition
n⃗·n⃗_w + cosθ_i=0;
(iii)zero-flux condition
n⃗_c·j⃗ = 0.
Here, n⃗_w is the substrate unit normal (positive pointing towards the substrate interior) and n⃗_c is the conormal vector of Γ(t), as shown in Fig. <ref>.
Moreover, θ_i is the equilibrium, isotropic Young angle, which satisfies
cosθ_i=(γ__-γ__)/ γ_0,
with γ__ and γ__ representing the varpor/substrate and film/substrate surface energy densities, respectively. Condition (ii) can be interpreted as the contact angle condition; this leads to the Young angle θ_i between n⃗ and -n⃗_w at the contact line.
We note that under some conditions, the dynamic contact angle may differ from the Young's contact angle condition (e.g., see <cit.>).
The total free energy of the dynamic system is given by
W(t) = γ_0|S(t)|-γ_0cosθ_i A_ sub(Γ(t)),
where |S(t)| is the surface area of S(t), and A_ sub(Γ(t)) represents the substrate surface area covered by the film/island (i.e., enclosed by Γ(t)).
The free energy of the evolving film/substrate system satisfies
dW(t)/dt = -k_b T/(D_s ν)∫_S(t)|j⃗|^2 dS≤ 0.
We also assume that the volume of the film material is conserved (we do not consider phase transformations or strains); i.e., (Ω(t)) = (Ω(0)) for all time t≥0.
§.§ Onsager variational principle
The Onsager variational principle was first formulated <cit.> based on the reciprocal symmetry in a linear irreversible thermodynamic process.
This fundamental principle provides a general framework to describe non-equilibrium kinetics in cases where linear response is applicable and has found wide applications in fluid dynamics <cit.>, soft matter physics <cit.> and solid-state dewetting <cit.>. We first provide a short review of this principle.
Consider an isothermal system described by a set of time-dependent state variables
β(t) = (β_1(t),β_2(t),…, β_n(t)),
and β̇(t)=(β̇_1(t),β̇_2(t),…,β̇_n(t)) are the rates of change of these state variables (the raised dot “·” denotes a time derivative).
We further introduce W(β) as the total free energy of the system; the rates {β̇_i} are then determined by minimizing the Rayleighian <cit.>
ℛ(β̇,β)=Ẇ(β, β̇)+Φ(β̇,β̇),
where Ẇ(β, β̇)=∑_i=1^n(∂ W / ∂β_i)β̇_̇i̇ is the rate of change of the total free energy W
and Φ(β̇,β̇) is the dissipation function.
In the linear response regime, the dissipation function is a quadratic function of the rates {β̇_i}
Φ(β̇,β̇)=1/2∑_i=1^n∑_j=1^nλ_ij(β)β̇_iβ̇_j,
where the friction coefficients {λ_ij} form a positive definite, symmetric matrix.
Minimizing the Rayleighian (<ref>) with respect to rates {β̇_i} yields the kinetic equations
-∂ W/∂β_i=∑_j=1^nλ_ijβ̇_j, i=1,2,…,n,
which precisely gives the force balance between the reversible force -∂ W/ ∂β_i and the dissipative force -∂Φ / ∂β̇_i. Multiplying (<ref>) by β̇_i and summing over i (and recalling (<ref>)) yields
Φ(β̇,β̇)=-1/2Ẇ(β,β̇).
This means that the dissipation function is half the rate of the free energy dissipation.
Physically, the variational principle for isothermal systems can also be derived from the maximization of the Onsager-Machlup action that is used for more general non-isothermal systems <cit.>.
In the present work, we apply the Onsager variational principle to describe the dynamics
of a small particle migrating on a curved substrate.
We focus on cases in which the full model, introduced in <ref>, can be approximately described by a finite set of suitable state variables.
Application of the Onsager principle enables us to obtain a reduced model for a continuous dissipative system which is governed by a set of ordinary differential equations for a few state variables.
Note that the full model satisfies the energy dissipation law in (<ref>) and can be obtained as well by applying the Onsager principle in the linear response regime,
where the constitutive equation (<ref>) is derived for the flux j⃗.
§ DYNAMICS IN TWO DIMENSIONS
We now apply the Onsager variational principle in <ref> to derive a reduced-order model for the dynamics of a particle on a substrate in 2D.
The reduced model is then numerically validated by comparisons with the full model in <ref>.
§.§ A reduced-order model
We assume that the particle/island is much smaller than the radius of curvature of the substrate.
The separation of these two length scales leads to two distinct time scales:
one for the island to establish the circular shape and the other for the island to migrate along the substrate.
In particular, the latter is much slower than the former.
Therefore, it is reasonable to assume that the particle will remain circular during its (relatively) slow motion along the substrate.
As shown in Fig. <ref>(a), we assume that the film/vapor interface of the particle is given by section of a (small) circle of radius r(t) and the substrate is locally approximated by a (large) circle of radius R(P), where P(t) represents the intersection point of the substrate and the straight line that connects the centers of the two circles (at all times R≫ r).
This gives rise to a parameterization of the interface profile X⃗(θ, t)=(x(θ, t), y(θ, t))^T as
{[ x(θ,t)=P(t)+r(t)sinθ,; y(θ,t)=r(t)(cosθ-cosθ̂), ]. θ∈[-θ̂,θ̂],
where θ̂=θ_i+α(P) with rsinθ̂ = Rsinα.
The conserved particle area is
A_0= r^2ζ(θ̂) - R^2ζ(α),
where ζ(α) = α -cosαsinα.
For 0 ≤α≪ 1, this yields the following identities
√(A_0)/R := √(sin^2α/sin^2θ̂ ζ(θ̂) - ζ(α)) ,
r/√(A_0) :=√(sin^2α/sin^2α ζ(θ̂) - sin^2θ̂ ζ(α)).
Taylor expanding (<ref>) about α = 0 gives
√(A_0)/R = √(ζ(θ_i))/sinθ_iα + O(α^2),
r/√(A_0) = 1/√(ζ(θ_i)) + O(α).
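These expansions are easy to spot-check numerically. The short Python sketch below (our own code, with θ_i = π/3 as an arbitrary test angle) evaluates the exact expressions of (<ref>) for decreasing α and normalizes them by the leading-order terms of (<ref>), so both printed columns should approach 1:

```python
import numpy as np

zeta = lambda t: t - np.cos(t) * np.sin(t)
theta_i = np.pi / 3
for alpha in (1e-2, 1e-3, 1e-4):
    th = theta_i + alpha                                   # theta_hat = theta_i + alpha
    lhs1 = np.sqrt(np.sin(alpha)**2 / np.sin(th)**2 * zeta(th) - zeta(alpha))
    lhs2 = np.sqrt(np.sin(alpha)**2 /
                   (np.sin(alpha)**2 * zeta(th) - np.sin(th)**2 * zeta(alpha)))
    print(lhs1 / (np.sqrt(zeta(theta_i)) / np.sin(theta_i) * alpha),   # -> 1
          lhs2 * np.sqrt(zeta(theta_i)))                               # -> 1
```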
Recalling (<ref>), the total free energy of the approximate 2D system can be written as
W = 2γ_0(rθ̂ - Rcosθ_i α).
Using (<ref>) in (<ref>) yields
W(P)=2γ_0√(A_0) (√(ζ(θ_i)) + sin^2θ_i/3 √(ζ(θ_i))α + O(α^2)).
Taking the time derivative of the total free energy and using (<ref>), we obtain
Ẇ(P, Ṗ) = 2γ_0 √(A_0) sin^2θ_i/3 √(ζ(θ_i))α^'(P)Ṗ +O(α^2)
≈2γ_0 A_0 sin^3θ_i /3 ζ(θ_i)κ^'(P) Ṗ,
where we introduced the substrate curvature (in 2D) κ(P) = R(P)^-1 and primes denote derivatives with respect to P.
To this point, we restricted ourselves to the case when the curvature of the substrate is positive at the point of interest, implying that R(P(t)) > 0 and α≥ 0, see Fig. <ref>(a).
When the substrate curvature is negative, R(P(t)) < 0; equations (<ref>), (<ref>), (<ref>) and (<ref>) hold as well except that 0<-α≪ 1, as shown in Fig. <ref>(b).
Recalling (<ref>), we write the dissipation function (<ref>) as
Φ=1/2 k_b T/(D_s ν)∫_S(t)|j⃗|^2 dS =1/2 k_b T/(D_s ν)∫_S(t)|J|^2 dS,
where J(θ)=j⃗·τ is the magnitude of the flux and τ is the unit interface tangent.
Using the kinematic equation (<ref>), we find
∂_s J(θ) = -1/Ω_0 V_n(θ), with J(±θ̂)=0,
where θ∈[-θ̂,θ̂] and V_n(θ) is
V_n(θ) = Ṗsinθ + ṙ(1-cosθ_icosθ),
on recalling (<ref>) and n⃗ = (sinθ, cosθ)^T.
Using (<ref>) and integrating (<ref>) then yields
J(θ) = -1/Ω_0∫_-θ̂^θ r V_n(θ') dθ'
= -1/Ω_0Ṗ r(cosθ_i - cosθ)+ O(α),
where we note ṙ = O(α) because of (<ref>).
Inserting (<ref>) into (<ref>) and using (<ref>), we find
Φ(Ṗ) = 1/2 k_b T/(D_s ν)∫_-θ̂^θ̂|J|^2 r dθ
= C_2^0(θ_i) B^-1 r^3Ṗ^2 +O(α^2)
≈ C_2^0(θ_i) B^-1√(A_0^3/ζ^3(θ_i))Ṗ^2,
where B=D_sνΩ_0^2/(k_b T) and
C_2^0(θ_i)=1/2(θ_i + 2θ_icos^2θ_i - 3sinθ_icosθ_i).
Note that for the dynamics in 2D, the flux J(θ) is completely determined from
Ṗ and V_n(θ), and hence the coefficient of Φ(Ṗ)∝Ṗ^2 is
readily obtained from (<ref>)
(this is not the case in 3D).
Using (<ref>) and (<ref>) in (<ref>), and applying the Onsager principle by minimizing the Rayleighian ℛ with respect to Ṗ, we thus obtain the following ODE for the state variable P (see also Equation (16) in <cit.>):
d P/ d t = - Bγ_0 C_2(θ_i)/√(A_0) κ ' (P),
P(0) = P_0,
where
C_2(θ_i) =2sin^3(θ_i) √(θ_i- cosθ_isinθ_i)/3(θ_i+2θ_icos^2θ_i - 3sinθ_icosθ_i).
Equation (<ref>) prescribes the velocity of the particle on the substrate when |α|≪ 1.
It shows that the velocity is proportional to the substrate curvature gradient and is inversely proportional to the length scale of the particle, i.e., √(A_0).
Moreover, the coefficient C_2(θ_i) is a monotone decreasing function for θ_i∈[0,π].
This implies that the velocity tends to zero as θ_i approaches π. Physically, a larger contact angle θ_i implies a weaker coupling between the particle and the substrate (with the contact length or area approaching zero for θ_i→π), and hence the migration velocity Ṗ becomes less responsive to substrate curvature gradients.
§.§ Numerical validation
We choose the length scale L_0, time scale L_0^4/Bγ_0 and use the quantities with hats (·̂) to denote dimensionless physical quantities.
The full model (<ref>) can then be rewritten in dimensionless form as
V̂_n = Δ̂_sℋ̂,
with boundary conditions at the contact line:
(i)attachment condition
V̂⃗̂·n⃗_w = 0;
(ii)contact angle condition
n⃗·n⃗_w + cosθ_i=0;
(iii)zero-flux condition
n⃗_c·∇̂_sℋ̂ = 0.
The reduced model, in dimensionless form, is
dP̂/ dt̂ = - C_2(θ_i)/√(Â_0) κ̂ ' (P̂),
P̂(0) = P̂_0.
We then conduct numerical comparisons between the two models.
We employ a parametric finite element method to solve the full sharp-interface model (see <cit.>).
The system of ODEs for the reduced model is solved via the forward Euler method.
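For concreteness, a minimal Python sketch of this forward Euler integration of the dimensionless reduced model is given below. The function and parameter names are our own, and the constant curvature gradient κ̂' = -0.01 mirrors the test case that follows; the drift speed should equal C_2(θ_i)|κ̂'|/√(Â_0).

```python
import numpy as np

def C2(theta_i):
    # coefficient C_2(theta_i) of the reduced 2D model
    s, c = np.sin(theta_i), np.cos(theta_i)
    return (2 * s**3 * np.sqrt(theta_i - c * s)
            / (3 * (theta_i + 2 * theta_i * c**2 - 3 * s * c)))

def evolve(P0, A0, theta_i, kappa_prime, dt=1.0, n_steps=4000):
    # forward Euler for dP/dt = -C2(theta_i)/sqrt(A0) * kappa'(P)
    P = np.empty(n_steps + 1)
    P[0] = P0
    for n in range(n_steps):
        P[n + 1] = P[n] - dt * C2(theta_i) / np.sqrt(A0) * kappa_prime(P[n])
    return P

P = evolve(P0=0.0, A0=1.0, theta_i=np.pi / 3, kappa_prime=lambda P: -0.01)
print(P[-1] / (4000 * 1.0))   # constant drift speed toward lower substrate curvature
```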
We first perform a full model simulation of the migration of a small particle (Â_0=0.4) on a substrate, which is modeled by a curve satisfying κ̂^'(s)=-0.01, where s is the arc length parameter.
Initially, we place a square particle at a position with κ̂=0.05 and then capture the dynamics of the particle to a position with κ̂ = -0.05.
As shown in Fig. <ref>, we observe that the particle quickly adopts a circular shape and maintains it while gradually migrating towards a position of lower curvature.
We next conduct a series of simulations for a particle on substrates with different curvature gradients, with the other parameters fixed (Â_0=1, θ_i=π/3).
The numerical results obtained from the full model are presented in Fig. <ref>(a).
We observe that the position of the particle varies approximately linearly with time and the speed of the particle is proportional to the substrate curvature gradient.
In particular, the speed of the particle is very similar to the analytical results from the reduced model, as shown in Fig. <ref>(b).
To further validate the reduced model, we next study the dependence of the particle velocity on A_0 and θ_i, with the curvature gradient of the substrate fixed at κ̂^' = -0.01. It can be readily seen that the velocity of the particle is inversely proportional to √(A_0) (Fig. <ref>(c)) and proportional to C_2(θ_i) (Fig. <ref>(d)). In particular, the results show excellent quantitative agreement with the reduced model.
Finally, we investigate the dynamics of a particle on a general sinusoidal substrate described by y = 4sin(x/4) with Â_0=1 and θ_i=π/3.
The numerical results from the full sharp-interface model are compared with those from solving the system of ODEs in the reduced model, (<ref>).
The full and reduced model results are in excellent quantitative agreement, as observed in Fig. <ref>.
This not only validates our reduced-order model in <ref>, but also
verifies the accuracy of the numerical results from the full sharp-interface model. Moreover, we note that these theoretical results are also consistent with those reported in <cit.>.
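To illustrate how a general substrate enters the reduced model in this example, the following sketch (our own code, reusing C2 and the numpy import from the snippet above) evaluates the curvature of the graph y = 4 sin(x/4) and its arc-length derivative, and advances the contact point accordingly; the curvature is signed as in Fig. <ref>, with convex bumps positive.

```python
g   = lambda x: 4 * np.sin(x / 4)
gp  = lambda x: np.cos(x / 4)
gpp = lambda x: -np.sin(x / 4) / 4
# substrate curvature, signed so that convex bumps are positive
kappa = lambda x: -gpp(x) / (1 + gp(x)**2)**1.5

def dkappa_ds(x, h=1e-5):
    # curvature gradient with respect to arc length: dk/ds = (dk/dx)/sqrt(1+g'^2)
    return (kappa(x + h) - kappa(x - h)) / (2 * h) / np.sqrt(1 + gp(x)**2)

x, dt = 1.0, 0.5
for _ in range(200000):
    dPdt = -C2(np.pi / 3) / np.sqrt(1.0) * dkappa_ds(x)   # A0 = 1, theta_i = pi/3
    x += dt * dPdt / np.sqrt(1 + gp(x)**2)                # convert the arc-length step to dx
print(x / np.pi)   # settles near x = -2*pi, a valley (curvature minimum) of the substrate
```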
§ DYNAMICS IN THREE DIMENSIONS
§.§ The reduced model
Inspired by the work in <ref>, we now consider the extension of the reduced model from 2D to 3D.
Given a position P⃗ =(P_x, P_y, 0)^T on the substrate surface, we locally approximate the substrate at P⃗ as a sphere of radius R(P⃗)=1/κ(P⃗), where κ(P⃗)=1/2(κ_1 + κ_2) is the mean curvature of the substrate with κ_1 and κ_2 the two principal curvatures. We parameterize the interface S(t) by X⃗:=X⃗(θ,ϕ,t)
{[ x(θ,φ,t)=P_x(t)+r(t)sinθcosϕ,; y(θ,φ,t)=P_y(t)+r(t)sinθsinϕ,; z(θ,φ, t)=r(t)(cosθ-cosθ̂), ].
for θ∈[0,θ̂] and ϕ∈[0,2π], where θ̂= θ_i + α with α satisfying Rsinα = rsinθ̂.
The volume of the particle is the volume difference of two spherical caps
V_0 =π/3(r^3 η(θ̂)-R^3 η(α)),
with η(α) = (1-cosα)^2(2+cosα).
The total free energy of the 3D system is
W = 2πγ_0[r^2 (1-cosθ̂) - R^2 cosθ_i (1-cosα)].
Using (<ref>) in (<ref>) leads to
W(P⃗) = 2γ_0(9π V_0^2/η^2(θ_i))^{1/3} (1-cosθ_i-1/2cosθ_isin^2θ_i)
+3γ_0 V_0/2(1+cosθ_i)^2/(2+cosθ_i)1/R(P⃗) + O(α^2).
Using the fact that the substrate curvature κ(P⃗) = 1/R(P⃗), the time derivative of the energy becomes
Ẇ(P⃗, Ṗ⃗̇) ≈3γ_0 V_0/2(1+cosθ_i)^2/(2+cosθ_i)∇_Γκ·Ṗ⃗̇,
where ∇_Γ represents the curvature gradient along the substrate surface.
In contrast to the 2D case, the flux on the spherical cap in 3D cannot be uniquely determined by Eq. (<ref>), even though the normal velocity of the interface is given.
Nevertheless, the dissipation function can be connected with and interpreted as the Wasserstein distance in the framework of minimum dissipation <cit.>.
This leads to a constrained minimization problem:
min_j⃗ Φ(Ṗ⃗̇)=1/2 k_b T/(D_s ν)∫_S(t)|j⃗|^2 dS
∇_s·j⃗ = -Ṗ⃗̇·n⃗/Ω_0 on S(t),
j⃗·n⃗_c=0 on Γ(t),
in which the dissipation function Φ(Ṗ⃗̇) is obtained through minimization
with respect to j⃗ subject to the constraint imposed by the continuity equation.
This system satisfies rotational invariance, meaning that
Φ(Ṗ⃗̇)∝ |Ṗ⃗̇|^2.
We then introduce the dimensionless flux
j̃⃗̃=Ω_0 j⃗/(r |Ṗ⃗̇|),
to obtain
Φ(Ṗ⃗̇) = 1/2 k_b T/(D_s ν)∫_S(t) r^2Ω_0^-2|Ṗ⃗̇|^2 |j̃⃗̃|^2 dS
=1/2 k_b T/(D_sνΩ_0^2) r^4 |Ṗ⃗̇|^2 ∫_S̃(t)|j̃⃗̃|^2 dS̃
=k_b T/(D_sνΩ_0^2) (3 V_0/(π η(θ_i)))^{4/3} |Ṗ⃗̇|^2 m(θ_i),
where S̃(t) is S(t) rescaled by the length r and m(θ_i)=1/2∫_S̃(t)|j̃⃗̃|^2 dS̃ is a dimensionless function of the Young angle θ_i.
In practice, we compute m(θ_i) via the minimization problem (<ref>) as described in <ref>.
Combining Eqs. (<ref>) and (<ref>) with Eq. (<ref>) and applying the Onsager principle (minimizing the Rayleighian ℛ with respect to Ṗ⃗̇), we obtain our reduced-order model for the motion of a particle in 3D.
The particle velocity is
dP⃗/ d t = - Bγ_0 C_3(θ_i)/V_0^{1/3} ∇_Γκ(P⃗),
P⃗(0) = P⃗_0,
where
C_3(θ_i) = π/4(1-cos^2θ_i)^2/m(θ_i)(π η(θ_i)/3)^{1/3}.
Note that the application of the Onsager variational principle to the dynamics in 3D
consists of two steps:
(i) Obtaining the dissipation function Φ(Ṗ⃗̇) by minimizing the rate of dissipation
with respect to j⃗ subject to the constraint imposed by the continuity equation
for a prescribed Ṗ⃗̇;
(ii) Determination of the migration velocity Ṗ⃗̇ by minimizing the Rayleighian ℛ with respect to Ṗ⃗̇.
The particle moves in the direction opposite to the curvature gradient, so that the trajectory of the particle is determined purely by the substrate topography, while the other physical parameters (e.g., B, γ_0, V_0 and θ_i) only affect how fast the particle moves along this trajectory.
§.§ Numerical validation
To numerically confirm the reduced model for particle motion in 3D, Eq. (<ref>), we compare it with the full model, Eq. (<ref>).
Similar to the 2D case, we normalize all lengths by L_0 and times by L_0^4/Bγ_0 such that the reduced model can be written in the dimensionless form as
dP̂⃗̂/ dt̂ = - C_3(θ_i)/V̂_0^{1/3} ∇̂_Γκ̂(P̂⃗̂),
P̂⃗̂(0) = P̂⃗̂_0.
We solve for the particle trajectory via the forward Euler method.
For the full model, as described in <ref>, we employ parametric finite element approximations - see <cit.>.
As shown in Fig. <ref>(a), we consider an egg-carton-shaped substrate surface
z(x,y) = 1/5[sin^2(x/2)+sin^2(y/2)].
We choose θ_i=π/2 and place a small volume particle (V̂_0=20^-3) at the point (x_0, y_0, z(x_0,y_0)).
In our computation, we start with two different initial positions: (x_0,y_0)=(π+1/10, π+1/2) and (x_0,y_0)=(π+3/10, π+1/10).
The trajectories of the particle in the reduced and full models are compared in Fig. <ref>(b) and Fig. <ref>(c).
Note that the two models show excellent agreement over the entire trajectories, confirming our reduced model for the dynamics of the particle in 3D.
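A minimal sketch of how such reduced-model trajectories can be generated is given below: the mean curvature of the graph substrate is formed symbolically and the position is advanced by forward Euler. All names are ours; the lumped prefactor c3 stands for C_3(θ_i)/V_0^{1/3} and is set to unity, which only rescales time and, as noted above, leaves the trajectory itself unchanged. For this gently sloped substrate we approximate the surface gradient of κ by its in-plane gradient, with κ signed so that convex bumps are positive.

```python
import sympy as sp
import numpy as np

xs, ys = sp.symbols('x y')
f = (sp.sin(xs / 2)**2 + sp.sin(ys / 2)**2) / 5
fx, fy = sp.diff(f, xs), sp.diff(f, ys)
den = sp.sqrt(1 + fx**2 + fy**2)
# mean curvature of the graph z = f(x,y); the leading minus makes convex bumps positive
kappa = -(sp.diff(fx / den, xs) + sp.diff(fy / den, ys)) / 2
gkx = sp.lambdify((xs, ys), sp.diff(kappa, xs), 'numpy')
gky = sp.lambdify((xs, ys), sp.diff(kappa, ys), 'numpy')

P = np.array([np.pi + 0.1, np.pi + 0.5])   # initial position near a bump top
dt, c3 = 0.5, 1.0                          # c3 lumps C3(theta_i)/V0^(1/3)
for _ in range(50000):
    P -= dt * c3 * np.array([gkx(*P), gky(*P)])
print(P / np.pi)   # descends the curvature landscape toward a valley of the egg-carton
```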
To further assess the reduced model, we next consider more general cases by varying the volume of the particle and the Young angle θ_i.
We compare the time history of the x-position of the particle (the initial particle position was (x_0,y_0)=(π+1/10, π+1/2)) from the two models by fitting m(θ_i) in (<ref>).
The results are shown in Figs. <ref>(d)-(f).
Again, we observe excellent consistency between the full and reduced models.
In our final numerical test, we focus on the verification of the dissipation function (<ref>) in the reduced model.
m(θ_i) in (<ref>) can be computed in a minimization framework, as discussed in <ref>.
For the full model, we introduce an analogue of m(θ_i) as
m̂(θ_i)= 1/2 |dP̂⃗̂/dt̂|^{-2}(π η(θ_i)/(3 V̂_0))^{4/3}∫_Ŝ(t)|∇̂_sĤ|^2 dŜ.
If the dissipation law for the migrating particle is adequately approximated by (<ref>), then m̂(θ_i) should remain approximately constant in time and close to m(θ_i).
We test this for several different Young angles (V_0=L_0^3/80^3).
The time history of m̂(θ) as well as the constant m(θ_i) are shown in Fig. <ref>.
Indeed, we observe a small oscillation of m̂(θ_i) about a constant which is slightly larger than m(θ_i).
These discrepancies may be associated with errors in the asymptotic approximations of the reduced model or with numerical errors in the computation of the full sharp-interface model, see <ref>.
Finally, we assess the quantitative agreement between m(θ_i) and m̂(θ_i) as functions of θ_i in Fig. <ref>.
Again, the agreement is good and consistent across two orders of magnitude, from 10^-2 to 1. Furthermore, m(θ_i), obtained from the constrained minimization, is everywhere very close to m̂(θ_i) from the full model computation, as expected, since m(θ_i) is a theoretical value in the minimum dissipation framework, while m̂(θ_i) is a numerical value measured in the simulations. The relative error may be attributed to the following:
(i) In the full model computation, the particle is not small enough compared to
the radius of curvature of the substrate, implying that the equilibration of the particle shape is not fast enough compared with the migration along the substrate as assumed in the reduced model;
(ii) The numerical results produced in the full model computation are not sufficiently accurate;
we observe that when the parameter θ_i is near π (e.g., 3π/4), a large oscillation in m̂(θ_i) occurs, which may make the full model numerical computation (see <ref>) unreliable.
This further suggests the utility of the reduced-order model (<ref>) for such particle migration studies.
§ GENERALIZATIONS
§.§ For chemically inhomogeneous substrates
We apply our Onsager-principle framework to the case of thin films deposited on a flat, chemically inhomogeneous substrate. The inhomogeneity drives the particle to move over the substrate in a manner similar to a droplet, see <cit.>. We assume that the material parameter θ_i of the substrate is a slowly varying function of position. Thus there is a fast time scale on which the thin film forms a circular shape and a slow time scale for the migration of the particle.
In the 2D case, using (<ref>) and (<ref>) and letting R→+∞, it is not difficult to obtain that
W(P) = 2γ_0 √(A_0 ζ(θ_i)),
whose time derivative is
Ẇ(P,Ṗ) = γ_0 √(A_0) ζ^'(θ_i) θ_i^'(P) Ṗ/√(ζ(θ_i)).
We note that the dissipation function for the interface remains unchanged and can again be computed from (<ref>). Combining (<ref>) and (<ref>) and applying the Onsager principle then yields the following ODE for P,
d P/ d t = - 1/2 Bγ_0/A_0 ζ(θ_i) ζ^'(θ_i)/C_2^0(θ_i) θ_i^'(P),
P(0) = P_0.
This implies that the velocity of the particle is proportional to the gradient of the Young angle θ_i. Moreover, the parameter θ_i also determines how fast the particle moves over the substrate.
For the 3D particle on the flat and chemically inhomogeneous substrates, on recalling (<ref>), we have
W(P⃗) = 2γ_0(9π V_0^2)^{1/3} C_0(θ_i),
where
C_0(θ_i) = [η(θ_i)]^-2/3(1-cosθ_i-1/2cosθ_isin^2θ_i).
Taking time derivative of W(P⃗) then leads to
Ẇ(P⃗, Ṗ⃗̇) = 2γ_0 (9π V_0^2)^{1/3} C_0^'(θ_i)∇_Γθ_i(P⃗)·Ṗ⃗̇,
where ∇_Γ is again the gradient along the substrate surface. On recalling the dissipation function (<ref>) in 3D and using the Onsager principle, we obtain the following ODE system for the particle velocity
dP⃗/ d t = - Bγ_0/V_0^{2/3} (π^5η^4(θ_i)/(9 m^3(θ_i)))^{1/3}C_0^'(θ_i) ∇_Γθ_i(P⃗),
P⃗(0) = P⃗_0,
which implies that the particle moves in the direction opposite to the gradient of the parameter θ_i, i.e., toward more wettable regions of the substrate.
§.§ Discussions on anisotropic surface energies
We next consider the case when the surface energy of the thin film is anisotropic and modeled by a convex anisotropy function γ(n⃗). Again, we assume that the size of the particle is much smaller than the curvature radius of the substrate surface so that there exists a short time scale for the particle to form an anisotropic quasi-static shape and a long time scale for the particle to migrate along the substrate.
On the short time scale, we assume that the particle forms a shape that is no longer circular or spherical but depends on γ(n⃗) and also the local substrate topology, i.e., the unit normal n⃗_w to the substrate surface.
Starting from a flat substrate surface with a constant n⃗_w, we note that the zeroth-order particle shape
𝒮_γ can be constructed via the Winterbottom construction <cit.> or the well-known Cahn-Hoffman vector formulation <cit.>.
As the substrate surface becomes gently curved with a nonzero ∇_Γn⃗_w,
the particle shape and the free energy W(P⃗) will be slightly modified by ∇_Γn⃗_w.
To the first order, we consider the linear dependence of W(P⃗) on ∇_Γn⃗_w by noting that
the orientation of 𝒮_γ relative to the principal axes of ∇_Γn⃗_w is involved.
This is totally different from the isotropic case, where the particle shape possesses a rotational symmetry so that the linear dependence of W(P⃗) on ∇_Γn⃗_w is
through Tr[∇_Γn⃗_w], i.e., the mean curvature of the substrate surface, see (<ref>).
To compute the dissipation function Φ(Ṗ⃗̇), we can employ a technique similar to the isotropic case in the framework of constrained minimum dissipation. This leads to the minimization problem (<ref>),
where the anisotropic effects are manifested in the integration over the anisotropic interface 𝒮_γ
and the constraint which involves the unit normal n⃗ of 𝒮_γ.
In summary, our variational reduced-order modeling approach can also be employed to deal with solid thin films with anisotropic surface energies on topologically patterned substrates, although
the particle migration will exhibit much more complicated trajectories.
§ CONCLUSIONS
We studied the dynamics of a small particle on curved substrates driven by surface/interface energies and controlled by surface diffusion with a moving contact line.
We observed two distinct time scales for the dynamical behaviour of the particle.
Small particles evolve towards their equilibrium shape (a circular or spherical section in 2D or
3D, respectively) on a fast time scale.
On a slow time scale, the small particle moves along the substrate to adapt itself to the substrate topography.
We derived a reduced-order model to describe this particle migration using the Onsager variational principle.
The particle moves in the direction in which the curvature of the substrate decays most quickly.
The particle velocity is proportional to a material constant B=D_sνΩ_0^2/k_b T, the surface energy density γ_0, and inversely proportional to the size of the particle.
The reduced model was confirmed by comparison with results obtained by numerically solving the full dynamical model.
The main overall conclusion is that the trajectory of a small particle on a substrate is determined solely by the substrate topography, while only the rate of motion is controlled by the material properties.
This may provide some insight into templated dewetting on patterned substrates.
§ ACKNOWLEDGEMENTS
This work was partially supported by the National Natural Science Foundation of China Nos. 12271414 and 11871384 (W.J.) and
No. 12371395 (Y.W.), the NSF Division of Materials Research through Award 1507013 (D.J.S.),
Hong Kong RGC grants CRF No. C1006-20WF and GRF No. 16306121 (T.Q.),
and the Ministry of Education of Singapore under its AcRF Tier 2
funding MOE-T2EP20122-0002 (A-8000962-00-00) (W.B.).
§ COMPUTATIONAL METHOD FOR THE FULL MODEL (<REF>)
The computation of the full model (<ref>) is based on the following formulation <cit.>:
∂_t̂X̂⃗̂·n⃗ = Δ̂_sĤ,
Ĥn⃗ = -Δ̂_sX̂⃗̂.
A weak formulation is introduced for (<ref>), where the contact angle condition (<ref>) and the zero-flux condition (<ref>) are implemented via the variational formulation (see <cit.> for details).
To ensure satisfaction of the attachment condition (<ref>), the velocity of the contact line is forced to be tangential to the substrate surface.
We employ a piecewise linear element method in space and a backward Euler discretization in time to obtain a parametric approximation, where the interface surface is approximated as a polyhedron.
This discretization implies that contact points may not exactly lie on the curved substrate at the following time step.
Therefore, an orthogonal projection of these points onto the substrate surface is required; this may incur some additional numerical errors beyond that associated with the discretization of the geometric equation.
§ THE DISSIPATION FUNCTION
We introduce a dimensionless flux on a spherical section of the surface
j̃⃗̃ = j̃_θ(θ,ϕ) e⃗_θ + j̃_ϕ(θ,ϕ) e⃗_ϕ,
where e⃗_θ and e⃗_ϕ represent the unit vector in the axial and azimuthal directions, respectively.
To calculate the dissipation function in 3D, we consider the constrained minimization problem (<ref>) which is rewritten in the spherical coordinate and in terms of j̃⃗̃ as
min_j̃⃗̃ 1/2 k_b T r^4 |Ṗ⃗̇|^2/(D_s ν Ω_0^2)∫_0^2π∫_0^θ_i(j̃_θ^2 + j̃_ϕ^2)sinθ dθ dϕ,
∂(j̃_θsinθ)/∂θ +∂j̃_ϕ/∂ϕ = -(Ṗ⃗̇/|Ṗ⃗̇|·n⃗^ϕ)sin^2θ,
j̃_θ(θ_i,ϕ) = 0,
for θ∈[0,θ_i] and ϕ∈[0,2π], where n⃗^ϕ = (cosϕ, sinϕ, 0)^T.
We then discretize the problem in (θ,ϕ)∈[0,θ_i]×[0,2π].
We introduce the following notations:
h_θ = θ_i/N, θ_j = j h_θ for 0≤ j≤ N,
h_ϕ = 2π/M, ϕ_k = k h_ϕ for 0≤ k≤ M,
and J^θ_jk≈j̃_θ(θ_j,ϕ_k), J^ϕ_jk≈j̃_ϕ(θ_j,ϕ_k), and rewrite (<ref>) in discrete form.
In particular, the objective function, up to a multiplicative constant, is approximated as
m(θ_i) =1/2∫_0^2π∫_0^θ_i(j̃_θ^2+j̃_ϕ^2) sinθ dθ dϕ
≈1/2∑_j=1^N∑_k=1^M(A_jk + B_jk)2πθ_i/NM,
where
A_jk=1/4∑_l_1=0,1∑_l_2=0,1(J^θ_(j-l_1)(k-l_2))^2sinθ_j-l_1,
and B_jk is defined analogously with J^ϕ in place of J^θ.
We introduce the finite difference discretization operators as follows:
δ_θ J_jk =(J_j+1,k-J_j-1,k)/(2h_θ), δ_ϕ J_jk=(J_j,k+1-J_j,k-1)/(2h_ϕ),
δ_θ^+ J_jk =(J_j+1,k-J_j,k)/h_θ, δ_θ^- J_jk=(J_j,k-J_j-1,k)/h_θ,
and observe that J^θ_0,k and J^ϕ_0,k make no contribution to the objective function (because sin 0=0).
The constraints are discretized naturally as
* for j=1, 1≤ k≤ M,
δ_θ^+(J^θsinθ)_1k+δ_ϕ J^ϕ_1k=-[Ṗ⃗̇/|Ṗ⃗̇|·n⃗^ϕ_k]sin^2θ_1,
* for 1< j < N and 1≤ k≤ M,
δ_θ(J^θsinθ)_jk+δ_ϕ J^ϕ_jk=-[Ṗ⃗̇/|Ṗ⃗̇|·n⃗^ϕ_k]sin^2θ_j,
* for j=N, 1≤ k≤ M,
J^θ_N,k=0.
This is the discrete minimization problem for the objective function (<ref>) with the constraints (<ref>); it is a quadratic minimization problem with linear constraints on {J^θ_jk} and {J^ϕ_jk} for j=1,…, N and k=1,…, M.
Thus it can be solved directly as the linear system corresponding to the Karush-Kuhn-Tucker (KKT) conditions.
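As a minimal alternative sketch (not the 2D KKT solver described above), one can exploit the azimuthal structure of the constraint, which excites only the cosϕ/sinϕ mode: with the ansatz j̃_θ = f(θ)cosϕ, j̃_ϕ = g(θ)sinϕ, the constraint reduces to (f sinθ)' + g = -sin^2θ with f(θ_i)=0, and m(θ_i) = (π/2)∫_0^θ_i(f^2+g^2)sinθ dθ. The Python code below (all names ours) eliminates g and solves the resulting unconstrained quadratic problem directly:

```python
import numpy as np

def m_coeff(theta_i, N=400):
    th = np.linspace(0.0, theta_i, N + 1)
    h = theta_i / N
    s = np.sin(th)
    w = np.full(N + 1, h); w[0] = w[-1] = h / 2    # trapezoidal weights
    # g = -sin^2(th) - d(f*sin(th))/dth, assembled as g = L f + c
    L = np.zeros((N + 1, N + 1))
    for j in range(1, N):                          # centered differences in the interior
        L[j, j + 1] = -s[j + 1] / (2 * h)
        L[j, j - 1] = +s[j - 1] / (2 * h)
    L[0, 1] = -s[1] / h                            # one-sided at th = 0 (s[0] = 0)
    L[N, N] = -s[N] / h
    L[N, N - 1] = +s[N - 1] / h                    # one-sided at th = theta_i
    c = -s**2
    # stationarity of pi * sum_j w_j s_j (f_j^2 + g_j^2):  (W + L^T W L) f = -L^T W c
    W = np.diag(w * s)
    A = W + L.T @ W @ L
    b = -L.T @ W @ c
    A[N, :] = 0.0; A[N, N] = 1.0; b[N] = 0.0       # boundary condition f(theta_i) = 0
    A[0, :] = 0.0; A[0, 0] = 1.0; b[0] = 0.0       # f[0] is decoupled (sin 0 = 0); pin it
    f = np.linalg.solve(A, b)
    g = L @ f + c
    return 0.5 * np.pi * np.sum(w * s * (f**2 + g**2))

print(m_coeff(np.pi / 2))   # m(theta_i) at theta_i = pi/2
```

The value returned can then be compared against m̂(θ_i) measured from the full model, as in Fig. <ref>.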
§ REFERENCES
Thompson12
C. V. Thompson, Solid-state dewetting of thin films, Annu. Rev. Mater. Res. 42
(2012) 399–434.
Leroy16
F. Leroy, F. Cheynis, Y. Almadori, S. Curiotto, M. Trautmann, J. Barbé,
P. Müller, How to control solid state dewetting: A short review, Surf.
Sci. Rep. 71 (2) (2016) 391–409.
Naffouti17
M. Naffouti, R. Backofen, M. Salvalaglio, T. Bottein, M. Lodari, A. Voigt,
T. David, A. Benkouider, I. Fraj, L. Favre, A. Ronda, I. Berbezier,
D. Grosso, M. Abbarchi, M. Bollani, Complex dewetting scenarios of ultrathin
silicon films for large-scale nanoarchitectures, Sci. Adv. 3 (11) (2017)
eaao1472.
Srolovitz86a
D. J. Srolovitz, S. A. Safran, Capillary instabilities in thin films: I.
energetics, J. Appl. Phys. 60 (1) (1986) 247–254.
Mullins57
W. W. Mullins, Theory of thermal grooving, J. Appl. Phys. 28 (3) (1957)
333–339.
Young1805
T. Young, An essay on the cohesion of fluids, Philos. Trans. R. Soc. London 95
(1805) 65–87.
Ye11b
J. Ye, C. V. Thompson, Templated solid-state dewetting to controllably produce
complex patterns, Adv. Mater. 23 (13) (2011) 1567–1571.
Amram12
D. Amram, L. Klinger, E. Rabkin, Anisotropic hole growth during solid-state
dewetting of single-crystal Au–Fe thin films, Acta Mater. 60 (6-7) (2012)
3047–3056.
Jiang12
W. Jiang, W. Bao, C. V. Thompson, D. J. Srolovitz, Phase field approach for
simulating solid-state dewetting problems, Acta Mater. 60 (15) (2012)
5578–5592.
Jiang16
W. Jiang, Y. Wang, Q. Zhao, D. J. Srolovitz, W. Bao, Solid-state dewetting and
island morphologies in strongly anisotropic materials, Scripta Mater. 115
(2016) 123–127.
Zucker16
R. V. Zucker, G. H. Kim, J. Ye, W. C. Carter, C. V. Thompson, The mechanism of
corner instabilities in single-crystal thin films during dewetting, J. Appl.
Phys. 119 (12) (2016) 125306.
Jiang19xi
W. Jiang, Q. Zhao, Sharp-interface approach for simulating solid-state
dewetting in two dimensions: a Cahn-Hoffman ξ-vector formulation,
Physica D 390 (2019) 69–83.
Jiang2018curved
W. Jiang, Y. Wang, D. J. Srolovitz, W. Bao, Solid-state dewetting on curved
substrates, Phys. Rev. Mater. 2 (11) (2018) 113401.
JiangZB20
W. Jiang, Q. Zhao, W. Bao, Sharp-interface model for simulating solid-state
dewetting in three dimensions, SIAM J. Appl. Math. 80 (4) (2020) 1654–1677.
Boc22stress
F. Boccardo, F. Rovaris, A. Tripathi, F. Montalenti, O. Pierre-Louis,
Stress-induced acceleration and ordering in solid-state dewetting, Phys. Rev.
Lett. 128 (2) (2022) 026101.
Garcke23diffuse
H. Garcke, P. Knopf, R. Nürnberg, Q. Zhao, A diffuse-interface approach for
solid-state dewetting with anisotropic surface energies, J. Nonlinear Sci.
33 (2) (2023) 34.
Giermann05
A. L. Giermann, C. V. Thompson, Solid-state dewetting for ordered arrays of
crystallographically oriented metal particles, Appl. Phys. Lett. 86 (12)
(2005) 121903.
Cheng06templated
J. Y. Cheng, C. A. Ross, H. I. Smith, E. L. Thomas, Templated self-assembly of
block copolymers: top-down helps bottom-up, Adv. Mater. 18 (19) (2006)
2505–2521.
Wang11
D. Wang, R. Ji, P. Schaaf, Formation of precise 2D Au particle arrays via
thermally induced dewetting on pre-patterned substrates, Beilstein J.
Nanotechnol. 2 (1) (2011) 318–326.
Wang2013solid
D. Wang, P. Schaaf, Solid-state dewetting for fabrication of metallic
nanoparticles and influences of nanostructured substrates and dealloying,
Physica Status Solidi (a) 210 (8) (2013) 1544–1551.
Lu16nanostructure
L.-X. Lu, Y.-M. Wang, B. M. Srinivasan, M. Asbahi, J. K. Yang, Y.-W. Zhang,
Nanostructure formation by controlled dewetting on patterned substrates: A
combined theoretical, modeling and experimental study, Sci. Rep. 6 (1) (2016)
32398.
Ruffino17experimental
F. Ruffino, Experimental analysis on the molten-phase dewetting characteristics
of AuPd alloy films on topographically-structured substrates, Metals 7 (9)
(2017) 327.
Ahn80
T.-M. Ahn, J. K. Tien, P. Wynblatt, Coarsening kinetics of platinum particles
on curved oxide substrates, J. Catal. 66 (2) (1980) 335–346.
Klinger12
L. Klinger, E. Rabkin, Capillary-driven motion of nanoparticles attached to
curved rigid substrates, Acta Mater. 60 (17) (2012) 6065–6075.
lv14substrate
C. Lv, C. Chen, Y.-C. Chuang, F.-G. Tseng, Y. Yin, F. Grey, Q. Zheng, Substrate
curvature gradient drives rapid droplet motion, Phys. Rev. Lett. 113 (2)
(2014) 026101.
Galatola18s
P. Galatola, Spontaneous capillary propulsion of liquid droplets on substrates
with nonuniform curvature, Phys. Rev. Fluids 3 (10) (2018) 103601.
Mccarthy19
J. McCarthy, D. Vella, A. A. Castrejón-Pita, Dynamics of droplets on cones:
self-propulsion due to curvature gradients, Soft Matter 15 (48) (2019)
9997–10004.
Chen21self
Y. Chen, X. Xu, Self-propulsion dynamics of small droplets on general surfaces
with curvature gradient, Phys. Fluids 33 (8) (2021) 082107.
Sykes22droplet
T. C. Sykes, B. D. Fudge, M. A. Quetzeri-Santiago, J. R. Castrejón-Pita,
A. A. Castrejón-Pita, Droplet splashing on curved substrates, J. Colloid
Interface Sci. 615 (2022) 227–235.
Onsager31a
L. Onsager, Reciprocal relations in irreversible processes. I., Phys. Rev.
37 (4) (1931) 405.
Onsager31b
L. Onsager, Reciprocal relations in irreversible processes. II., Phys. Rev.
38 (12) (1931) 2265.
Reina15entropy
C. Reina, J. Zimmer, Entropy production and the geometry of dissipative
evolution equations, Phys. Rev. E 92 (5) (2015) 052117.
Van23thermodynamic
T. Van Vu, K. Saito, Thermodynamic unification of optimal transport:
Thermodynamic uncertainty relation, minimum dissipation, and thermodynamic
speed limits, Phys. Rev. X 13 (1) (2023) 011013.
Cahn94
J. W. Cahn, J. E. Taylor, Surface motion by surface diffusion, Acta Metall.
Mater. 42 (4) (1994) 1045–1063.
Karim22
A. Mohammad Karim, A review of physics of moving contact line dynamics models
and its applications in interfacial science, J. Appl. Phys. 132 (8) (2022)
080701.
Qian06
T. Qian, X.-P. Wang, P. Sheng, A variational approach to moving contact line
hydrodynamics, J. Fluids Mech. 564 (2006) 333–360.
Qian17
X. Xu, T. Qian, Hydrodynamic boundary conditions derived from Onsager's
variational principle, Procedia IUTAM 20 (2017) 144–151.
Xu16
X. Xu, Y. Di, M. Doi, Variational method for liquids moving on a substrate,
Phys. Fluids 28 (8) (2016) 087101.
Di18
Y. Di, X. Xu, J. Zhou, M. Doi, Thin film dynamics in coating problems using
Onsager principle, Chin. Phys. B 27 (2) (2018) 024501.
Man16
X. Man, M. Doi, Ring to mountain transition in deposition pattern of drying
droplets, Phys. Rev. Lett. 116 (6) (2016) 066101.
Zhang22effective
Z. Zhang, X. Xu, Effective boundary conditions for dynamic contact angle
hysteresis on chemically inhomogeneous surfaces, J. Fluid Mech. 935 (2022)
A34.
Doi11
M. Doi, Onsager's variational principle in soft matter, J. Phys. Condens.
Matter 23 (28) (2011) 284118.
Doi13book
M. Doi, Soft matter physics, Oxford University Press, 2013.
Doi15
M. Doi, Onsager principle as a tool for approximation, Chin. Phys. B
24 (020505) (2015) 1674–1056.
JZOnsager
W. Jiang, Q. Zhao, T. Qian, D. J. Srolovitz, W. Bao, Application of onsager's
variational principle to the dynamics of a solid toroidal island on a
substrate, Acta Mater. 163 (2019) 154–160.
Suo97
Z. Suo, Motions of microscopic surfaces, Adv. Appl. Mech. 33 (1997) 193–294.
Barrett20
J. W. Barrett, H. Garcke, R. Nürnberg, Parametric finite element
approximations of curvature driven interface evolutions, Handb. Numer. Anal.
(Andrea Bonito and Ricardo H. Nochetto, eds.) 21 (2020) 275–423.
BGNZ23
W. Bao, H. Garcke, R. Nürnberg, Q. Zhao, A structure-preserving finite
element approximation of surface diffusion for curve networks and surface
clusters, Numer. Methods Partial Diff. Equ. 39 (1) (2023) 759–794.
Malinowski20advances
R. Malinowski, I. P. Parkin, G. Volpe, Advances towards programmable droplet
transport on solid surfaces and its applications, Chem. Soc. Rev. 49 (22)
(2020) 7879–7892.
Winterbottom67
W. Winterbottom, Equilibrium shape of a small particle in contact with a
foreign substrate, Acta Metall. 15 (2) (1967) 303–310.
Hoffman72
D. W. Hoffman, J. W. Cahn, A vector thermodynamics for anisotropic surfaces: I.
fundamentals and application to plane surface junctions, Surface Science 31
(1972) 368–388.
Data-free Distillation with Degradation-prompt Diffusion for Multi-weather Image Restoration
Pei Wang Xiaotong Luo Yuan Xie Yanyun Qu
=====================================================
§ ABSTRACT
Multi-weather image restoration has witnessed incredible
progress, while the increasing model capacity and expensive data acquisition impair its applications in memory-limited devices.
Data-free distillation provides an alternative for allowing to learn a lightweight student model from a pre-trained teacher model without relying on the original training data.
The existing data-free learning methods mainly optimize the models with the pseudo data generated by GANs or the real data collected from the Internet.
However, they inevitably suffer from unstable training or domain shifts from the original data.
In this paper, we propose a novel Data-free Distillation with Degradation-prompt Diffusion framework for multi-weather Image Restoration (D4IR).
It replaces GANs with pre-trained diffusion models to avoid model collapse and incorporates a degradation-aware prompt adapter to facilitate content-driven conditional diffusion for generating domain-related images.
Specifically, a contrast-based degradation prompt adapter is firstly designed to capture degradation-aware prompts from web-collected degraded images.
Then, the collected unpaired clean images are perturbed to latent features of stable diffusion, and conditioned with the degradation-aware prompts to synthesize new domain-related degraded images for knowledge distillation.
Experiments illustrate that our proposal achieves comparable performance to the model distilled with original training data, and is even superior to other mainstream unsupervised methods.
§ INTRODUCTION
Multi-weather image restoration (MWIR) aims to recover a high-quality image from a degraded input (e.g., haze, rain), which can be used in autonomous driving, security monitoring, etc.
Nowadays, MWIR <cit.> has made significant progress relying on
the rapid development of computing hardware and the availability of massive data.
In actual scenarios, the increasing model complexity may impair its application on resource-constrained mobile vehicular devices.
As a widely used technique, Knowledge Distillation (KD) <cit.> is often adopted for model compression.
However, the original training data may be unavailable for various reasons, e.g., transmission constraints or privacy protection.
Meanwhile, due to the variability of weather conditions, access to large-scale and high-quality datasets containing all weather conditions can be both difficult and expensive.
Therefore, it is necessary to develop data-free learning methods to compress existing IR models so that they adapt to different edge devices and become more robust to various adverse weather conditions.
Data-free knowledge distillation <cit.> paves such a way to obtain lightweight models without relying on the original training data.
Its core concern is how to acquire data similar to the training data.
The existing methods mainly achieve knowledge transfer by generating pseudo-data based on generative adversarial networks (GANs) <cit.> or collecting trustworthy data from the Internet <cit.>.
However, these methods mainly focus on high-level tasks, lacking sufficient exploration in low-level image restoration for pixel-wise dense prediction.
Recently, a few studies <cit.> have explored data-free learning for image restoration.
However, there are still two underlying limitations.
Firstly, they all adopt the GAN-based framework, which often faces unstable training and complex regularization hyperparameter tuning.
Secondly, they use pure noise as input to generate pseudo-data that generally lack clear semantic and texture information, which is crucial for low-level vision tasks.
Although collecting data from the Internet can avoid this problem, it inevitably faces a domain shift from the original data, which is difficult to resolve for MWIR, unlike in image classification where simple perturbations based on class data statistics <cit.> suffice.
In order to mitigate the above issues, we advocate replacing GANs with a pre-trained conditional diffusion model and equipping it with degradation-aware prompts to generate domain-related images from content-related features.
On the one hand, the diffusion models can avoid mode collapse or training instability of GANs and are superior in covering the modes of distribution <cit.>.
On the other hand, by training on large-scale datasets, many conditional diffusion models (e.g., Stable Diffusion (SD) <cit.> ) demonstrate exceptional ability in creating images that closely resemble the content described in the prompts.
Especially, some methods <cit.> resort to the powerful prior of these pre-trained models and introduce trainable adapters to align the internal learned knowledge with external control signals for task-specific image generation.
In this paper, we propose a novel Data-free Distillation with Degradation-prompt Diffusion for multi-weather Image Restoration (D4IR).
As shown in Fig. <ref>, unlike previous GAN-based data-free learning methods <cit.> for MWIR,
our D4IR separately extracts degradation-aware and content-related feature representations from the unpaired web-collected images with conditional diffusion to better approach the source distribution.
It aims to shrink the domain shift between the web-collected data and the original training data.
Specifically, our D4IR includes three main components: degradation-aware prompt adapter (DPA), content-driven conditional diffusion (CCD), and pixel-wise knowledge distillation (PKD).
DPA and CCD are jointly utilized to generate degraded images close to the source data.
For DPA, a lightweight adapter is employed to extract degradation-aware prompts from web-collected low-quality images, which employs contrastive learning to effectively learn diverse degradation representations across different images.
For CCD, the encoded features of web-collected clean images are perturbed to latent samples by forward diffusion, and then conditioned with the degradation-aware prompts for synthesizing data near the source distribution under the degradation reversal of the teacher model.
With the newly generated images, the student network could be optimized to mimic the output of the teacher network through PKD.
Experiments illustrate that our proposal achieves comparable performance to distill with the original training data, and is even superior to other mainstream unsupervised methods.
In summary, the main contributions are four-fold:
* We propose a novel data-free distillation method for MWIR, which aims to break the restrictions on expensive model complexity and data availability.
* We design a contrast-based adapter to encode degradation-aware prompts from various degraded images, and then embed them into stable diffusion.
* We utilize the diffusion model to capture the latent content-aware representation from clean images, which combines the degradation-aware prompts to generate data that is more consistent with the source domain.
* Extensive experiments demonstrate that our method can achieve comparable performance to the results distilled with the original data and other unsupervised methods.
§ RELATED WORKS
§.§ Multi-weather Image Restoration
MWIR can be divided into single-task specific models for deraining <cit.>, dehazing <cit.>, desnowing <cit.>, and multi-task all-in-one IR models <cit.>.
Based on the physical and mathematical models, many MWIR methods <cit.> attempt to decouple degradation and content information from the training data.
For example, DA-CLIP <cit.> adapts the controller and fixed CLIP image encoder to predict high-quality feature embeddings for content and degradation information.
Recently, transformer-based models <cit.> have been introduced into low-level tasks to model long-range dependencies, significantly improving performance.
Restormer <cit.> designs a efficient multi-head attention and feed-forward network to capture global pixel interactions.
Though these methods have made powerful performance, the substantial storage space and computational resources make them challenging to deploy on resource-constrained edge devices.
Moreover, due to the difficulty in obtaining large-scale paired degraded-clean images, many methods use unpaired data to achieve unsupervised IR based on techniques like GANs <cit.>, contrastive learning <cit.>, etc.
Unlike these methods, our proposal combines disentanglement learning and stable diffusion to generate data closer to the source domain for KD.
§.§ Data-free Knowledge Distillation
Existing data-free distillation methods can be roughly classified into three types.
Firstly, the methods <cit.> reconstruct training samples in the distillation process with the “metadata" preserved during training. However, they are less feasible when only the pre-trained teacher model is accessible due to the necessity of “metadata".
Secondly, the methods <cit.> optimize GANs to generate data similar to the distribution of original training data by a series of task-specific losses.
DAFL <cit.> distills the student network by customizing one-hot loss, information entropy loss, and activation loss based on classification features.
DFSR <cit.> introduces data-free distillation to image SR and designs the reconstruction loss with bicubic downsampling to achieve performance comparable to the student network trained with the original data.
DFMC <cit.> adopts a contrastive regularization constraint to further improve model representation based on DFSR for MWIR.
The last type of methods <cit.> optimizes with web-collected data and tries to address the distribution shift between the collected data and the original training data. KD3 <cit.> selects trustworthy instances based on classification predictions and learns a distribution-invariant representation.
§.§ Conditional Diffusion Models
To achieve flexible and controllable generation, conditional diffusion methods combine the auxiliary information (e.g., text <cit.>, image <cit.>, etc.) to generate specific images.
In particular, Stable Diffusion (SD) <cit.> successfully integrates the text CLIP <cit.> into latent diffusion.
Given the efficiency of foundation models such as SD, most recent methods <cit.> resort to their powerful prior and introduce trainable prompts to encode different types of conditions as guidance information.
For example, T2I-Adapter <cit.> enables rich controllability in the color and structure of the generated results by training lightweight adapters to align the internal knowledge with external control signals according to different conditions.
Diff-Plugin <cit.> designs a lightweight task plugin with dual branches for a variety of low-level tasks, guiding the diffusion process for preserving image content while providing task-specific priors.
§ PROPOSED METHOD
§.§ Preliminary
Notation and Formulation.
Formally, given the pre-trained teacher network N_T(·), knowledge distillation (KD) aims to learn a lightweight student network N_S(·) by minimizing the model discrepancy dis(N_T, N_S). With the original training data D = {(x_i, y_i)}_i=1^|D| (“|·|" is the data cardinality, x_i and y_i are the degraded image and clean image), traditional KD is usually achieved by minimizing the following loss:
L_kd(N_S) = 1/|D|∑_i=1^|D|[∥ N_T(x_i) - N_S(x_i)∥_2]
Problem Definition.
In practice, the original training data D may be inaccessible due to transmission or privacy limitations, which hinders efficient model training.
That means only the pre-trained teacher model is available.
Therefore, our D4IR aims to address two significant issues for data-free KD:
(1) how to capture the data for model optimization; (2) how to achieve effective knowledge transfer.
Technically, data-free KD methods simulate D with generated pseudo-data or web-collected data.
To efficiently synthesize images domain-related to the original degraded data for MWIR, we first analyze the mathematical and physical models <cit.> used in traditional IR methods.
The general formulation of the degraded image Y is assumed to be obtained by convolving a clean image X with a fuzzy kernel B and further adding noise n as follows:
Y = X * B + n
where * denotes the convolution operation. Inspired by disentangled learning <cit.>, we consider decoupling low-quality images into degradation-aware (B, n) and content-related (X) information, using web-collected degraded images D̅_X = {x̅_i }_i=1^|D̅_X| and unpaired clean images D̅_Y ={y̅_i}_i=1^|D̅_Y|, to help the pre-trained SD model generate source-domain-related degraded images.
§.§ Method Overview
As illustrated in Fig. <ref>, our method consists of three main components: degradation-aware prompt adapter (DPA), content-driven conditional diffusion (CCD), and pixel-wise knowledge distillation (PKD). These parts work collaboratively to generate data close to the source domain, achieving data-free distillation for MWIR.
First, DPA includes a lightweight learnable encoder Enc_DP, which is used to extract degradation-aware prompts Enc_DP(x̅) from the collected degraded images x̅.
To learn task-specific and image-specific degradation representations across various images, Enc_DP is trained with contrastive learning <cit.>, i.e., the features of patches from the same image (q, k^+) are pulled closer to each other and pushed away from those of other images (k_i^-).
Then, CCD performs the diffusion process from the perturbed latent features z_T^' of the collected clean images y̅, which is designed to relieve the style shift between the original data and the images generated by frozen stable diffusion <cit.> starting from random noise.
Moreover, z_T^' is conditioned with the degradation-aware prompts Enc_DP(x̅) for synthesizing new domain-related images x̂.
Finally, PKD is conducted with the generated images x̂. Without loss of generality, the student network is optimized with a pixel-wise loss L_kd between its output N_S(x̂) and that of the teacher network, N_T(x̂).
Note that L_kd is utilized to simultaneously optimize N_S(·) and Enc_DP.
This encourages the adapter to select, from the large-scale collected images, degradation types that are domain-related to the original data and thus contribute to KD.
§.§ Degradation-aware Prompt Adapter
As previously discussed, the degradation-aware prompt adapter (DPA) aims to extract the degradation representations that help the student network learn from the teacher network with web-collected low-quality images.
To achieve this, the adapter needs to satisfy the following conditions.
First, DPA is expected to learn diverse degradation representations effectively across different images while, for each input image, focusing on the task-specific and image-specific degradation information that distinguishes it from other images.
Therefore, we adopt contrastive learning <cit.> to optimize DPA to pull in the same degradation features and push away irrelevant features.
Specifically, we randomly crop two patches x̅_q and x̅_k^+ from the collected degraded image x̅, which are considered to contain the same degradation information.
Then, they are passed to a lightweight encoder Enc_DP with three residual blocks and a multi-layer perceptron layer to obtain the corresponding features q=Enc_DP(x̅_q) and k^+=Enc_DP(x̅_k^+). We treat q and k^+ as query and positive samples.
On the contrary, the features k_i^-=Enc_DP(x̅_k_i^-) of the patches x̅_k_i^- cropped from other images are viewed as negative samples. All negative sample features are stored in a dynamically updated queue of feature vectors from adjacent training batches following MoCo <cit.>. Thus, the contrastive loss L_cl can be expressed as:
L_cl(Enc_DP) = -logexp( q · k^+ / τ)/∑_i=1^K exp( q · k_i^- / τ)
where τ is a temperature hyper-parameter set as 0.07 <cit.> and K denotes the number of negative samples.
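For concreteness, a minimal PyTorch-style sketch of this loss in the common MoCo form (where the positive logit also appears in the denominator of the softmax); q, k_pos, and queue are assumed to be L2-normalized features produced by Enc_DP, and all tensor names are illustrative.

import torch
import torch.nn.functional as F

def contrastive_loss(q, k_pos, queue, tau=0.07):
    # q, k_pos: (B, C) features of two patches cropped from the same degraded image
    # queue:    (K, C) features of patches from other images (negatives)
    l_pos = (q * k_pos).sum(dim=1, keepdim=True)     # (B, 1) similarities q . k+
    l_neg = q @ queue.t()                            # (B, K) similarities q . k-
    logits = torch.cat([l_pos, l_neg], dim=1) / tau  # (B, 1 + K)
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)           # -log softmax at the positive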
Second, DPA needs to extract domain-related prompts to guide the diffusion model in synthesizing images that facilitate knowledge transfer. If we only use Eq. (<ref>) to optimize Enc_DP, the resulting prompts may overlook the degradation differences between the web-collected data and the original training data.
This implies that DPA might only capture degradation features across different input images, leading to a distribution shift from the original data.
To address this, we employ the distillation loss L_kd between the outputs of the student model and teacher model to simultaneously optimize the degradation prompt encoder and the student model.
Replacing the text prompt encoder in the pre-trained SD model, we employ the DPA to align the internal knowledge prior with the externally encoded degradation-aware prompts via the cross-attention module <cit.>, steering generation toward images with the specified degradation:
Attention(Q,K,V) = softmax(QK^T/√(d))· V
Q, K, and V projections are calculated as follows:
Q = W_Q^(i)·φ_i(z_t), K = W_K^(i)· Enc_DP(x̅),
V = W_V^(i) · Enc_DP(x̅)
where φ_i(z_t) denotes the intermediate representation of the UNet in SD. W_Q^(i), W_K^(i), and W_V^(i) are projection matrices frozen in SD. d is the scaling factor <cit.>.
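A sketch of how the degradation-aware prompts could stand in for text embeddings in one such cross-attention layer; W_q, W_k, and W_v mirror the frozen SD projection matrices (applied here in row-vector convention), and all shapes are illustrative.

import torch

def prompt_cross_attention(phi_z, prompt, W_q, W_k, W_v):
    # phi_z:  (B, N, C) intermediate UNet representation phi_i(z_t)
    # prompt: (B, L, C) degradation-aware prompt tokens Enc_DP(x)
    Q = phi_z @ W_q                                       # (B, N, d)
    K = prompt @ W_k                                      # (B, L, d)
    V = prompt @ W_v                                      # (B, L, d)
    d = Q.size(-1)
    attn = torch.softmax(Q @ K.transpose(1, 2) / d ** 0.5, dim=-1)  # (B, N, L)
    return attn @ V                                       # (B, N, d)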
§.§ Content-driven Conditional Diffusion
With the degradation prompts alone, the diffusion model still cannot generate domain-related images. This is because, without the image content being specified, the generated images inevitably exhibit content and style differences from real images.
Therefore, it is necessary to address the content shift from the original degraded data while preserving the realism of the collected images.
Inspired by SDEdit <cit.>, we choose the noised latent features z_T^' encoded from the collected clean image y̅ instead of the random noise to synthesize domain-related images with realism. Specifically, we first encode the web-collected clean images y̅ into latent representations z_0 by the encoder Enc_SD frozen in SD via z_0 = Enc_SD(y̅).
Then, we replace the initial random Gaussian noise with the T^'-step noised features z_T^' of the latent features z_0 as the input to the diffusion model:
z_t = √(α̅_t)z_0 + √(1-α̅_t)ϵ_t, t=T^'
where α̅_t is the pre-defined schedule variable <cit.>, ϵ_t ∼ N(0,1) is the random noise, T^' = λ * T, T is the total number of sampling steps in the diffusion model, and λ∈ [0,1] is a hyper-parameter indicating the degree of injected noise.
With the learned conditional denoising autoencoder ϵ_θ, the pre-trained SD can gradually denoise z_T^' to z_0 conditioned with the degradation-aware prompts Enc_DP(x̅) via
z_t-1 = √(α̅_t-1)(z_t-√(1-α̅_t)ϵ_θ(z_t, t , Enc_DP(x̅))/√(α̅_t))
+ √(1-α̅_t-1)·ϵ_θ(z_t, t , Enc_DP(x̅))
Finally, the decoder Dec_SD reconstructs the image x̂ from the denoised latent feature z_0 as x̂ = Dec_SD(z_0).
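Putting the pieces together, a condensed sketch of this content-driven generation loop, assuming a frozen latent-diffusion model exposing an encoder, a decoder, a noise-prediction UNet, and its schedule α̅; the sd handle and its attributes are illustrative rather than an actual diffusers API, and the denoising step follows the deterministic DDIM update above.

import torch

@torch.no_grad()
def generate_degraded(y_clean, x_degraded, sd, enc_dp, timesteps, lam=0.5):
    z0 = sd.encode(y_clean)                     # z_0 = Enc_SD(y)
    t_prime = int(lam * len(timesteps))         # T' = lambda * T
    a_t = sd.alpha_bar[timesteps[t_prime - 1]]
    z = a_t.sqrt() * z0 + (1 - a_t).sqrt() * torch.randn_like(z0)  # noised z_T'
    prompt = enc_dp(x_degraded)                 # degradation-aware prompts
    for i in reversed(range(t_prime)):          # deterministic DDIM denoising
        t = timesteps[i]
        a_t = sd.alpha_bar[t]
        a_prev = sd.alpha_bar[timesteps[i - 1]] if i > 0 else torch.tensor(1.0)
        eps = sd.unet(z, t, prompt)             # eps_theta(z_t, t, Enc_DP(x))
        z0_hat = (z - (1 - a_t).sqrt() * eps) / a_t.sqrt()
        z = a_prev.sqrt() * z0_hat + (1 - a_prev).sqrt() * eps
    return sd.decode(z)                         # x_hat = Dec_SD(z_0)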
As the noised input z_T^' to the diffusion model retains certain features of the real image y̅, the generated image x̂ closely aligns in style with the real image. More importantly, by starting from the partially noised features of the collected clean images, the pre-trained SD model can generate images D̂ = {(x̂_i)}_i=1^|D̂| that reflect the content and degradation characteristics of the original training data, when conditioned with degradation-aware prompts Enc_DP(x̅).
§.§ Pixel-wise Knowledge Distillation
Considering that image restoration focuses on pixel-level detail in an image, we calculate the distillation loss L_kd by the pixel-wise distance between the outputs of the student network and the teacher network as:
L_kd(N_S, Enc_DP) = 1/|D̂|∑_i=1^|D̂|[∥ N_T(x̂_i) - N_S(x̂_i)∥_2]
where x̂_i denotes the synthesized images. For better generalization, we provide only a simple way to conduct distillation; other KD losses are also encouraged.
Note that the distillation loss is used to optimize both the student network and the degradation prompt adapter. Therefore, the whole objective function is formulated as:
L(N_S, Enc_DP) = L_kd(N_S, Enc_DP) + γ· L_cl(Enc_DP)
where γ is a regularization coefficient to balance the distillation loss and the contrastive loss.
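A schematic training step combining the two terms, reusing the contrastive_loss sketch above; the mean-squared term stands in for the pixel-wise ℓ2 loss, the optimizer is assumed to cover both the student and adapter parameters, and queue maintenance is elided. Note that in this simplified sketch the distillation term updates the student only, since the generated images are detached from the generator; the adapter receives gradients through the contrastive term.

import torch
import torch.nn.functional as F

def train_step(x_hat, patch_q, patch_k, queue, teacher, student, enc_dp, opt, gamma=0.5):
    with torch.no_grad():
        target = teacher(x_hat)                    # frozen teacher prediction
    l_kd = (student(x_hat) - target).pow(2).mean() # pixel-wise distillation term
    q = F.normalize(enc_dp(patch_q), dim=1)        # query features
    k_pos = F.normalize(enc_dp(patch_k), dim=1)    # positive features
    l_cl = contrastive_loss(q, k_pos, queue)       # contrastive term (sketch above)
    loss = l_kd + gamma * l_cl
    opt.zero_grad()
    loss.backward()
    opt.step()
    return float(loss)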
§ EXPERIMENTS
§.§ Experimental Settings
Datasets. Following the previous work in high-level tasks <cit.>, we introduce the web-collected data to synthesize data near the original distribution. Specifically, our datasets are as follows:
1) Original Training Datasets: Here, we mainly consider the common weather following the representative AirNet <cit.>. The teacher networks are trained on Rain100L <cit.> for deraining, the Outdoor Training Set (OTS) <cit.> for dehazing, and Snow100K <cit.> for desnowing.
2) Web-Collected Datasets:
For image deraining, we employ the training images from the large-scale deraining dataset Rain1400 <cit.> with 12,600 rainy-clean image pairs.
For image dehazing, we adopt the training images from RESIDE <cit.> with 72,135 outdoor and 13,990 indoor hazy-clean image pairs.
For image desnowing, we set the training images from the Comprehensive Snow Dataset (CSD) <cit.> with 8,000 snowy-clean image pairs.
Note that the paired images are randomly shuffled during training to reach an unpaired configuration.
3) Test Datasets:
Following the common test setting for different weather image restoration, we adopt Rain100L <cit.>, Synthetic Objective Testing Set (SOTS) <cit.>, and the test datasets of Snow100K for image deraining, dehazing and desnowing, respectively.
Implementation Details.
We employ the pre-trained AirNet as the teacher network and then halve the number of feature channels to obtain the student network.
The initial learning rates of the student network N_S(·) and the degradation prompt encoder Enc_DP are set as 1×10^-3 and 1×10^-5, respectively, which are decayed by half every 15 epochs.
Adam optimizer is used to train D4IR with β_1=0.9 and β_2=0.999.
The specific sampling step of the latent diffusion <cit.> is 70.
During training, the input RGB images are randomly cropped into 256×256 patches and the batch size is set following AirNet.
To ensure training stability, we first train N_S(·) and Enc_DP together with Eq. (<ref>) for 50 epochs, and then with the distillation loss of Eq. (<ref>) for 150 epochs.
Besides, the hyperparameter λ in Eq. (<ref>) and the trade-off parameter γ in Eq. (<ref>) are set as 0.5 and 0.5, respectively (the analysis is shown in the supplementary material).
All experiments are conducted in PyTorch on NVIDIA GeForce RTX 3090 GPUs.
Evaluation Metrics. Peak signal-to-noise ratio (PSNR) <cit.> and structural similarity (SSIM) <cit.> are utilized to evaluate the performance of our method.
Besides, the parameters are used to evaluate model efficiency.
§.§ Comparisons with the State-of-the-art
To validate the effectiveness of our D4IR, we provide quantitative and qualitative comparisons for image deraining, dehazing, and desnowing.
Here, we mainly compare our D4IR with four kinds of methods:
1) directly train the student network with the original training data of the teacher network (Student).
2) distill the student network with the original degraded data without the GT supervision (Data).
3) distill the student network by DFSR <cit.> and DFMC <cit.>.
Other data-free distillation methods are designed for high-level vision tasks, which cannot be applied to IR for comparison.
4) the mainstream unsupervised methods that are trained on unpaired data.
For Image Deraining.
As shown in Tab. <ref>, it is observed that the performance of the student network obtained by our D4IR for image deraining improves by 0.91dB on PSNR and 0.023 on SSIM compared to “Data".
This benefits from the wider range of data synthesized by our D4IR, which is domain-related to the original degraded data and thus helps the student network absorb the teacher network's knowledge more comprehensively.
Besides, our D4IR also far exceeds the GAN-based DFSR and performs better than DFMC (0.44dB higher on PSNR and 0.024 higher on SSIM).
Moreover, D4IR also performs better than most mainstream unsupervised image deraining methods and achieves comparable performance with Mask-DerainGAN with only half the parameters.
The visual comparisons in Fig. <ref> show that D4IR achieves a significant rain removal effect and is better than DFMC, DFSR, and students distilled with original data for removing rain marks.
For Image Dehazing.
As shown in Tab. <ref>, our D4IR also outperforms the student distilled with the original degraded data (0.04dB higher on PSNR and 0.001 higher on SSIM) and performs much better than DFSR and DFMC, which lack specific degradation-related losses.
Besides, compared to the popular unsupervised image dehazing methods, D4IR ranks second on PSNR and SSIM with a much smaller number of parameters.
The visual result is given in Fig. <ref>. It shows that our D4IR has a significant dehazing effect and is closer to the GT than DFMC, DFSR, and “Data".
In Fig. <ref>, we present visualized samples synthesized by DFMC, the pre-trained SD model, and our D4IR for image dehazing. The results indicate that GAN-based DFMC, which initiates from pure noise, struggles to produce images with semantic information. Additionally, generating images with rich texture and color details using simple textual prompts proves challenging for SD. In contrast, our D4IR method generates images with more detailed texture and semantic information compared to both DFMC and SD.
The results for image desnowing are in the supplement.
§.§ Ablation Studies
Here, we mainly conduct the ablation experiments on the image deraining task as follows:
Break-down Ablation. We analyze the effect of the degradation-aware prompt adapter (DPA) and content-driven conditional diffusion (CCD) by setting different input z_0 (noise and CCD) and prompts (none, textual features same as SD, content features encoded from clean images, and DPA) for frozen SD model in Tab. <ref>.
It is observed that the performance of M1 is slightly better than that of M2 since the “text-to-image" generative model is powerful in generating images with original textual prompts.
Besides, comparing M2 with M3, the degradation-aware prompts cannot work well in the absence of content-related information.
Both textual degradation prompts (M5) and our proposed DPA (D4IR) effectively improve the student model's performance compared with no prompts (M4).
Our D4IR performs the best by jointly utilizing DPA and CCD to generate images close to the original degraded data. It improves PSNR by 1.65dB compared with the model relying solely on the pre-trained SD model (M1) and by 1.34dB compared with the model directly distilled with the web-collected data (M0).
Real-world Dataset.
For further general evaluation in practical use, we conducted experiments on the real-world rainy dataset SPA <cit.>.
As shown in Tab. <ref>, our D4IR also has comparable performance with the student distilled with original data in real-world scenarios (0.08dB higher on PSNR).
More comparisons with other unsupervised methods are presented in the supplementary material.
Different Backbones of Teacher Network.
We also validate D4IR with a transformer-based teacher backbone Restormer <cit.> on Rain100L. Due to resource constraints, we use Restormer with halved feature channels (from 48 to 24) as our teacher network and a quarter of feature channels (from 48 to 12) as the student network.
As shown in Tab. <ref>, the shrunk model capacity also leads to a large performance loss of the student network compared to the teacher network.
Besides, it is observed that the performance of our D4IR is slightly lower than that of the student network distilled with the original degraded data.
The reason is that the images generated by the diffusion model still differ from the real training data, while the self-attention mechanism of the transformer pays more attention to the global contextual information of the images.
§ CONCLUSION
This paper proposes a simple yet effective data-free distillation method with degradation-aware diffusion for MWIR.
To achieve this, we mainly consider three concerns, including:
1) investigate the application of the conditional diffusion model to solve the unstable training of the traditional GANs in data-free learning;
2) introduce a contrast-based prompt adapter to extract degradation-aware prompts from collected degraded images;
and 3) start diffusion generation from content-related features of collected unpaired clean images.
Extensive experiments show that our D4IR obtains reliable student networks without original data by effectively handling the distribution shifts of degradation and content.
In future work, we will continue to study more effective prompt generation to enable efficient model learning.
TC-LLaVA: Rethinking the Transfer from Image to Video Understanding with Temporal Considerations
Mingze Gao^1,2,3,† Jingyu Liu^2 Mingda Li^2 Jiangtao Xie^4 Qingbin Liu^2
Bo Zhao^2 Xi Chen^2 Hui Xiong^1,3, *
^1 The Hong Kong University of Science and Technology (Guangzhou), China
^2Tencent PCG
^3The Hong Kong University of Science and Technology, China
^4 Dalian University of Technology, China
Received 16 July 2024; accepted 04 September 2024
=================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
^† This work was done when Mingze Gao was an intern at Tencent PCG. ^* Corresponding author
Multimodal Large Language Models (MLLMs) have significantly improved performance across various image-language applications. Recently, there has been a growing interest in adapting image pre-trained MLLMs for video-related tasks. However, most efforts concentrate on enhancing the vision encoder and projector components, while the core part, Large Language Models (LLMs), remains comparatively under-explored. In this paper, we propose two strategies to enhance the model's capability in video understanding tasks by improving inter-layer attention computation in LLMs. Specifically, the first approach focuses on the enhancement of Rotary Position Embedding (RoPE) with Temporal-Aware Dual RoPE, which introduces temporal position information to strengthen the MLLM's temporal modeling capabilities while preserving the relative position relationships of both visual and text tokens. The second approach involves enhancing the Attention Mask with the Frame-wise Block Causal Attention Mask, a simple yet effective method that broadens visual token interactions within and across video frames while maintaining the causal inference mechanism. Based on these proposed methods, we adapt LLaVA for video understanding tasks, naming it Temporal-Considered LLaVA (TC-LLaVA). Our TC-LLaVA achieves new state-of-the-art performance across various video understanding benchmarks with only supervised fine-tuning (SFT) on video-related datasets.
§ INTRODUCTION
By leveraging vast open-source and AI-generated datasets <cit.>, along with the impressive development of large language models such as GPT <cit.>, LLaMA <cit.>, and GLM <cit.>, Multimodal Large Language Models (MLLMs) have demonstrated remarkable proficiency in image comprehension tasks <cit.>. Given the powerful capabilities of image-pretrained MLLMs, a recently emerging research focus is on transferring these models from single-image tasks to video understanding.
Recently, various approaches <cit.> have tended to treat a video as a series of concatenated frames in the spatial dimension, thereby transferring video-related tasks back to image-related tasks. However, these methods face two issues as they treat text and visual tokens as the same modality and feed them into the LLMs as a unified input. Firstly, utilizing LLMs' vanilla attention mechanism to uniformly process all tokens overlooks the distinct interactions between visual tokens within individual video frames and those across different frames. Secondly, it neglects the temporal information inherent in the video input, which is crucial for video understanding tasks. Consequently, the constructed video MLLM fails to effectively summarize the dynamic events occurring within videos, reducing the analysis to single frames as if they were still images. For instance, it fails to adequately capture and detail the complex motion changes of the primary subject in the video, particularly in activities such as dancing or gymnastics. This deficiency ultimately results in inaccurate or 'hallucinatory' responses by the model, as depicted in Figure <ref>.
In this paper, we propose Temporal-Considered (TC) LLaVA, a novel video-language framework designed to address the aforementioned issues. The primary innovation is to enhance the temporal awareness of MLLMs and distinguish the attention interactions between text and video modalities through two core strategies. First, we introduce Temporal-Aware Dual RoPE, which assigns each token an independent position id with the original RoPE to preserve global relative positional relationships, while incorporating temporal-aware RoPE to assign the same position id to visual tokens within the same frame and to encode inter-frame relationships to capture the temporal dynamics of videos, as shown in Figure <ref>. Additionally, we design three different attention masks to optimize token interaction strategies in attention computation, accounting for the distinct characteristics of visual and text tokens. Finally, we select the Frame-wise Block Causal Attention Mask to replace the original causal attention mask, enhancing interaction between visual tokens within and across frames while preserving the causal reasoning paradigm, making it more suitable for causal language model inference.
To verify the effectiveness of our TC-LLaVA, we evaluate the model on extensive video benchmarks, including MSVD <cit.>, MSRVTT <cit.>, ActivityNet <cit.>, TGIF <cit.>, VCGbench <cit.> and MVbench <cit.>. Comparing with the latest video MLLMs, TC-LLaVA achieves new state-of-the-art performance on these benchmarks at the same model scales, demonstrating the benefits of enhancing visual token interactions within and across frames, as well as the importance of incorporating temporal information in video analysis.
§ RELATED WORK
§.§ Attention in Vision and Language Models
The introduction and evolution of the attention mechanism have significantly enhanced model performance in natural language processing (NLP) and computer vision (CV). The earliest attention mechanism by <cit.> allowed machine translation models to assign different weights to input sentence parts, improving translation accuracy. <cit.> introduced the Transformer model, which uses a self-attention mechanism to enable parallel processing and superior long-range dependency modeling, achieving significant results in multiple NLP tasks. To further optimize the attention computation, <cit.> proposed Relative Position Encoding (RPE) to improve token interaction by introducing extra position information. Recently, Rotary Position Embedding (RoPE) <cit.> was designed to address the interaction limitation of RPE by leveraging complex number rotations. In CV, attention mechanisms have proven effective with models like Non-local Neural Networks by <cit.> and Vision Transformer (ViT) <cit.>, which first applied the Transformer architecture to image classification tasks. There have also been numerous advancements <cit.> in attention mechanisms that continually improve the performance of Transformer-based models, enhancing their ability to capture essential features and increasing computational efficiency. Our work continues to delve deeply into improving attention computation in the multimodal domain of video and text, and we propose the TC-Attention method to achieve this goal.
§.§ Video Multimodal Large Language Models
Video Multimodal Large Language Models (Video MLLMs) operate by aligning modalities and performing instruction fine-tuning on video data, enabling them to generate responses based on user instructions and input video streams. Recently, Video MLLMs have experienced rapid development. One significant milestone in this field is BLIP2 <cit.>, which integrates a frozen vision encoder with a Q-Former to enhance video processing efficiency, demonstrating remarkable zero-shot capabilities in Video Question Answering (VQA) and outperforming existing techniques. Video-ChatGPT <cit.> introduced video instruction tuning and created a high-quality instructional dataset, setting a new standard for video-based text generation benchmarks. VideoChat <cit.> employed cross-attention mechanisms to condense visual tokens and align user queries with the dialogue context, enhancing interpretative capabilities. Building on this, VideoChat2 <cit.> refined the approach with a multi-stage bootstrapping technique focused on modality alignment and instruction tuning, utilizing a robust collection of high-quality video data. Chat-UniVi <cit.> processes longer videos by introducing a method for compressing tokens in both the spatial and temporal dimensions. LLaMA-VID <cit.> introduced an innovative dual-token approach that effectively condenses video representations by segregating context and content tokens, allowing for more efficient compression. VideoLLaMA and VideoLLaMA2 <cit.> enhance video understanding by incorporating audio modality information and utilizing a Spatial-Temporal Convolution (STC) connector. ST-LLM <cit.> introduces a dynamic masking strategy into MLLMs. PLLaVA <cit.> extends the image-pretrained LLaVA to video tasks with simple spatial pooling. In this paper, we introduce TC-LLaVA, which considers the differences in visual token interactions within and across frames, and directly incorporates temporal position information into the causal attention computation to enhance the model's understanding.
§ METHOD
§.§ Preliminary: Introducing Position Embeddings
While Relative Position Encoding (RPE) <cit.> incorporates relative positional information into the attention mechanism through a position bias element-addition computation with inter-layer attention map, this approach may limit interaction with attention weights and, consequently, hinder the effective utilization of relative positions. To address this limitation, RoFormer <cit.> introduces RoPE, a novel method that more effectively incorporates relative positional information by leveraging complex number rotations.
Specifically, when computing the attention map, the RoPE (Rotary Positional Encoding) technique introduces the multiplication of Euler's formula e^iθ to the query and key vectors as a relative position embedding. For instance, when considering the n-th and m-th query and key vectors q_n and k_m in ℝ^1 × d_head, RoPE is applied as follows:
𝐪'_n = 𝐪_n e^i n θ, 𝐤'_m = 𝐤_m e^i m θ.
Then, the (n, m)-th component of the attention matrix is calculated as:
A_(n,m) = Re[𝐪'_n 𝐤'_m^*] = Re[𝐪_n 𝐤_m^* e^i (n-m) θ],
where Re[·] denotes the real part of a complex number and ^* denotes the complex conjugate. By multiplying complex rotations e^i θ n, e^i θ m depending on token position (n, m), RoPE injects relative positions (n - m) into the attention matrix in a rotational form. In practical implementation, RoPE <cit.> converts the vectors q_n and k_m from ℝ^1 × d_head to complex vectors q̅_n and k̅_m in ℂ^1 × (d_head / 2). This is achieved by treating the (2t)-th dimension as the real part and the (2t + 1)-th dimension as the imaginary part, where t ∈0, 1, …, d_head / 2. This method results in the same attention values as 𝐪_n 𝐤_m^T = Re[𝐪̅_n 𝐤̅_m^*] while reducing computational overhead. Additionally, RoPE employs multiple frequencies θ_t through the channel dimensions of the query and key vectors as follows:
θ_t = 10000^-t / (d_head / 2),
This approach allows for more effective integration of relative positional information within the attention mechanism, enhancing the model's capability to process and understand sequential data.
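A compact sketch of this complex-valued implementation, pairing consecutive channels into complex numbers and rotating them by position-dependent phases; the function and tensor names are illustrative.

import torch

def apply_rope(x, pos, base=10000.0):
    # x:   (..., d_head) real-valued queries or keys
    # pos: integer positions broadcastable against the leading dims of x
    d2 = x.shape[-1] // 2
    theta = base ** (-torch.arange(d2, dtype=torch.float32) / d2)  # theta_t
    angle = pos.unsqueeze(-1).float() * theta                      # n * theta_t
    xc = torch.view_as_complex(x.float().reshape(*x.shape[:-1], d2, 2))
    rot = torch.polar(torch.ones_like(angle), angle)               # e^{i n theta_t}
    return torch.view_as_real(xc * rot).reshape(x.shape).to(x.dtype)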
§.§ Temporal-Aware Dual RoPE
In the RoPE used by most current video-language large language models, the relative distance between the m-th text token T_m and the z-th visual token in the n-th frame F_n V_z is given in Eq. (<ref>). Each text and visual token is treated as an independent position and assigned a unique position id for embedding. However, this position embedding method fails to distinguish visual tokens within and across different video frames, thereby neglecting the crucial temporal information necessary for effective video understanding tasks. Furthermore, as visual tokens constitute a significant proportion of the total tokens in video understanding tasks, the relative distance P(T_m) - P(F_n V_z) between the generated text tokens and the visual tokens may become substantial. This increased distance can impair the model's ability to fully comprehend the visual information, leading to "hallucinated" responses <cit.>.
A_(q_T_m, k_F_n V_z) = Re[𝐪_T_m𝐤_F_n V_z e^i (P(T_m) - P(F_n V_z)) θ],
To address this limitation, we propose a Temporal-Aware Dual Rotary Positional Embedding (TAD-RoPE). It includes one RoPE that retains the global relative position relationships of the visual and textual tokens, and an additional time-aware RoPE to incorporate temporal information pertinent to the video frames. Specifically, in contrast to the original position ids, the additional RoPE ensures that visual tokens within the same video frame share the same position id. Meanwhile, the temporal order is maintained across different frames, with the position ids incrementing accordingly. The proposed temporal position id is defined as follows:
𝐈_t(n) =
n, if n < v_s,
v_s + ⌊n - v_s/m⌋, if v_s≤ n ≤ v_e,
n - (v_e - v_s + 1 - ⌊v_e - v_s/m⌋), if n > v_e.
where v_s and v_e are the starting and ending position ids of the visual tokens within the global RoPE position id n. m is the number of visual tokens per frame, and ⌊ . ⌋ denotes the floor function, which rounds down to the nearest integer. By scaling the position ids, temporal information is introduced through the adjusted position n̂, defined as:
n̂ = n + γ·𝐈_t(n),
where γ is a scaling factor of constant magnitude. This adjustment ensures that temporal information is effectively incorporated into the original position embedding. For both text and visual tokens, the query and key vectors are updated using the adjusted positions n̂ and m̂:
𝐪'_n = 𝐪_n e^i n̂θ = 𝐪_n e^i (n + γ·𝐈_t(n)) θ ,
𝐤'_m = 𝐤_m e^i m̂θ = 𝐤_m e^i (m + γ·𝐈_t(m)) θ ,
Finally, the attention matrix is calculated as follows:
A_(n̂, m̂) = Re[𝐪'_n𝐤'_m^*]
= Re[𝐪_n e^i (n + γ·𝐈_t(n)) θ𝐤_m^* e^i (m + γ·𝐈_t(m)) θ]
= Re[𝐪_n 𝐤_m^* e^i [(n-m) + γ (𝐈_t(n) - 𝐈_t(m))] θ]
This formula combines the updated query and key vectors to compute the attention map, incorporating both global positional and temporal information from video frames. By leveraging these aspects, we enhances the MLLM's ability to process and understand the input video comprehensively, resulting in more accurate and contextually appropriate responses.
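A small sketch of the temporal id assignment and the adjusted positions that feed the two rotary embeddings; v_s, v_e, and m follow the notation above, and the helper names are illustrative.

def temporal_ids(seq_len, v_s, v_e, m):
    # Piecewise temporal position ids I_t(n): leading text tokens keep their
    # own ids, visual tokens share one id per frame, trailing text resumes.
    ids = []
    for n in range(seq_len):
        if n < v_s:
            ids.append(n)
        elif n <= v_e:
            ids.append(v_s + (n - v_s) // m)
        else:
            ids.append(n - (v_e - v_s + 1 - (v_e - v_s) // m))
    return ids

def adjusted_positions(seq_len, v_s, v_e, m, gamma=1.0):
    # n_hat = n + gamma * I_t(n), as used by the temporal-aware rotary embedding
    return [n + gamma * i for n, i in zip(range(seq_len), temporal_ids(seq_len, v_s, v_e, m))]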
§.§ Frame-wise Block Causal Attention Mask
Another often overlooked key point is the design of attention masks within the transformer layers of large language models. In causal language models like the GPT <cit.> and LLaMA <cit.> series, causal attention masks are employed to ensure that, during autoregressive text generation, future token information is not leaked; that is, subsequent tokens can "see" preceding tokens, but preceding tokens cannot "see" subsequent tokens. This design is uniformly applied in such generative models to maintain the unidirectional flow of information, which is crucial for generating coherent and contextually appropriate text.
Mathematically, the causal attention mask M ∈ℝ^T × T for a sequence of length T is defined as:
M_ij =
0 if i ≥ j,
-∞ if i < j.
This ensures that each position i only attends to previous positions (including itself), thus implementing the causal attention mechanism. The final attention weights are computed as:
Attention(Q, K, V) = softmax(QK^T/√(d_k) + M)V,
where Q, K, and V are the query, key, and value matrices, d_k is the dimension of the key vectors, and M is the causal attention mask.
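As a brief illustration, the additive mask enters the computation as a bias on the pre-softmax scores; the sketch below assumes batched (B, T, d_k) tensors and an illustrative (T, T) mask, such as the causal mask above or the variants introduced below.

import torch

def masked_attention(Q, K, V, M):
    # Q, K, V: (B, T, d_k); M: (T, T) additive mask (0 or -inf entries)
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5 + M  # blocked entries -> -inf
    return torch.softmax(scores, dim=-1) @ V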
However, for multimodal information involving both visual and textual inputs, the visual modality is only used as a conditional input to the language model. During the unidirectional decoding process of the language model, this design weakens the bidirectional attention interactions obtained from the visual encoder, reducing them to unidirectional attention interactions. To explore the impact of different attention masks, we design three distinct attention masks to enhance and investigate better interactions within visual tokens and between visual and text tokens, as illustrated in Figure <ref>.
Firstly, the Full Visual Mask modifies the causal attention mask to enable more extensive interactions among visual tokens across different frames. This mask can be represented as follows:
M_ij^Full Visual =
0 if i ≥ j or i,j are visual tokens,
-∞ otherwise.
The second is the Frame-wise Block Mask, which limits attention to visual tokens within the same frame. This is defined as follows:
M_ij^Fw Block =
0 if i ≥ j and i, j within the same frame,
-∞ otherwise.
Finally, we propose the Frame-wise Block Causal Attention Mask (FwBC), which combines the characteristics of the previous causal and block visual attention masks, incorporating broader visual token interactions within each frame while maintaining the causal inference mode across video frames. This can be presented as:
M_ij^FwBC =
0 if i ≥ j or i, j within the same frame,
-∞ otherwise.
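A sketch constructing this mask as an additive attention bias; v_s and v_e delimit the visual tokens and m is the number of visual tokens per frame, following the notation above. The two earlier variants differ only in the attendance predicate inside the double loop.

import torch

def fwbc_mask(seq_len, v_s, v_e, m):
    # Frame-wise block causal mask: 0 = attend, -inf = blocked.
    def frame(i):  # frame index of a visual token, None for text tokens
        return (i - v_s) // m if v_s <= i <= v_e else None
    mask = torch.full((seq_len, seq_len), float("-inf"))
    for i in range(seq_len):
        for j in range(seq_len):
            same_frame = frame(i) is not None and frame(i) == frame(j)
            if i >= j or same_frame:  # causal, plus full intra-frame visibility
                mask[i, j] = 0.0
    return mask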
By adjusting these masks, we aim to achieve a better balance between visual and textual information integration, enabling MLLMs to distinguish and process both video and text modalities more effectively while enhancing the spatiotemporal global attention to the most critical visual information for video understanding tasks. Finally, we use ablation experiments to select the Frame-wise Block Causal Attention Mask for constructing TC-LLaVA.
§ EXPERIMENTS
§.§ Experimental Setup
Instruction Tuning Datasets. In alignment with the instruction tuning setting outlined in VideoChat2 <cit.>, which integrates data for a variety of video understanding tasks, we utilized an extensive and diverse collection of datasets. Specifically, these include 27k conversation videos from VideoChat <cit.> and Video-ChatGPT <cit.>, 80k classification task samples from Kinetics <cit.> and SthSthV2 <cit.>, 450k captioned data from Webvid <cit.>, YouCook2 <cit.>, TextVR <cit.>, and VideoChat, 117k reasoning data samples from NextQA <cit.> and CLEVRER <cit.>, and 109k annotated question answering samples from Webvid, TGIF <cit.>, and Ego4D <cit.>. In total, we employed 783k video instruction data samples for conducting supervised fine-tuning (SFT) of our TC-LLaVA.
Evaluation Benchmarks. The performance of our trained TC-LLaVA model is assessed using a series of video understanding benchmarks, specifically targeting open-ended Video Question Answer (VideoQA) tasks. These benchmarks include MSVD-QA <cit.>, MSRVTT-QA <cit.>, Activity-QA <cit.>, and TGIF-QA <cit.>, where responses generally consist of single-word answers. The accuracy (with true/false answers) and quality (scored from 0 to 5) of the models' responses are evaluated using GPT-3.5 <cit.>. Moreover, we employ the Video-based Generative Performance benchmark (VCG Score), as introduced by VideoChatGPT <cit.>. This benchmark requires longer answers and evaluates five key aspects of video understanding: Correctness of Information (CI), Detail Orientation (DO), Context Understanding (CU), Temporal Understanding (TU), and Consistency (CO). The generative performance is also assessed using the GPT-3.5 model. In addition, we evaluate TC-LLaVA on the multi-choice Question Answering benchmark, MVBench <cit.>, which consists of 20 tasks that demand nuanced temporal comprehension of videos.
Implementation Details. Initialized from the image-pretrained MLLM LLaVA-Next <cit.>, which is based on Vicuna-7B-v1.5 <cit.>, our TC-LLaVA-7B undergoes further video-instruction supervised fine-tuning (SFT) and evaluation on the datasets mentioned above. Following the experimental settings in <cit.>, we uniformly sample 16 frames from the raw video as input and use global average pooling to downsample the visual features from a shape of 24*24*d to 12*12*d, where d represents the input feature dimension of the LLM part. During the SFT stage, we employ a batch size of 128 and a learning rate of 2e-5, utilizing a cosine scheduler and a warmup ratio of 0.03. All reported results are evaluated on models trained for 7k steps on 8 NVIDIA A100 GPUs. For evaluation, we use the GPT-3.5-turbo-0125 model across benchmarks that require additional scoring or assessment.
§.§ Comparison with SOTA
In this section, we compare our TC-LLaVA with recent advanced works, including Video-LLaMA <cit.>, LLaMA-Adapter <cit.>, Video-ChatGPT <cit.>, Chat-UniVi <cit.>, MovieChat <cit.>, VideoChat <cit.>, VideoChat2 <cit.>, Vista-LLaMA <cit.>, LLaMA-VID <cit.>, IG-VLM LLaVA <cit.>, ST-LLM <cit.>, PLLaVA <cit.>, and GPT-4V <cit.>, across various video understanding benchmarks. The best performance is indicated in bold, and the second-best results are indicated with underlining. As shown in Table <ref>, our TC-LLaVA achieves a new state-of-the-art performance across MSVD-QA, TGIF-QA, and Video-ChatGPT, surpassing GPT-4V by 2.5%, 7.9%, and 0.02%, respectively. Additionally, our TC-LLaVA achieves the best performance across video question-answering benchmarks on the Score metric. Compared to the latest work PLLaVA, which is also initialized from LLaVA-Next and continues using original causal attention mask and RoPE, our TC-LLaVA outperforms it across all five evaluation benchmarks, demonstrating the effectiveness of our proposed methods.
Furthermore, we evaluate TC-LLaVA on MVbench, a multiple-choice video question answering benchmark, focusing on questions that require comprehensive understanding of the entire video. As shown in Table <ref>, TC-LLaVA achieves state-of-the-art performance in the average MVbench score. Specifically, for time-related tasks such as Action Sequence (AS), Object Existence (OE), Moving Count (MC), Moving Attribute (MA), State Change (SC), Character Order (CO), and Counterfactual Inference (CI), TC-LLaVA demonstrates a significant performance margin of at least 0.5% over other open-source models. Even when compared to GPT-4V, we maintain an edge in average performance across all 20 tasks by 13.1%.
§.§ Ablation Studies
In this subsection, we conduct ablation studies to assess the impact of key components. Specifically, we examine the manual ratio settings γ of Time-Aware Dual RoPE and other designs of the attention mask, beyond the proposed Frame-wise Block Causal Mask as shown in Figure <ref>. For these studies, we use the basic settings as a combination of both the original RoPE and Causal Attention Mask, while keeping the previously mentioned training settings. The evaluation is performed on MVbench. Finally, we present a visualized heatmap comparing the attention weights of our TC-Attention mechanism to the vanilla attention.
§.§.§ Time-Aware RoPE Ablation
Firstly, maintaining the global Rotary Position Embedding (RoPE) is crucial for preserving the global positional relationships between tokens. LLaVA treats each token in an image as having an independent position. When transitioning from image to video understanding tasks, we aim to retain the characteristics of these pre-trained weights while introducing time-aware RoPE to incorporate temporal information. If we entirely abandoned the use of RoPE, it could result in a partial loss of the capabilities encoded in the pre-trained LLM, ultimately affecting the final performance.
Secondly, RoPE employs a rotational invariant mechanism, which contrasts with the linear and fixed positional embedding schemes of absolute and learnable embeddings. These inherent differences can hinder RoPE's effective scalability when integrating it with other positional encoding techniques, potentially resulting in suboptimal performance or conflicting representations.
Finally, we explore the impact of the hyperparameter γ in the Time-Aware Dual RoPE. As shown in Figure <ref>, we evaluate TC-LLaVA on MVbench by setting the manual ratio γ across [0.1, 0.3, 0.5, 0.7, 1.0, 1.5, 2.0]. Compared to the baseline setting, which uses a single global RoPE (indicated by the red dashed line), introducing our Time-Aware RoPE improves performance, particularly when γ is close to 1.0, achieving the best result of 56.0%. However, further increasing γ slightly reduces the final performance. We believe this occurs because increasing γ too much may distort the original global position relationships encoded by the original RoPE, leading to suboptimal integration of spatial and temporal information. In the end, we choose γ = 1.0 for TC-LLaVA's experimental setting across the entire paper.
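To illustrate one way such a dual scheme can be realized, the sketch below applies a standard RoPE rotation whose phase is the global token position plus γ times a per-frame time index. This combination rule is our assumption for illustration, not necessarily the paper's exact formulation, and all names are hypothetical; note that adding phases composes the two rotations.

import torch

def rope_angles(positions, dim, base=10000.0):
    # Standard RoPE frequencies for half the head dimension.
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    return torch.outer(positions.float(), inv_freq)  # (seq, dim/2)

def apply_rotary(x, angles):
    # Rotate consecutive pairs (x0,x1),(x2,x3),... by the given angles.
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Hypothetical positions: global token index and frame index per token.
global_pos = torch.arange(8)
frame_id = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])  # 4 tokens per frame
gamma = 1.0  # the manual ratio studied in the ablation

q = torch.randn(8, 64)
# Dual RoPE: add a time-aware phase, scaled by gamma, to the global phase.
angles = rope_angles(global_pos, 64) + gamma * rope_angles(frame_id, 64)
q_rot = apply_rotary(q, angles)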
§.§.§ Attention Mask and Combination Ablation
We further explore the other attention mask variants mentioned above. As shown in Table <ref>, using Full Visual and Frame-wise (Fw.) Block Masks enhances visual token interactions within frames but weakens or sacrifices causal relationships. This is crucial for video understanding: future frames should be able to reference previous frames, but previous frames should not see future frames, similar to the way text sequences are handled in autoregressive generation. Our Fw. Block Causal Mask achieves better performance by both enhancing visual interactions and preserving the causal relationships between tokens. When combined with Time-Aware Dual RoPE, our TC-LLaVA demonstrates superior performance, scoring 56.6% on MVbench and 3.19 on VCGbench. This combination improves the Attention Module, the core component of the LLM, resulting in a more comprehensive and effective video understanding model.
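To make the mask concrete, the following sketch (ours, not the released implementation) builds an attention mask that is bidirectional within a frame and causal across frames, which is the behavior described above.

import torch

def frame_block_causal_mask(frame_ids: torch.Tensor) -> torch.Tensor:
    # Token i may attend to token j iff j's frame does not come after i's:
    # full attention inside a frame, causal attention across frames.
    return frame_ids.unsqueeze(1) >= frame_ids.unsqueeze(0)  # (seq, seq) bool

# Example: two frames of 3 visual tokens each, then 2 text tokens, where
# each text token gets its own increasing id so that text stays causal.
frame_ids = torch.tensor([0, 0, 0, 1, 1, 1, 2, 3])
mask = frame_block_causal_mask(frame_ids)
attn_bias = torch.where(mask, 0.0, float("-inf"))  # additive attention bias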
§ OTHER TIME-AWARE POSITION EMBEDDING
To further verify the effectiveness of our Time-Aware Dual RoPE, we conducted the ablation study presented in Table <ref>, which demonstrates the comparative performance of different positional encoding (PE) settings on two benchmarks, MVbench <cit.> and VCGbench <cit.>. The experimental settings used are consistent with those outlined in the paper. The baseline setting RoPE achieves a score of 54.3 on MVbench and 3.09 on VCGbench. However, when introducing Time Absolute Position Encoding (APE) and Time Relative Position Encoding (RPE) individually, the models fail to converge. This issue arises because the pre-trained language model (LLM) is based on RoPE, and the inherent differences between APE, RPE, and RoPE result in significant alterations to the inter-layer features learned during pre-training. Consequently, these discrepancies cause instability in the training loss, leading to fluctuations that hinder the model's ability to converge effectively. Therefore, the effective approach to incorporating temporal information into a pre-trained LLM is through Time RoPE, as it minimizes conflicts with the pre-existing model configurations. By aligning more closely with the RoPE framework used during pre-training, Time RoPE ensures a smoother integration of temporal features, thereby reducing instability during supervised finetuning (SFT) stage. By combining with the original RoPE, our proposed RoPE + Time RoPE (Time-Aware Dual RoPE) achieves further improvement and outperforms all other configurations, with enhanced scores of 56.0 on MVbench and 3.15 on VCGbench, demonstrating the effectiveness of our approach in PE setting methods and leveraging the spatial-temporal positional information.
§ TC-ATTENTION ON DIFFERENT BASE MODEL
To further assess the generalizability and robustness of our TC-Attention mechanism, we extended its application beyond Vicuna-7B-v1.5 to include other pre-trained LLM base models, specifically Llama3-8B-Instruct <cit.> and Mistral-7B-Instruct-v0.2 <cit.>. Each model was initialized from the pre-trained LLaVA-Next models <cit.>[https://huggingface.co./collections/llava-hf/llava-next-65f75c4afac77fd37dbbe6cf], with all subsequent fine-tuning experiments conducted under consistent SFT settings. The performance of the fine-tuned models was evaluated on two benchmarks, MVbench <cit.> and VCGbench <cit.>, with the results summarized in Table <ref>. Across different base models, the introduction of TC-Attention led to measurable improvements in performance on both benchmarks. These results underscore the efficacy of TC-Attention, demonstrating its ability to enhance the performance of diverse base models. The consistent gains observed across different architectures not only validate the adaptability of TC-Attention but also highlight its potential as a valuable component in optimizing the performance of large language models on complex tasks.
§.§.§ Attention Visualization
Finally, we illustrate the attention weights of both our TC-Attention and Vanilla Attention. For this experiment, we compare the video-finetuned LLaVA and TC-LLaVA by inputting the same video test samples and visualizing the average attention weights of different heads in the final decoding layer of the LLM. In the visualization of Figure <ref>, brighter colors represent higher weights while darker colors represent lower weights. The attention weights assigned to visual tokens are markedly more comprehensive and greater in our TC-Attention. This indicates that, unlike Vanilla Attention, which only focuses on the last few visual tokens of each frame, our TC-Attention attends to every visual token within and across frames. Additionally, the proposed TC-Attention assigns greater attention weight to subsequent text (user input), resulting in a considerably more substantial impact of visual tokens on language tokens. This demonstrates the effectiveness of TC-Attention in integrating visual and textual information, enhancing the model's overall understanding and performance.
§ CONCLUSION
In this work, we present TC-LLaVA, rethinking the attention design in large language models (LLMs) for video tasks. We introduce two core components to achieve this: Time-Aware Dual RoPE, which incorporates temporal information into the attention module while maintaining the global position information between visual and text tokens, and the Frame-wise Block Causal Attention Mask, which enhances the interaction of visual tokens within frames while preserving causal relationships across video frames. By conducting simple supervised finetuning (SFT) on video-related instruction datasets, our TC-LLaVA achieves new state-of-the-art performance across various video understanding benchmarks, showcasing the effectiveness of these methods. As LLMs continue to scale up, their powerful performance has led to the neglect of some design details. We hope our work encourages researchers to rethink these design aspects.
|
http://arxiv.org/abs/2409.03726v1 | 20240905173150 | Cyclic homology of Jordan superalgebras and related Lie superalgebras | [
"Consuelo Martínez",
"Efim Zelmanov",
"Zezhou Zhang"
] | math.RA | [
"math.RA",
"math.RT",
"17B60, 17B66"
] |
label1]Consuelo Martínez
label2]Efim Zelmanov
label3]Zezhou Zhang
[label1]organization=Departamento de Matemáticas, Universidad de Oviedo,
addressline=C/ Calvo Sotelo s/n,
city=Oviedo,
postcode=33007,
country=Spain
[label2]organization=SICM, Southern University of Science and Technology,
city=Shenzhen,
postcode=518055,
country=China
[label3]organization=Department of Mathematics, Beijing Normal University,
city=Beijing,
postcode=100875,
country=China
§ ABSTRACT
We study the relationship between cyclic homology of Jordan superalgebras and second cohomologies of their Tits-Kantor-Koecher Lie superalgebras.
In particular, we focus on Jordan superalgebras that are Kantor doubles of bracket algebras.
The obtained results are applied to computation of second cohomologies and universal central extensions of Hamiltonian and contact type Lie superalgebras over arbitrary rings of coefficients.
Keywords: Jordan algebra, superalgebra, superconformal algebra
MSC [2020]: Primary 17B60, 17B66; Secondary 17A70, 17C70, 17B68, 81R10
§ INTRODUCTION
Let L be a (super) algebra that is a Tits-Kantor-Koecher construction of J, a Jordan (super) algebra.
S. Tan (in <cit.>) and, a bit later, B. Allison, G. Benkart, Y. Gao in <cit.> linked second cohomology of the (super) algebra L to cyclic homology of the (super) algebra J, an analog of cyclic homology introduced by A. Connes (see <cit.> and <cit.>).
For the classical case of superconformal algebras K(1:n), an explicit description of second cohomologies is due to V. G. Kac and J. van de Leur <cit.>.
We study, in this paper, the relationship between cyclic homology of Jordan superalgebras and second cohomologies of their Tits-Kantor-Koecher Lie superalgebras. We note that apart from this work and the above mentioned <cit.> and <cit.>, we are not aware of any other references concerning cyclic homology of Jordan (super) algebras.
In particular, we focus on Jordan superalgebras that are Kantor doubles of
bracket algebras. The main results related to cyclic homology of Kantor doubles appear in Section 4.
The first three sections of the paper are intended to provide the reader with all needed definitions and preliminary results concerning Lie superalgebras (Section 1), Jordan superalgebras (Section 2) and brackets (Section 3).
Our interest in Kantor doubles stems from the fact that these Jordan superalgebras are related to superconformal algebras via the Tits-Kantor-Koecher construction.
The obtained results are applied to the computation of second cohomologies and universal central extensions of Hamiltonian and contact type Lie superalgebras over arbitrary rings of coefficients, in sections 5 through 7.
Due to these applications to superconformal algebras – where zero characteristic is needed – we will assume that all considered vector spaces are over a field of characteristic zero, even though all results in Section 2 are valid over fields of characteristic not 2, 3.
§ PRELIMINARIES (LIE SUPERALGEBRAS)
Let L=L_0̅+L_1̅ be a Lie superalgebra.
A bilinear mapping L× L→ F, a× b↦ (a|b)∈ F is called a 2-cocycle if and only if it is super skew-symmetric and
([a, b]|c)+(-1)^|a|(|b|+|c|)([b, c]|a)+(-1)^|c|(|a|+|b|)([c, a]|b)=0
for arbitrary elements a,b,c∈ L.
For an arbitrary bilinear functional λ :L→ F,
the bilinear mapping (a|b)=λ ([a,b]) is a 2-cocycle.
Such cocycles are called 2-coboundaries.
Let C^2(L) be the vector space of all 2-cocycles of L, and
B^2(L) be the vector space of all 2-coboundaries of L.
A Lie superalgebra L is said to be perfect if L=[L,L].
Let L, L̂ be perfect Lie superalgebras.
A surjective homomorphism L̂φ⟶L
is called a central extension if the kernel of φ lies in the center
Z(L̂) of the superalgebra L̂.
Following ideas of I. Schur <cit.>, H. Garland <cit.> showed that
for an arbitrary perfect Lie (super)algebra L, there exists
a unique universal central extension L̂u⟶L such that for an arbitrary central extension L̃φ⟶L,
there exists a homomorphism χ :L̂→L̃ making the diagram

L̂ u⟶ L
χ ↓   ↗ φ
L̃

commutative.
The vector space H^2(L) of second cohomologies can be identified with the dual space Z(L̂)^*, where Z(L̂) is the center of the universal central extension L̂.
The center Z(L̂) can be identified with the space (L⊗ L)/V, where V is the subspace spanned by all tensors
a⊗ b + (-1)^|a|·|b|b⊗ a, [a,b]⊗ c + (-1)^|a|(|b|+|c|)[b, c]⊗ a + (-1)^|c|(|a|+|b|)[c, a]⊗ b
with a,b,c∈ L.
For further references to universal central extensions see <cit.>.
Suppose that a Lie superalgebra L is ℤ-graded, L=∑ _i∈ℤL_i = L_-2+L_0+L_2, all homogeneous components L_i, i≠ -2,0,2 are equal to 0.
Suppose also that L_0 = [L_-2,L_2].
Let (x | y) be a 2-cocycle on L.
Then for arbitrary elements a_-2,c_-2∈ L_-2; b_2,d_2∈ L_2, we have
([a_-2,[c_-2,d_2]] | b_2) + (-1)^|b|· (|c|+|d|)(a_-2 | [b_2, [c_-2,d_2]])
= (-1)^|b|· (|c|+|d|)([[a_-2,b_2],c_-2] | d_2) +
(-1)^|a|· |c| + |b|· |d|(c_-2 | [[a_-2,b_2],d_2]).
In fact, both sides are equal to (-1)^|b|· (|c|+|d|)([a_-2,b_2] | [c_-2,d_2]).
A bilinear mapping L_-2× L_2→ F, a_-2× b_2↦ (a_-2 | b_2) satisfying (<ref>) uniquely extends to a 2-cocycle on L.
Without loss of generality, we assume that L is a Lie algebra.
We define the 2-cocycle ψ(x|y) on L as follows:
ψ (a_-2 | b_2)=(a_-2 | b_2),
ψ ([a_-2,b_2] | [c_-2,d_2])= ([a_-2,[c_-2,d_2]] | b_2) + (-1)^|b|· (|c|+|d|)(a_-2 | [b_2, [c_-2,d_2]]),
ψ (L_i | L_j)=(0) for i+j≠ 0.
The identity (<ref>) implies that
ψ (∑ _i [a_-2^(i),b_2^(i)], ∑ _j [c_-2^(j),d_2^(j)])=∑ _i,jψ ([a_-2^(i),b_2^(i)],[c_-2^(j),d_2^(j)]))
is well-defined.
Let us check that
ψ ([x,y] | z) + ψ ([y,z] | x) + ψ ([z,x] | y) = 0.
We need to consider only two cases:
* x=a_-2,y=b_2,z=[c_-2,d_2];
* x,y,z∈ [L_-2,L_2].
Case 1. We need to show that
ψ ([a_-2,b_2] | [c_-2,d_2]) + ψ ([b_2, [c_-2,d_2]] | a_-2) + ψ ([[c_-2,d_2], a_-2] | b_2) = 0.
This expression is equal to
([a_-2, [c_-2,d_2]] | b_2) + (a_-2 | [b_2, [c_-2,d_2]])
- (a_-2 | [b_2, [c_-2,d_2]]) + ([[c_-2,d_2], a_-2] | b_2) = 0.
Case 2. Let ρ_1,ρ_2,ρ_3∈ [L_-2,L_2],ρ_2=[a_-2,b_2].
Consider the mapping
L_-2⊗ L_2ψ⟶F, x_-2⊗ y_2ψ↦(x_-2 | y_2).
The tensor product L_-2⊗ L_2 is a module over the Lie algebra [L_-2,L_2].
Denote ρ_2=a_-2⊗ b_2.
We need to show that
ψ ([ρ_1,ρ_2] | ρ_3) + ψ ([ρ_2,ρ_3] | ρ_1) + ψ ([ρ_3,ρ_1] | ρ_2) = 0.
Taking into account that
ψ ([ρ_1,ρ_2] | ρ_3)=-ψ (ρ_3 | [ρ_1,ρ_2]), ψ ([ρ_2,ρ_3] | ρ_1)=ψ (ρ_1 | [ρ_3,ρ_2]),
the last equality follows from
ψ(-[ρ_3, [ρ_1,ρ_2]]+[ρ_1, [ρ_3,ρ_2]]+[[ρ_3,ρ_1],ρ_2])=0,
which is the Jacobi identity.
This completes the proof of the lemma.
§ JORDAN SUPERALGEBRAS
§.§ Preliminaries
We start with basic definitions.
A Jordan algebra is a vector space J with a binary bilinear equation (x,y)↦ xy satisfying the following identities:
xy=yx, (x^2y)x=x^2(yx).
See <cit.>.
Let V be a vector space with countable dimension and let G=G(V) denote the Grassmann (or exterior) algebra over V; that is, the quotient of the tensor algebra over the ideal generated by symmetric tensors.
Then G(V) is a ℤ/2ℤ-graded algebra,
G(V)=G(V)_0̅+G(V)_1̅.
Its even part G(V)_0̅ is the linear span of all tensors of even length, and the odd part G(V)_1̅ is the linear span of all tensors of odd length.
Let V be a variety of algebras defined by homogeneous identities (see <cit.>). A superalgebra A=A_0̅+A_1̅ is called a V-superalgebra if its Grassmann envelope G(A)=A_0̅⊗ G(V)_0̅ + A_1̅⊗ G(V)_1̅ lies in V.
Thus a Jordan superalgebra is a ℤ/2ℤ-graded algebra J=J_0̅+J_1̅ satisfying the graded identities
xy=(-1)^|x|· |y|yx (supercommutativity), and
((xy)z)t + (-1)^|y|· |z| + |y|· |t| + |z|· |t|((xt)z)y + (-1)^|x|· |y| + |x|· |z| + |x|· |t| + |z|· |t|((yt)z)x
= (xy)(zt) + (-1)^|y|· |z|(xz)(yt) + (-1)^|t|·(|y|+|z|)(xt)(yz).
For an element a∈ J_0̅∪ J_1̅, let R(a) denote the operator of right multiplication R(a):J∋ x↦ xa, so that xR(a)R(b)=(xa)b. [In this paper operators are always applied on the right of vectors.]
For arbitrary elements a,b∈ J_0̅∪ J_1̅, the operator D(a,b)=R(a)R(b) - (-1)^|a|· |b|R(b)R(a) is a derivation of the superalgebra J. Such derivations are called inner derivations. The span of all inner derivations is a Lie superalgebra. We will denote it as Inder(J).
For more references on Jordan superalgebras see <cit.>.
J. Tits <cit.> made the following observation. Let L be a Lie (super) algebra, L_0̅ contains 𝔰𝔩(2) = Fe + Ff + Fh, [e,f]=h, [h,e]=2e, [h,f]=-2f (we call such triple e,f,h an 𝔰𝔩(2)-triple).
Suppose that the operator ad(h): L→ L is diagonalizable and has eigenvalues -2,0,2, so L=L_-2+L_0+L_2 is a direct sum of eigenspaces. Then J=(L_2, a· b=[[a,f],b]) is a Jordan (super)algebra.
Moreover, J. Tits <cit.>, I. Kantor <cit.> and M. Koecher <cit.> showed that every Jordan (super)algebra can be obtained in this way.
The corresponding Lie (super)algebra L is not unique. We will recall the constructions of two Lie superalgebras with these properties: the largest (universal) one and the smallest (reduced) one.
For elements x,y,z∈ J_0̅∪ J_1̅ of a Jordan superalgebra J, we consider their Jordan triple product
{ x,y,z } = (xy)z + x(yz) - (-1)^|x|· |y|y(xz).
Fix elements y,z and consider the operator V(y,z): x↦{ x,y,z }. Then V(y,z)=D(y,z)+R(yz).
§.§ The universal Tits-Kantor-Koecher construction
In what follows we introduce the TKK construction of a unital Jordan (super)algebra in the shortest way, using bases, even though it is possible to do it in a basis-free manner.
Let J be a Jordan (super)algebra with an identity 1. Consider a basis { e_i } _i∈ I of J. Let
{ e_i,e_j,e_k } = ∑ _tγ _ijk^te_t, γ _ijk^t∈ F.
Define a Lie (super)algebra L by generators { x_i^-, x_j^+} _i,j and relations
[[x_i^+, x_j^-],x_k^+]=2 ∑ _tγ _ijk^t x_t^+,
[[x_i^-, x_j^+],x_k^-]=2 ∑ _tγ _ijk^t x_t^-,
[x_i^-, x_j^-] = [x_i^+, x_j^+] = 0.
The resulting algebra L is ℤ-graded (with deg x_i^+=2, deg x_i^-=-2).
Moreover, L is spanned by x_i^+, x_j^-, [x_i^+, x_j^-], which implies that L_i=(0) for i≠ -2,0,2.
Choose x_1=1. Then x_1^+, x_1^-, [x_1^+, x_1^-] is an 𝔰𝔩(2)-triple, J^+=span(x_i^+, i∈ I)=L_2, J^-=span(x_i^-, i∈ I)=L_-2 are eigenspaces of L with respect to ad(h).
The (super)algebra L=TKK(J) is universal in the following sense. Let L' be a Lie (super)algebra, L'_0̅⊃𝔰𝔩(2)= Fe' + Ff' + Fh', L'=L'_-2+L'_0+L'_2, L'_0=[L'_-2, L'_2], (L'_2,∘ )≅ J.
Then there exists an epimorphism
φ :TKK(J)⟶L', φ (x_1^+)=e', φ (x_1^-)=f',
and the kernel ker φ lies in the center of TKK(J).
It is easy to see that a Lie (super)algebra with this universal property is unique. In particular, the construction above does not depend on a choice of a basis in J.
§.§ The reduced Tits-Kantor-Koecher construction
Again, let J be a Jordan (super)algebra with 1.
Consider two copies of the vector space J: J^-, J^+, and their direct sum J^-⊕ J^+.
For arbitrary elements a^-∈ J^-,b^+∈ J^+, consider the linear operator
δ (a^-,b^+):J^-⊕ J^+→ J^-⊕ J^+, x^-↦ -(-1)^|a||b|{ x,b,a } ^-, x^+↦{ x,a,b } ^+.
It follows from Jordan identities that the span δ (J^-,J^+) is a Lie (super)algebra.
The direct sum of vector spaces L=TKK(J)=J^-⊕δ (J^-,J^+)⊕ J^+ with the operations [J^-,J^-]=[J^+,J^+]=(0), [a^-,b^+]=2δ (a^-,b^+) is a Lie superalgebra.
The elements 1^-,1^+,2δ (1^-,1^+) form an 𝔰𝔩(2)-triple. The Lie superalgebra TKK(J) is called the reduced Tits-Kantor-Koecher Lie (super)algebra of J. It is easy to see that Z(TKK(J))=(0).
§.§ Cyclic homology of Jordan (super)algebras
Let J be a Jordan (super)algebra with the identity element e.
Let L=TKK(J)=J^++δ (J^+,J^-)+ J^-.
Let (x|y) be a 2-cocycle on the (super)algebra L.
The mapping J^+⊗ e^-→ [J^+, e^-], a^+⊗ e^-↦ [a^+, e^-] is a bijection.
Indeed, if [a^+, e^-]=0, then 0=[[a^+, e^-],e^+]=2a^+, hence a^+=0.
Hence we can define a linear functional λ : L→ F such that for an arbitrary element a∈ J we have (a^+ | e^-)=λ ([a^+,e^-]).
Subtracting the coboundary corresponding to λ we can assume that
(J^+ | e^-)=0.
Let C_0^2(L) be the vector space of 2-cocycles satisfying (<ref>).
We have shown that C^2(L)=C_0^2(L) + B^2(L).
Let (x|y) be a 2-cocycle from C_0^2(L). Then (J^- | e^+)=0.
An arbitrary element from J^- can be represented as [[a^+, e^-],e^-], a^+∈ J^+.
Applying the identity ([x,y] | z) = (x | [y,z])- (-1)^|x|· |y|(y | [x,z]) twice, we get
([[a^+, e^-],e^-] | e^+) = ([a^+, e^-] | [e^-,e^+]) - (e^- | [[a^+, e^-],e^+])
= (a^+ | [e^-,[ e^-,e^+]]) - (e^- | [a^+, [e^-,e^+]]) - (e^- | [[a^+, e^-],e^+])
= -2(a^+ | e^-) - 4(e^- | a^+) = 2(a^+ | e^-) = 0.
From now on we consider a 2-cocycle (x | y) such that
(J^- | e^+)=(J^+ | e^-)=(0).
Define the bilinear mapping J× J→ F as (a | b)=(a^+ | b^-).
For arbitrary elements a,b∈ J_0̅∪ J_1̅, we have (a | b) + (-1)^|a|· |b| (b | a)=0.
(a^+ | b^-) = -1/2 ([[a^-,e^+],e^+] | b^-)
= -1/2 ( ([a^-,e^+] | [e^+,b^-]) - (e^+ | [[a^-,e^+],b^-]) )
= -1/2 ([a^-,e^+] | [e^+,b^-])
= -1/2 ( (a^- | [e^+,[e^+,b^-]]) - (e^+ | [a^-,[e^+,b^-]]) )
= (a^- | b^+),
since [e^+,[e^+,b^-]]=-2b^+. Now, (a^- | b^+)=-(-1)^|a|· |b|(b^+ | a^-), which completes the proof of the lemma.
For arbitrary elements a,b,c∈ J, we have
(ab | c) + (-1)^|a|(|b|+|c|)(bc | a) + (-1)^|b|· |c|(ac | b) = 0.
Let a,b,c,d∈ J_0̅∪ J_1̅. The equality (<ref>) applied to ([a^+,b^-] | [c^+,d^-]) yields:
([[a^+,b^-],c^+] | d^-) - (-1)^|c|· |b| + |c|· |a| + |a|· |b| (c^+ | [[b^-,a^+],d^-])
= - (-1)^|b|· |c| + |b|· |d| + |c|· |d| ([a^+,[d^-,c^+]] | b^-) + (a^+ | [b^+,[c^+,d^-]]).
In the language of Jordan triple products, it looks as:
({ a,b,c } | d) - (-1)^|c|· |b| + |c|· |a| + |a|· |b| (c | { b,a,d } )
= - (-1)^|b|· |c| + |b|· |d| + |c|· |d| ({ a,d,c } | b) + (a | { b,c,d }).
Equivalently,
({ a,b,c } | d) + (-1)^|a|· |b| + |d|· |c| ({ b,a,d } | c)
+ (-1)^|b|· |c| + |b|· |d| + |c|· |d| ({ a,d,c } | b)
+ (-1)^|a|(|b|+|c|+|d|)({ b,c,d } | a) = 0.
Let d=e. Then we get the assertion of the lemma.
Let J be a Jordan (super) algebra. We call a (super) skew-symmetric bilinear mapping J× J→ F a cyclic cocycle if the identity (<ref>) holds.
The identity (<ref>) immediately implies that (J | e)=(0).
Let C(J) be the vector space of all cyclic cocycles on J.
We have described the linear mapping C_0^2(L)μ⟶C(J).
Since a 2-cocycle on L is uniquely determined by its values on J^+⊗ J^-, it follows that the mapping μ is injective.
The mapping μ is bijective.
We need to show that (<ref>) implies (<ref>).
Again without loss of generality, we will consider the case of algebras, not superalgebras.
Choose arbitrary elements a,b,c,d∈ J.
The left hand side of (<ref>) is:
((ab)c^(1) + a(bc)^(2) - b(ac)^(3) | d) + ((ab)d^(1) + b(ad)^(4) - (bd)a^(5) | c)
+ ((ad)c^(4) + a(dc)^(6) - (ac)d^(3) | b) + ((bc)d^(2) + b(cd)^(6) - (bd)c^(5) | a).
The superscript indicates the number of the “grouping”.
For example, in group (1), the identity (<ref>) is applied to the three elements ab, c, d.
We get
- (dc | ab)^(1) - (ad | bc)^(2) + (bd | ac)^(3)
- (bc | ad)^(4) + (ac | bd)^(5) - (ab | dc)^(6).
This expression is equal to 0 since the cocycle (x | y) is skew-symmetric.
Recall that Inder(J) is the span of all inner derivations of a Jordan superalgebra J, and Inder(J) is a Lie superalgebra.
For arbitrary elements a,b,c∈ J_0̅∪ J_1̅, we have
D(ab,c) + (-1)^|a|(|b|+|c|)D(bc,a) + (-1)^|b|· |c|D(ac,b) = 0
(see <cit.>).
Let λ :Inder(J)→ F be a linear functional.
From (<ref>) it follows that (a | b)=λ (D(a,b)) is a cyclic cocycle of the Jordan super-algebra J.
We call such cocycles cyclic coboundaries.
Let B(J) be the vector space of all cyclic coboundaries of J.
Following S. Tan <cit.> and B. Allison, G. Benkart, Y. Gao <cit.>[In <cit.>, HC(J) is called the full skew-dihedral homology group.], we call HC(J)=C(J)/B(J) the cyclic homology space of the Jordan (super) algebra J.
In the important case of J coming from an associative algebra A, HC(J) is the first cyclic homology group of A in the sense of A. Connes.
μ (B^2(L)∩ C_0^2(L)) = B(J).
Let a 2-cocycle φ∈ C_0^2(L) be a coboundary. This means that there exists a linear functional λ :L→ F such that φ (a | b)=λ ([a,b]). In particular, λ ([J^+,e^-])=λ ([J^-,e^+])=(0).
Let ψ = μ (φ). Then for arbitrary elements a,b∈ J, we have ψ (a | b)=λ ([a^+,b^-]).
By Lemma 3, λ ([a^+,b^-]) = - (-1)^|a|· |b|λ ([b^+,a^-])=1/2(λ([a^+,b^-])-(-1)^|a|· |b|λ ([b^+,a^-])).
Define the linear functional δ : Inder(J)→ F, δ (D(a,b))=λ ([a^+,b^-]).
We need to show that ∑ _i D(a_i,b_i)=0 implies λ (∑ _i [a_i^+,b_i^-])=0. Without loss of generality, we assume that |a_i|+|b_i| does not depend on i. We will show that ∑ _i D(a_i,b_i)=0 implies ∑ _i ([a_i^+,b_i^-] - (-1)^|a_i|· |b_i|[b_i^+,a_i^-]) ∈ Z(L). Therefore ∑ _i ([a_i^+,b_i^-] - (-1)^|a_i|· |b_i|[b_i^+,a_i^-]) = 0 as L is the reduced Tits-Kantor-Koecher Lie superalgebra of J.
For an arbitrary element c∈ J, we have
[∑ _i ([a_i^+,b_i^-] - (-1)^|a_i|· |b_i|[b_i^+,a_i^-]),c^+]
=2 ∑ _i ({ a_i,b_i,c } ^+ - (-1)^|a_i|· |b_i|{ b_i,a_i,c } ^+ )
= -4(-1)^|c|(|a_i|+|b_i|)(c∑ _i D(a_i,b_i))^+ = 0.
Similarly,
[∑ _i ([a_i^+,b_i^-] - (-1)^|a_i|· |b_i|[b_i^+,a_i^-]),J^-]=(0).
We showed that
μ (B^2(L)∩ C_0^2(L)) ⊆ B(J).
Now let δ : Inder(J)→ F be a linear functional. We will show that for arbitrary elements a_i,b_i∈ J, ∑ _i [a_i^+,b_i^-]=0 implies ∑ _i D(a_i,b_i)=0.
For an arbitrary element x∈ J, we have { x,a,b } = (xa)b + x(ab) - (-1)^|a|· |b|(xb)a = x(D(a,b)+R(ab)). Hence
[x^-,∑ _i [a_i^+,b_i^-]]=2(x(∑ _i D(a_i,b_i) + R(∑ _i a_ib_i)))^-,
which implies
∑ _i D(a_i,b_i) + R(∑ _i a_ib_i) = 0.
Applying the left hand side operator to x=e, we will get ∑ _i a_ib_i = 0, and therefore, ∑ _i D(a_i,b_i) = 0.
Define the linear functional λ : L→ F as follows:
λ (J^+) = λ (J^-) = (0), λ ([a_i^+,b_i^-])=δ (D(a_i,b_i)).
The cocycle corresponding to λ lies in C_0^2(L), and its image under μ is the cyclic coboundary defined by δ.
We proved the following theorem.
H^2(L)≅ HC(J).
Let J be a finite dimensional simple Jordan algebra with the trace tr: J→ F (see <cit.>). Let F[t,t^-1] be the algebra of Laurent polynomials, and let Res f(t) denote the coefficient of the Laurent polynomial f(t) at t^-1.
Then,
(J⊗ F[t,t^-1])× (J⊗ F[t,t^-1]) → F, (af(t) | bg(t))=tr(ab)Res(f'(t)g(t)),
a,b∈ J, is a cyclic cocycle on the Jordan algebra J⊗ F[t,t^-1] and
dim_F HC(J⊗ F[t,t^-1]) = 1.
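As a quick computational sanity check of this example in the simplest case dim J = 1, tr(ab) = ab (so the cocycle is (f | g) = Res f'g on F[t,t^-1]), one can verify skew-symmetry and the cyclic cocycle identity with sympy; the test Laurent polynomials below are arbitrary.

from sympy import symbols, diff, expand

t = symbols('t')

def res(expr):
    # Coefficient of t**(-1) of a Laurent polynomial.
    return expand(expr).coeff(t, -1)

def pair(f, g):
    # The cocycle (f | g) = Res(f' g).
    return res(diff(f, t) * g)

f, g, h = t**3 + t**-2, 5*t**-1 + t, t**-4 - 2*t**2
assert pair(f, g) + pair(g, f) == 0                     # Res((fg)') = 0
assert pair(f*g, h) + pair(g*h, f) + pair(f*h, g) == 0  # cyclic identity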
§ BRACKETS
Let A=A_0̅+ A_1̅ be an associative commutative superalgebra. A binary bilinear operation [ , ]:A× A→ A is called a Poisson bracket if
* (A,[ , ]) is a Lie superalgebra,
* [ab,c]=a[b,c]+(-1)^|b|· |c|[a,c]b.
Let A=F[p_1,… ,p_n,q_1,… ,q_n] be a polynomial algebra in 2n variables. Then
[f,g]=∑ _i=1^n(∂ f/∂ p_i∂ g/∂ q_i -∂ f/∂ q_i∂ g/∂ p_i )
is a Poisson bracket in A.
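For readers who want to check the axioms mechanically, here is a small sympy verification of the Leibniz rule and the Jacobi identity for the n = 1 case of this bracket; the test polynomials are arbitrary.

from sympy import symbols, diff, simplify

p, q = symbols('p q')

def bracket(f, g):
    # Canonical Poisson bracket on F[p, q] (the n = 1 case above).
    return diff(f, p) * diff(g, q) - diff(f, q) * diff(g, p)

f, g, h = p**2 * q, p + q**3, p * q
# Leibniz rule: [fg, h] = f [g, h] + [f, h] g.
assert simplify(bracket(f*g, h) - f*bracket(g, h) - bracket(f, h)*g) == 0
# Jacobi identity.
jac = (bracket(bracket(f, g), h) + bracket(bracket(g, h), f)
       + bracket(bracket(h, f), g))
assert simplify(jac) == 0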
Let G(n) be the Grassmann algebra on an n-dimensional vector space V. Let ξ _1,… ,ξ _n be a basis of V. The bracket [ξ _i,ξ _j]=δ _ij, 1≤ i,j≤ n, uniquely extends to a Poisson bracket on the superalgebra G(n)=G(n)_0̅+ G(n)_1̅.
An associative commutative superalgebra with a Poisson bracket is called a Poisson superalgebra.
The Poisson superalgebra of Example 2 is denoted as H_n.
Given two Poisson superalgebras A,B, their tensor product is again a Poisson superalgebra:
[a_1⊗ b_1, a_2⊗ b_2]=(-1)^|b_1|· |a_2|([a_1,a_2]⊗ b_1b_2 + a_1a_2⊗ [b_1,b_2]).
I. Kantor <cit.> noticed that if A is a Poisson superalgebra then the vector space J=A+Av with the operation that extends the multiplication in A and a(bv)=abv, (bv)a=(-1)^|a|bav, (av)(bv)=(-1)^|b|[a,b]; a,b∈ A_0̅∪ A_1̅, is a Jordan superalgebra.
We call it the Kantor double of the bracket [ , ] and denote it as K(A, [ , ]) or simply K(A).
There exist, however, non-Poisson brackets whose Kantor doubles are Jordan superalgebras. We call such brackets Jordan brackets.
D. King and K. McCrimmon <cit.> characterized Jordan brackets in terms of identities.
Let A be an associative commutative superalgebra with an even derivation D. Then the bracket [a,b]=D(a)b-aD(b) is Jordan, though not Poisson. We say that [a,b] is a bracket of vector type.
We notice that if A is an associative commutative superalgebra with the identity element 1 and [ , ] is a Jordan bracket on A, then a'=[a,1] is an even derivation on A. For arbitrary elements a,b,c∈ A_0̅∪ A_1̅, we have
[ab,c]=a[b,c]+(-1)^|b|· |c|[a,c]b+abc' .
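For the vector type bracket of the Example above, both the relation a'=[a,1] and identity (<ref>) can be checked mechanically; here is a sympy sketch over F[t] with D = d/dt (the choice of algebra and of sample polynomials is ours, for illustration; in the even case all signs are +1).

from sympy import symbols, diff, simplify

t = symbols('t')
D = lambda f: diff(f, t)

def vbracket(a, b):
    # Vector type bracket [a, b] = D(a) b - a D(b).
    return D(a)*b - a*D(b)

a, b, c = t**2 + 1, t**3, 2*t - t**4
# a' = [a, 1] recovers the derivation:
assert simplify(vbracket(a, 1) - D(a)) == 0
# Identity: [ab, c] = a[b, c] + [a, c]b + ab c'.
lhs = vbracket(a*b, c)
rhs = a*vbracket(b, c) + vbracket(a, c)*b + a*b*D(c)
assert simplify(lhs - rhs) == 0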
Let A=A_0̅+ A_1̅ be an associative commutative superalgebra with a Jordan bracket. Then there exists a unique Jordan bracket on A⊗ G(n) that
* extends the Jordan bracket on A,
* extends the Poisson bracket (see Example 3) on G(n),
* [ξ _1⋯ξ _k,a]=(k-1)ξ _1⋯ξ _k a' for an arbitrary element a∈ A.
Straightforward computation.
A binary bilinear product [ , ] : A× A→ A is called a contact bracket if
* (A, [ , ]) is a Lie superalgebra,
* the linear transformation D:a↦ [a,1], a∈ A, is an even derivation of A,
* [ab,c] = a[b,c] + (-1)^|b|· |c|[a,c]b + ab D(c) for arbitrary elements a,b,c∈ A_0̅∪ A_1̅.
N. Cantarini and V. Kac <cit.> noticed that Jordan brackets are in 1-1 correspondence with contact brackets. More precisely, if [a,b] is a contact bracket with derivation D(a)=[a,1], then ⟨ a,b ⟩ = [a,b]-1/2(D(a)b-aD(b)) is a Jordan bracket. Even derivations corresponding to the brackets [ , ],⟨ , ⟩ are different: ⟨ a,1 ⟩ = 1/2[a,1].
An associative commutative superalgebra A with a contact bracket is called a contact algebra.
Now we are ready to formulate the analog of Lemma 7 for contact algebras.
Let A be a contact algebra. Then there exists a unique contact bracket on A⊗ G(n) that
* extends the contact bracket on A,
* extends the Poisson bracket on G(n),
* [ξ _1⋯ξ _k,a]=k-2/2ξ _1⋯ξ _k a' for an arbitrary element a∈ A.
The lemma above is a partial answer to Question 1 from <cit.>.
Let A be an associative commutative superalgebra with a contact bracket [a,b]. Following N. Cantarini and V. Kac <cit.>, we define the Jordan bracket ⟨ a,b ⟩ = [a,b]-1/2(D(a)b-aD(b)). Extend the Jordan bracket ⟨ a,b ⟩ to a Jordan bracket on A⊗ G(n) as in Lemma 7. Extend the contact bracket [a,b] to a contact bracket on A⊗ G(n+3) as in Lemma 8.
Let L=(A⊗ G(n+3),[ , ]).
Then (see <cit.>)
TKK(K(A⊗ G(n),⟨ , ⟩ ))≅ [L,L] .
§ CYCLIC HOMOLOGY OF KANTOR DOUBLES
Let A be an associative commutative superalgebra with a Jordan bracket [ , ]. Let J=K(A, [ , ]) be the Kantor double, J=A+Av, J_0̅=A_0̅+ A_1v, J_1̅=A_1̅+ A_0v.
§.§ Poisson centers
Let A be an associative commutative superalgebra with a Jordan bracket [x,y]. We define the Poisson center of A as Z_p={ u∈ A | u'=0, (cu)'+[c,u]=0, ∀ c∈ A }.
If [x,y] is a Poisson bracket then Z_p is the well known Poisson center.
Let J = A + Av be the Kantor double Jordan superalgebra and let λ : A→ F be a linear functional. Define a bilinear mapping J× J→ F, x× y↦ (x | y)_λ via (A | A)_λ=(0),(Av | A)_λ=(0), (av | bv)_λ=(-1)^|b|λ (ab); a,b∈ A.
(1) For an arbitrary linear functional λ∈ A^*, (x | y)_λ is a cyclic cocycle of J;
(2) (x | y)_λ is a coboundary if and only if λ (Z_p)=(0).
The assertion (1) is straightforward.
Let us check the assertion (2).
Choose elements a,b ∈ A_0̅∪ A_1̅ and consider the inner derivation D(av,bv)=R(av)R(bv)-(-1)^(|a|+1)(|b|+1)R(bv)R(av) of the Jordan superalgebra J=Kan(A). For an arbitrary element c∈ A we have
cD(av,bv) = cav· bv + (-1)^|a||b|+|a|+|b|cbv· av
= (-1)^|b|[ca,b]+(-1)^|a||b|+|b|[cb,a]
= (-1)^|b|+|a||b|[c,b]a+(-1)^|b|c[a,b]+(-1)^|b|cab'
.4cm +(-1)^|b|[c,a]b+(-1)^|b|+|a||b|c[b,a] +(-1)^|a||b|+|b|cba'
=(-1)^|b|((-1)^|a||b|[c,b]a+[c,a]b)+(-1)^|b|c(ab)' .
On the other hand,
[c,ab]=[c,a]b+(-1)^|a||b|[c,b]a-c'ab.
Hence
cD(av,bv)=(-1)^|b|([c,ab]+(cab)') .
Furthermore,
vD(av,bv)=(-1)^|a|[1,a]bv+(-1)^|a||b|+|a|[1,b]av=(-1)^|a|[1,ab]v=-(-1)^|b|v(ab)'.
This implies that ∑ _i D(a_iv,b_iv)=0 if and only if ∑ _i (-1)^|b_i|a_ib_i∈ Z_p.
Suppose that a cyclic cocycle (x | y)_λ is a coboundary. Then there exists a linear functional μ : Inder(J)→ F, such that (x | y)_λ = μ (D(x,y)). Let u∈ Z_p. Then D(uv,v)=0, which implies (uv | v)_λ=λ (u)=μ (0) = 0. We proved that λ (Z_p)=(0).
Conversely, suppose that λ (Z_p)=(0). Define μ : D(Av,Av) → F via μ (D(av,bv))=(-1)^|b|λ (ab). To show that this mapping is well-defined, we need to verify that ∑ _i D(a_iv,b_iv)=0 implies ∑ _i(-1)^|b_i|λ (a_ib_i)=0. But we know that ∑ _i D(a_iv,b_iv)=0 if and only if ∑ _i (-1)^|b_i| a_ib_i ∈ Z_p. And ∑ _i (-1)^|b_i| a_ib_i ∈ Z_p implies that ∑ _i(-1)^|b_i|λ (a_ib_i)=0 by our assumption.
Extend μ to a mapping Inder(J)→ F via μ (D(A,A))=μ (D(Av,A)) = (0). Hence the cocycle (x|y)_λ is a coboundary. This completes the proof of the lemma.
§.§ Bracket cyclic cocycles
Let (x | y) be a cyclic cocycle on J=Kan(A). Consider the linear functional λ : A→ F defined via λ (a)=(av | v). Now,
(av | bv) = (a | v· bv) - (-1)^|a|(v | a· bv) = (-1)^|b|+1(a | b') + (-1)^|b|+1λ (ab).
It follows that
(a' | b) = (-1)^|a|· |b|(b' | a).
Let us explore the cocycle condition for av,bv,c; a,b,c∈ A_0̅∪ A_1̅:
(av· bv | c) + (-1)^(|a|+1)(|b|+|c|+1)(bv· c | av) + (-1)^|c|(|a|+|b|)(c· av | bv) = 0.
In view of (<ref>), it is equivalent to
([a,b] | c) = (a' | bc) - (-1)^|a|· |b|(b' | ac).
We call a cyclic cocycle (x | y) a bracket cyclic cocycle if (<ref>) and (<ref>) hold.
(f | g) = Resf'g is a bracket cyclic cocycle on F[t,t^-1] relative to the bracket [f,g]=f'g-fg'.
We showed that if (x | y) is a cyclic cocycle on Kan(A), then
* the restriction of (x | y) to A× A is a bracket cyclic cocycle,
* (av | bv) = (-1)^|b|+1(a | b') -(a | b)_λ.
Let (x | y) be a bracket cyclic cocycle on A; let λ∈ A^*, J=Kan(A)=A+Av. Then the super skew-symmetric mapping J× J→ F that
* extends (x | y) on A× A,
* (av | bv) = (-1)^|b|+1(a | b') -(a | b)_λ,
* (Av | A)=(0),
is a cyclic cocycle on J.
Choose three elements for the cocycle identity. If all three elements lie in A or two elements of them lie in Av, then the identity follows from the computation above. If one or three elements lie in Av, then the identity follows from (Av | A)=(0).
Let C_br(A) be the vector space of all bracket cyclic cocycles A× A→ F.
Each bracket cyclic cocycle on A can be extended to a cyclic cocycle on J via (av | bv) = (-1)^|b|(a | b'), (Av | A)=(0). Hence C_br(A)⊆ C(J).
For arbitrary elements a,b∈ A, the inner derivation D(a,b) of J is zero. Hence C_br(A)∩ B(J)=(0) and therefore HC_br(A)=C_br(A)⊆ HC(J).
§.§ Mixed cocycles
We call a cyclic cocycle (x | y) on J mixed if (A | A)=(Av | Av)=(0).
Let ⟨ | ⟩ :A× A→ F be a bilinear mapping. The mapping ( | ) : J× J→ F, (a | bv) = ⟨ a | b⟩ , (bv | a) = -(-1)^|a|(|b|+1)⟨ a | b⟩ , (A | A)=(Av | Av)=(0) is a cyclic cocycle if and only if
(i) ⟨ [a,b] | c⟩ + (-1)^|a|(|b|+|c|)⟨ [b,c] | a⟩ + (-1)^|c|(|a|+|b|)⟨ [c,a] | b⟩ = 0,
(ii) ⟨ a | bc⟩ + (-1)^|a|(|b|+|c|)⟨ b | ca⟩ + (-1)^|c|(|a|+|b|)⟨ c | ab⟩ = ⟨ abc | 1⟩,
for arbitrary elements a,b,c∈ A_0̅∪ A_1̅.
The bilinear mapping (x | y) is super skew-symmetric by definition. We need to examine the cyclic cocycle identity for the following two triples:
* av,bv,cv;
* av,b,c,
where a,b,c∈ A_0̅∪ A_1̅.
Again to simplify computation, without loss of generality, we will assume that A=A_0̅.
Then
(av· b | c) + (bc | av) + (c· av | b)
= -(c | abv) + (bc | av) - (b | cav)
= - ⟨ c | ab⟩ + ⟨ bc | a⟩ - ⟨ b | ca⟩ .
Furthermore,
(av· bv | cv) + (bv· cv | av) + (cv· av | bv)
= ([a,b] | cv) + ([b,c] | av) + ([c,a] | bv)
= ⟨ [a,b] | c⟩ + ⟨ [b,c] | a⟩ + ⟨ [c,a] | b⟩ .
So, the cocycle identity is satisfied for elements av, bv, cv, a,b,c ∈ A if and only if the identity
(i) is satisfied.
Suppose that (x | y) is a cyclic cocycle. Then
(av· b | c) + (bc | av) + (c· av | b) = 0.
Setting b = c = 1 in (<ref>), we get
⟨ 1 | a ⟩ = (1 | av) = 0, a∈ A.
Applying the cocycle identity to elements a,b,v, we get (a | bv) + (b | va) + (v | ab) = 0, which implies
⟨ a | b ⟩ + ⟨ b | a ⟩ = ⟨ ab | 1 ⟩ .
Applying (<ref>) to the right hand side of (<ref>) we get
⟨ ab | c ⟩ - ⟨ abc | 1 ⟩ + ⟨ bc | a ⟩ + ⟨ ca | b ⟩ - ⟨ abc | 1 ⟩ =0.
That is,
⟨ ab | c ⟩ + ⟨ bc | a ⟩ + ⟨ ca | b ⟩ = 2 ⟨ abc | 1 ⟩ .
Again applying (<ref>) to each summand on the identity above, we get
- ⟨ c | ab ⟩ - ⟨ a | bc ⟩ - ⟨ b | ca ⟩ + 3 ⟨ abc | 1 ⟩ = 2 ⟨ abc | 1 ⟩ ,
which implies (ii).
Now suppose that ⟨ x | y ⟩ satisfies the identities (i) and (ii). As mentioned above, the identity (i) implies that the cocycle identity is satisfied for elements av, bv, cv, a,b,c ∈ A. Substituting b=c=1 in (ii), we get (<ref>). Substituting c=1 in (ii), we get (<ref>).
Now the identity (ii) together with (<ref>), (<ref>) imply that the last line of (<ref>) is equal to 0, hence the cyclic cocycle identity holds for elements av,b,c. This completes the proof that (x | y) is a cyclic cocycle.
Question. Which mixed cocycles are coboundaries?
Consider the linear mapping
μ : A⊗_F A → A, a ⊗ b↦ [a,b]+(-1)^|a|· |b|b'a.
Denote W=Kerμ.
* ∑ _i D(a_i,b_iv)=0 if and only if ∑ _i a_i⊗ b_i∈ W;
* let ⟨ | ⟩ : A× A→ F be a bilinear mapping satisfying (i) and (ii) in Lemma 11. We can identify it with ⟨ | ⟩ : A⊗ A→ F, the unique linear map that it defines. The cyclic cocycle (a | bv) = ⟨ a | b ⟩ , (A | A)=(Av | Av)=(0); a,b∈ A is a coboundary if and only if ⟨ , ⟩ vanishes on W.
We have AD(A,Av)=(0). Hence ∑ _i D(a_i,b_iv)=0 if and only if v∑ _i D(a_i,b_iv)=0.
Let a,b∈ A_0̅∪ A_1̅. Then
vD(a,bv) = (va)(bv) - (-1)^|a|(|b|+1)(v(bv))a
= (-1)^|a| + |b| [a,b] + (-1)^|a|· |b| + |a| + |b| b'a
= (-1)^|a| + |b| ([a,b] + (-1)^|a|· |b| b'a) .
This implies assertion 1 of the lemma. Assertion 2 follows directly from assertion 1, arguing as in the proof of Lemma 6.
Let us summarize the obtained results.
§.§ Section Summary
Let Z_p be the Poisson center of an associative commutative superalgebra A with a Jordan bracket, and let J = Kan(A). For an arbitrary linear functional λ : Z_p→ F, consider an extension λ : A→ F. By Lemma 9, the cohomology class (x | y)_λ + B(J) does not depend on the choice of an extension λ.
The vector space HC_Z(J)={ (x | y)_λ + B(J) | λ∈ Z_p^*} can be identified with the dual space Z_p^*.
Recall that C_br(A) denotes the vector space of bracket cyclic cocycle on the algebra A. An arbitrary bracket cyclic cocycle (x | y) extends to a cyclic cocycle on J via (av | bv) = (-1)^|a|· |b|(b' | a), (Av | A)=(0). This defines an embedding of C_br(A) into the vector space C(J) and, furthermore, into the cyclic homology space HC(J).
Let C_M(J) denote the vector space of mixed cyclic cocycles on J,
HC_M(J)=(C_M(J)+B(J))/B(J).
HC(J)=HC_Z(J)⊕ C_br(A)⊕ HC_M(J).
§ POISSON BRACKETS AND HAMILTONIAN SUPERALGEBRAS
In this section we compute cyclic homology of Kantor doubles K(A), where A=(A,[ , ]) is a Poisson superalgebra.
Recall that H_n=(F[p_1,… ,p_n,q_1,… ,q_n],[p_i,q_j]=δ _ij,[p_i,p_j]=[q_i,q_j]=0),n≥ 1 is a family of Poisson algebras.
Let B be an arbitrary Poisson superalgebra and let A=H_1⊗ B.
HC(K(A))= Z_p(B)^*, where Z_p(B) denotes the Poisson center of B.
Consider the Grassmann superalgebra G(3) with the standard Poisson bracket and the tensor product A⊗ G(3) of Poisson superalgebras.
Abusing notation we denote the Poisson bracket on A⊗ G(3) as [ , ].
Consider the Lie superalgebra L=(A⊗ G(3), [ , ]).
The superalgebra [L,L] is isomorphic to the (universal) Tits-Kantor-Koecher construction of the Jordan superalgebra K(A).
Theorems 1, 2, 3 yield a description of the vector space H^2([L̅,L̅])=H^2(TKK(K(A))), where L̅ is the quotient of the superalgebra L modulo its center.
In particular we compute second cohomologies of Hamiltonian superalgebras.
Let A=H_n⊗ G(m),n≥ 1,m≥ 3. Let H(n,m) denote the Lie superalgebra (A,[ , ]).
dim_F H^2(H(n,m)) = 1.
Throughout this section we assume that A=H_1⊗ _F B, where B is a Poisson superalgebra and H_1=(F[p,q],[p,q]=1).
C_br(A)=(0).
If (a | b) is a bracket cyclic cocycle on A, then the identity (<ref>) implies that ([A,A] | A)=(0).
It remains to show that [A,A]=A.
Indeed, the equality p^iq^j=1/j+1[p,p^iq^j+1] implies that [H_1,H_1]=H_1. Consider an arbitrary tensor h⊗ b, h∈ H_1, b∈ B. Suppose that h=∑ _i [h_i',h_i”]. Then h⊗ b=∑ _i [h_i'⊗ b,h_i”⊗ 1].
Z_p(H_1⊗ B)=Z_p(B).
The inclusion 1⊗ Z_p(B) ⊆ Z_p(H_1⊗ B) is straightforward.
Suppose that an element c=∑ _i h_i⊗ b_i lies in Z_p(H_1⊗ B);h_i∈ H_1, b_i∈ B, the elements { b_i } _i are linearly independent. For an arbitrary element h∈ H_1 we have
[h⊗ 1,∑ _i h_i⊗ b_i]=∑ _i [h,h_i]⊗ b_i,
which implies that all elements [h,h_i] are equal to zero, h_i∈ Z_p(H_1)=F· 1. Hence c=1⊗ b,b∈ B. Again for an arbitrary element b'∈ B we have [1⊗ b',1⊗ b]=1⊗ [b',b]=0. Hence b∈ Z_p(B).
Now our aim is
HC_M(K(A))=(0).
Proof of this proposition requires several lemmas.
A mixed cyclic cocycle ( | ) on K(A) is a coboundary if and only if ∑ _i [a_i,b_i]=0; a_i,b_i∈ A_0̅∪ A_1̅ implies ∑ _i ⟨ a_i | b_i ⟩ = 0. Here ⟨ | ⟩ is the bilinear map defining ( | ).
This lemma immediately follows from Lemma 12 (2).
Let (x | y) be a cyclic cocycle on K(A). Then (A | v)=(0).
We showed in the proof of Lemma 13 that A=[A,A]=(Av)(Av). By the cyclic cocycle identity (av· bv | v) + (-1)^|b|(|a|+1)(bv· v | av) + (-1)^|a|+|b|(v· av | bv)=0. Since [ , ] is a Poisson bracket, it follows that bv· v=[b,1]=-b'=0,v· av=(-1)^|a|[1,a]=0.
Let (x | y) be a mixed cocycle on K(A). As above let ⟨ a | b ⟩ = (a | bv); a,b∈ A. By Lemma 16 we have ⟨ A | 1 ⟩ = (0).
Now the identity (<ref>) and Lemma 11 imply that
⟨ a | b ⟩ = - (-1)^|a|· |b|⟨ b | a ⟩ ,
⟨ a | bc ⟩ + (-1)^|a|(|b|+|c|)⟨ b | ca ⟩ + (-1)^|c|(|a|+|b|)⟨ c | ab ⟩ = 0,
⟨ [a,b] | c ⟩ + (-1)^|a|(|b|+|c|)⟨ [b,c] | a ⟩ + (-1)^|c|(|a|+|b|)⟨ [c,a] | b ⟩ = 0.
The identity (<ref>) repeats the first identity of Lemma 11 for convenience of a reader.
Consider the tensor square A⊗ A and the subspace S⊆ A⊗ A spanned by elements a⊗ b + (-1)^|a|· |b|b⊗ a,ab⊗ c + (-1)^|b|· |c|ac⊗ b + (-1)^|a|(|b|+|c|)bc⊗ a, [a,b]⊗ c + (-1)^|b|· |c|[a,c]⊗ b + (-1)^|a|(|b|+|c|)[b,c]⊗ a; a,b,c∈ A_0̅∪ A_1̅.
A⊗ A = p⊗ A + S.
We will prove the lemma in several steps.
* We notice that A=[p,A]. Indeed, p^iq^jb=1/j+1[p,p^iq^j+1b] for an arbitrary element b∈ B. Similarly, A=[q,A].
* For an arbitrary element a∈ A, 1⊗ a∈ S.
* For an arbitrary element a∈ A we have [p,q]⊗ a ≡ p⊗ [q,a]-q⊗ [p,a] mod S, which implies that
p⊗ [q,a] - q⊗ [p,a] ∈ S.
In view of Step 1 we conclude that p⊗ A ≡ q⊗ A mod S.
* Consider the linear operator P: A→ A, a↦ a+q[p,a].
This operator is a bijection. Indeed, if a=p^iq^jb,b∈ B, then P(a)=(1+j)p^iq^jb.
* Let a∈ A,b∈ B, then
b⊗ a = [p,qb]⊗ a ≡ p⊗ [qb,a] - (-1)^|a|· |b|qb⊗ [p,a] mod S,
qb⊗ [p,a] ≡ q⊗ b[p,a] + b⊗ q[p,a] mod S,
b[p,a]=[p,ba],
q⊗ b[p,a] = q⊗ [p,ba] ≡ p⊗ [q,ba] mod S.
Hence,
b⊗ a ≡ p⊗ [qb,a] - (p⊗ [q,ba] + b⊗ q[p,a])
≡ p⊗ ([qb,a] - [q,ba]) - b⊗ q[p,a] mod S.
Now,
b⊗ (a+q[p,a]) ≡ p⊗ ([qb,a]-q[b,a]) mod S,
[qb,a]-q[b,a]=(-1)^|a|· |b|[q,a]b=b[q,a],
b⊗ P(a) ≡ p⊗ b[q,a] mod S.
Finally, b⊗ a ≡ p⊗ b[q,P^-1(a)] mod S.
* Obviously, A⊗ A= p⊗ A + q⊗ A + B⊗ A + S = p⊗ A + S by Step 3 and Step 5.
This completes the proof of the lemma.
Let (x | y) be a mixed cocycle on K(A). Let us check the condition of Lemma 15, that is, ∑ _i [a_i,b_i]=0; a_i,b_i∈ A_0̅∪ A_1̅ implies ∑ _i ⟨ a_i | b_i ⟩ = 0.
By Lemma 17, there exists an element c∈ A such that ∑ _i a_i⊗ b_i = p⊗ c S. If ∑ _i x_i⊗ y_i ∈ S, then ∑ _i [x_i, y_i] = 0. By (<ref>), (<ref>) and (<ref>), we also have ∑ _i ⟨ x_i | y_i ⟩ = 0. Hence we need to show that [p,c]=0 implies ⟨ p | c ⟩ = 0.
The equality [p,c]=0 implies c=∑ _i≥ 0 p^ib_i,b_i∈ B.
Let us show that ⟨ p | p^ib ⟩ = 0. We have p=1/2[p^2,q]. Hence, ⟨ p | p^ib ⟩ = 1/2⟨ [p^2,q] | p^ib ⟩ = 1/2 ( ⟨ p^2 | [q,p^ib] ⟩ - ⟨ q | [p^2,p^ib] ⟩ ).
If i≥ 1, then [q,p^ib] = -ip^i-1b. If i=0, then [q,b]=0. We also have [p^2,p^ib]=0. Assuming i≥ 1, we have ⟨ p | p^ib ⟩ = -i/2⟨ p^2 | p^i-1b ⟩ = -i ⟨ p | p^ib ⟩ and (i+1)⟨ p | p^ib ⟩ = 0. Hence ⟨ p | p^ib ⟩ = 0.
§ BRACKETS OF VECTOR TYPE
Let A be an associative commutative superalgebra with an even derivation ' : A→ A.
Consider a Jordan bracket [a,b]=a'b-ab'.
We make the additional assumption that 1∈ A_0̅'A_0̅.
We will determine bracket cyclic cocycles and Poisson center of the Kantor double K(A,[ , ]).
Let (a | b) be a bracket cyclic cocycle on A. By (<ref>) we have
(a'b-ab' | c) = (a' | bc) - (-1)^|a|· |b|(b' | ac).
On the other hand, the cocycle identity implies
(a'b | c) = (a' | bc) + (-1)^|a|· |b|(b | a'c),
(ab' | c) = (a | b'c) + (-1)^|a|· |b|(b' | ac).
Substituting these equalities to the left hand side of (<ref>) we get (a' | bc) + (-1)^|a|· |b|(b | a'c) - (a | b'c) - (-1)^|a|· |b|(b' | ac) = (a' | bc) - (-1)^|a|· |b|(b' | ac), which implies
(a | b'c) = (-1)^|a|· |b|(b | a'c).
We claim that there exists a linear functional λ :A→ F such that (a | b) = λ (a'b) for arbitrary elements a,b∈ A.
To prove the claim we need to show that ∑ _i a_i'b_i=0;a_i,b_i∈ A implies ∑ _i (a_i | b_i) = 0.
By (<ref>) for an arbitrary element c∈ A, we have
(c | a_i'b_i) = (-1)^|a_i|· |c|(a_i | c'b_i).
Hence ∑ _i (-1)^|a_i|· |c|(a_i | c'b_i) = 0.
If ∑ _i a_i'b_i=0, then for an arbitrary element x∈ A, we have ∑ _i a_i'b_ix=0. Hence ∑ _i (-1)^|a_i|· |c|(a_i | c'b_ix) = ∑ _i (-1)^|a_i|· |c| + |b_i|· |x|(a_i | c'xb_i) = 0.
Since 1∈ A_0̅'A_0̅, there exist elements c_j,x_j∈ A_0̅ such that ∑ _j c_j'x_j=1. Now ∑ _i,j (a_i | c_j'x_jb_i) = ∑ _i (a_i | b_i) = 0.
We have proved that (a | b) = λ (a'b); a,b∈ A for some linear functional λ∈ A^*.
For an arbitrary element a∈ A, (a | 1) = λ (a') = 0. Hence λ (A') = (0).
It is easy to see that if λ is a linear functional on A/A', then (a | b) = λ (a'b) is a cyclic cocycle on A. Hence C_br(A) can be identified with the dual space (A/A')^*.
If a lies in the Poisson center of A, then a'=0 and for an arbitrary element b∈ A, we have [b,a]+(ba)'=2b'a=0. Again from 1∈ A_0̅'A_0̅, it follows that a=0, so Z_p(A)=(0).
Remark. We don't have a description of the space of mixed cyclic cocycles HC_M(K(A)).
§ SUPERALGEBRAS A⊗ G(N),N≥ 1
Let A be an associative commutative superalgebra with a Jordan bracket. Let G(n)= ⟨ 1,ξ _1,… ,ξ _n | ξ _i ξ _j + ξ _j ξ _i = 0 ⟩ be the Grassmann algebra with a standard Poisson bracket. We have already mentioned above that both brackets on A and G(n) extend to a Jordan bracket on A⊗ G(n) via
[A, ξ _i]=(0); ξ _i'=0, 1≤ i≤ n.
In this section we assume that the bracket on A is of vector type [a,b]=a'b-ab', and 1∈ A_0̅'A_0̅.
Let us start with n=1.
Let (x | y) be a bracket cyclic cocycle on A⊗ G(1), G(1)=F· 1+Fξ, ξ ^2=0, [ξ ,ξ ]=1.
As shown in the previous section, there exists a linear functional λ∈ (A/A')^* such that (a | b) = λ (a'b); a,b∈ A. Choose a∈ A, x∈ A∪ Aξ. Then a=[aξ , ξ ]. Hence
(a | x) = ([aξ , ξ ] | x) = (a'ξ | ξ x) = (-1)^|a|(ξ | a'ξ x).
This implies that (A | Aξ )=(0).
Now let x=b∈ A. Then (a | b) = (-1)^|a|(ξ | a'ξb ). Since A'A=A, it follows that
(ξ | aξ )=(-1)^|a|λ (a), (ξ a | ξ b)=(-1)^|a|(ξ | ξ ab)=(-1)^|a|λ (ab),
for arbitrary elements a,b∈ A.
Again we conclude that C_br(A[ξ ])=(A/A')^*.
Let us show that Z_p(A[ξ ])=Z_p (A). Indeed, the inclusions Z_p (A)⊆ Z_p(A[ξ ]) and Z_p(A[ξ ])∩ A⊆ Z_p (A) are obvious. Let a∈ A such that aξ∈ Z_p(A[ξ ]). Then a=[aξ ,ξ ]=0. This proves the claim.
If [a,b]=a'b-ab' is a vector type bracket and A'A=A, then Z_p (A)=(0).
We have nothing to say about mixed cyclic cocycles of A.
Now let n≥ 2.
For an arbitrary associative commutative superalgebra A with a Jordan bracket, we have C_br(A⊗ G(n))=(0) for n≥ 2.
Let (x | y) be a bracket cyclic cocycle on A⊗ G(n). We have
(ξ _1 | x )=([ξ _1ξ _2 ,ξ _2 ] | x )=((ξ _1ξ _2)' | ξ _2 x )-(ξ _2' | ξ _1ξ _2 x )=0.
We have shown that (ξ _i | A⊗ G(n) )= 0, 1≤ i≤ n.
For an arbitrary element a∈ A, we have a=[aξ _1 ,ξ _1 ]. Hence,
(a | x) = ([aξ _1 ,ξ _1 ] | x ) = (a'ξ _1 | ξ _1 x) + (ξ _1' | aξ _1 x) = (a'ξ _1 | ξ _1 x).
Furthermore,
(a'ξ _1 | ξ _1 x) = (a' | ξ _1·ξ _1 x) + (ξ _1 | a'ξ _1 x) = 0+0=0.
We have shown that (A | A⊗ G(n) )= (0).
As in the case n=1, we get Z_p(A⊗ G(n))=Z_p(A).
Let L be a perfect Lie superalgebra. Suppose that there is h∈ L_0̅ such that L=⊕ _i∈ℤL_i, [h,a_i]=ia_i, ∀ a_i∈ L_i. Let (x | y) be a 2-cocycle on L. Then (L_i | L_j)=(0) whenever i+j≠ 0.
Let L̂ be a universal central extension of L. It is known that L̂=L+Z, where
Z={∑ _i a_i⊗ b_i | ∑ _i [a_i,b_i]=0 } /I, and I is the linear span of the set { a⊗ b + (-1)^|a|· |b|b⊗ a, [a,b]⊗ c + (-1)^|a|(|b|+|c|)[b,c]⊗ a + (-1)^|c|(|a|+|b|)[c,a]⊗ b | a,b,c∈ L }.
The ℤ-grading L=⊕ _i∈ℤL_i extends to Z and to the superalgebra L̂.
Let us show that Z_n=(0) for n≠ 0. Indeed, let (x | y) be a 2-cocycle of L and let z= ∑ _i a_i⊗ b_i, ∑ _i [a_i,b_i]=0, deg (a_i)+ deg (b_i)=n.
Since L is perfect, it follows that h=∑ _j [e_j,f_j];e_j,f_j∈ L. Now,
∑ _i ([h,a_i ] | b_i) + ∑ _i (a_i | [h,b_i ]) =
∑ _j ([e_j,∑ _i [a_i,b_i]] | f_j) + ∑ _j (e_j | [f_j,∑ _i [a_i,b_i]]]) = 0.
On the other hand, ∑ _i ([h,a_i ] | b_i) + ∑ _i (a_i | [h,b_i ]) = n∑ _i (a_i | b_i).
This completes the proof of the lemma.
The following result sheds some light on mixed cyclic cocycles of A⊗ G(n).
Mixed cyclic cocycles on J=K(A⊗ G(n)) correspond to 2-cocycles (x | y) on Lie superalgebras (A⊗ G(n+3),[ , ]) such that (aξ _i_1⋯ξ _i_p | bξ _j_1⋯ξ _j_q)≠ 0, { i_1,… , i_p}≠{ j_1,… , j_q}.
For a subset π⊆{ 1,… ,n },π ={ i_1<… <i_p }, denote ξ _π = ξ _i_1⋯ξ _i_p.
Let (x | y) be a 2-cocycle on a Lie superalgebra (A⊗ G(n),[ , ]), where A is a contact superalgebra, [ , ] is a contact bracket on A⊗ G(n) that extends the contact bracket on A and the standard Poisson bracket on G(n).
Let (aξ _π | bξ _τ)≠ 0;a,b∈ A;π , τ⊆{ 1,… ,n } ,π≠τ. Then π∪̇τ = { 1,… ,n }.
Suppose that k∈τ∖π.
* Suppose at first that |π∩τ |≥ 2. Then π = π ' ∪̇π” is a disjoint union of two subsets, such that π '∩τ≠∅ and π”∩τ≠∅.
We have [aξ _π 'ξ _k , ξ _π”ξ _k]=± aξ _π. Hence
(aξ _π | bξ _τ) = ± ([aξ _π 'ξ _k , ξ _π”ξ _k] | bξ _τ) =
± (aξ _π 'ξ _k | [ξ _π”ξ _k , bξ _τ]) ± (ξ _π”ξ _k | [aξ _π 'ξ _k , bξ _τ]) = 0.
* Now suppose that π∩τ = { i } , π '=π∖{ i } , τ '=τ∖{ i,k }, ξ _π=±ξ _iξ _π ', ξ _τ = ±ξ _i ξ _k ξ _τ '. Let
ξ = 1/2(ξ _i + √(-1)ξ _k),η = 1/2(ξ _i - √(-1)ξ _k),[ξ ,ξ ]=[η ,η ]=0,[ξ ,η ]=1/2.
Then ξ _i = ξ + η , ξ _k = -√(-1)(ξ - η ), h= 2ξη ,[h,ξ ]=ξ ,[h,η ]=-η.
The element aξ _π is a linear combination of elements aξ _π 'ξ and aξ _π 'η. We also have bξ _τ = ± bξ _τ 'ξ _i ξ _k=± 2bξ _τ 'ξη. Now,
[h,aξ _π 'ξ ]=± aξ _π 'ξ, [h,aξ _π 'η ] = ± aξ _π 'η, [h,bξ _τ 'ξη ]=0.
Hence (aξ _π | bξ _τ)=0 by Lemma 19.
* Finally, suppose that π∩τ = ∅ but π∪̇τ≠{ 1,… ,n }. Let
i∈{ 1,… ,n }∖ (π∪̇τ ). Consider again the elements ξ = 1/2(ξ _i + √(-1)ξ _k),η = 1/2(ξ _i - √(-1)ξ _k), h= 2ξη. Then the element bξ _τ is a linear combination of elements bξ _τ 'ξ, bξ _τ 'η, [h,bξ _τ 'ξ ]=± bξ _τ 'ξ, [h,bξ _τ 'η ]=± bξ _τ 'η, [h,aξ _π] = 0. Again by Lemma 19, (aξ _π | bξ _τ)=0.
This completes the proof of the lemma.
§ REFERENCES

[15] S. Tan, TKK algebras and vertex operator representations, Journal of Algebra 211 (1) (1999) 298–342.
[1] B. N. Allison, G. Benkart, Y. Gao, Central extensions of Lie algebras graded by finite root systems, Mathematische Annalen 316 (2000) 499–527.
[19] J. Loday, Kähler differentials and coverings of complex Lie algebras extended over a commutative ring, J. Pure and Applied Alg. 34 (1984) 265–275.
[20] C. Kassel, J. Loday, Extensions centrales d'algèbres de Lie, Ann. Inst. Fourier, Grenoble 32 (1982) 119–142.
[5] V. G. Kac, J. W. van de Leur, On classification of superconformal algebras, in: Strings '88 (College Park, MD, 1988), World Sci. Publ., Teaneck, NJ, 1989, pp. 77–106.
[14] J. Schur, Über die Darstellung der endlichen Gruppen durch gebrochen lineare Substitutionen, J. Reine Angew. Math. 127 (1904) 20–50.
[3] H. Garland, The arithmetic theory of loop groups, Publications Mathématiques de l'Institut des Hautes Études Scientifiques 52 (1980) 5–136.
[13] E. Neher, An introduction to universal central extensions of Lie superalgebras, in: Groups, rings, Lie and Hopf algebras (St. John's, NF, 2001), Vol. 555 of Math. Appl., Kluwer Acad. Publ., Dordrecht, 2003, pp. 141–166.
[4] N. Jacobson, Structure and representations of Jordan algebras, Vol. XXXIX of American Mathematical Society Colloquium Publications, American Mathematical Society, Providence, RI, 1968.
[12] K. McCrimmon, A taste of Jordan algebras, Universitext, Springer-Verlag, New York, 2004.
[18] K. A. Zhevlakov, A. M. Slin'ko, I. P. Shestakov, A. I. Shirshov, Rings that are nearly associative, Vol. 104 of Pure and Applied Mathematics, Academic Press, Inc. [Harcourt Brace Jovanovich, Publishers], New York-London, 1982; translated from the Russian by Harry F. Smith.
[10] C. Martínez, E. Zelmanov, Jordan superalgebras and their representations, in: Algebras, representations and applications, Vol. 483 of Contemp. Math., Amer. Math. Soc., Providence, RI, 2009, pp. 179–194.
[17] J. Tits, Algèbres alternatives, algèbres de Jordan et algèbres de Lie exceptionnelles. I. Construction, Indag. Math. 28 (1966) 223–237; Nederl. Akad. Wetensch. Proc. Ser. A 69.
[16] J. Tits, Une classe d'algèbres de Lie en relation avec les algèbres de Jordan, Indag. Math. 24 (1962) 530–535; Nederl. Akad. Wetensch. Proc. Ser. A 65.
[6] I. L. Kantor, Certain generalizations of Jordan algebras, Trudy Sem. Vektor. Tenzor. Anal. 16 (1972) 407–499.
[9] M. Koecher, Imbedding of Jordan algebras into Lie algebras. I, American Journal of Mathematics 89 (1967) 787–816.
[2] N. Cantarini, V. G. Kac, Classification of linearly compact simple Jordan and generalized Poisson superalgebras, Journal of Algebra 313 (1) (2007) 100–124; Special Issue in Honor of Ernest Vinberg.
[7] I. L. Kantor, Connection between Poisson brackets and Jordan and Lie superalgebras, in: Lie theory, differential equations and representation theory (Montreal, PQ, 1989), Univ. Montréal, Montreal, QC, 1990, pp. 213–225.
[8] D. King, K. McCrimmon, The Kantor construction of Jordan superalgebras, Communications in Algebra 20 (1992) 109–126.
[11] C. Martínez, E. Zelmanov, Brackets, superalgebras and spectral gap, São Paulo J. Math. Sci. 13 (1) (2019) 112–132.
|
http://arxiv.org/abs/2409.03753v1 | 20240905175915 | WildVis: Open Source Visualizer for Million-Scale Chat Logs in the Wild | [
"Yuntian Deng",
"Wenting Zhao",
"Jack Hessel",
"Xiang Ren",
"Claire Cardie",
"Yejin Choi"
] | cs.CL | [
"cs.CL",
"cs.AI",
"cs.HC",
"cs.IR",
"cs.LG"
] |
*Work done in large part while at the Allen Institute for Artificial Intelligence.
§ ABSTRACT
The increasing availability of real-world conversation data offers exciting opportunities for researchers to study user-chatbot interactions. However, the sheer volume of this data makes manually examining individual conversations impractical. To overcome this challenge, we introduce WildVis, an interactive tool that enables fast, versatile, and large-scale conversation analysis.
WildVis provides search and visualization capabilities in both the text and embedding spaces based on a list of criteria. To manage million-scale datasets, we implemented optimizations including search index construction, embedding precomputation and compression, and caching to ensure responsive user interactions within seconds. We demonstrate WildVis's utility through three case studies: facilitating chatbot misuse research, visualizing and comparing topic distributions across datasets, and characterizing user-specific conversation patterns.
WildVis is open-source and designed to be extendable, supporting additional datasets and customized search and visualization functionalities.
§ INTRODUCTION
While hundreds of millions of users interact with chatbots like ChatGPT <cit.>, the conversation logs remain largely opaque for open research, limiting our understanding of user behavior and system performance.
Recently, initiatives such as WildChat <cit.> and LMSYS-Chat-1M <cit.> have released millions of real-world user-chatbot interactions, offering rich opportunities to study interaction dynamics.
However, the volume and complexity of these datasets pose significant challenges for effective analysis.
To help researchers uncover patterns and anomalies within these vast chat datasets, we introduce WildVis, an interactive tool for exploring million-scale chat logs. WildVis enables researchers to find conversations based on specific criteria, understand topic distributions, and explore semantically similar conversations, all while maintaining efficiency. <Ref> illustrates an example search using WildVis, applying criteria such as the keyword “Election,” conversations with more than two turns, and chats from users in Florida, among others.
WildVis features two main components: the first is an exact, compositional filter-based retrieval system, which allows users to refine their search using ten predefined filters such as keywords, geographical location, IP address, and more. The second component is an embedding-based visualization module, which represents conversations as dots on a 2D plane, with similar conversations positioned closer together. Both components are designed to scale to millions of conversations. A preliminary version of the tool, which supported filter-based retrieval for one million WildChat conversations, was accessed over 18,000 times by 962 unique IPs in July and August 2024 alone. The latest release, described in this paper, extends support to both components for WildChat and LMSYS-Chat-1M.
In this paper, we present the design and implementation of WildVis, discussing the strategies employed to scale to million-scale datasets while maintaining latency within seconds. We also showcase several use cases: facilitating chatbot misuse research <cit.>, visualizing and comparing topic distributions between WildChat and LMSYS-Chat-1M, and characterizing user-specific conversation patterns. For example, WildVis reveals distinct topic clusters such as Midjourney prompt generation in WildChat and chemistry-related conversations in LMSYS-Chat-1M. Additionally, we observe that WildChat exhibits a generally more creative writing style compared to LMSYS-Chat-1M. As an open-source project, WildVis is available at https://github.com/da03/WildVisualizer under an MIT license, and a working demo can be accessed at https://wildvisualizer.com.
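While the paper leaves the backend details to later sections, the general recipe behind such latency targets (precompute an index offline, cache repeated queries online) can be sketched as follows; the toy corpus and all names are illustrative and are not WildVis's actual API.

from collections import defaultdict
from functools import lru_cache

# Toy corpus standing in for million-scale chat logs.
conversations = {
    0: "help me write a python script",
    1: "election news in florida",
    2: "python homework question",
}

# Offline: build an inverted keyword index once, so keyword search
# touches only matching conversation ids instead of scanning everything.
inverted_index = defaultdict(set)
for conv_id, text in conversations.items():
    for token in set(text.lower().split()):
        inverted_index[token].add(conv_id)

@lru_cache(maxsize=10_000)  # online: cache results of repeated queries
def search(keyword: str) -> frozenset:
    return frozenset(inverted_index.get(keyword.lower(), set()))

assert search("python") == frozenset({0, 2})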
§ USER INTERFACE
WildVis consists of two primary pages—a filter-based search page and an embedding visualization page—along with a conversation details page. These pages are designed to provide users with both high-level overviews and detailed insights into individual conversations.
§.§ Filter-Based Search Page
The filter-based search page (<Ref>) enables users to filter the dataset based on a list of criteria. Users can input keywords to retrieve relevant conversations or narrow down results using specific criteria; for example, a search for English, non-toxic conversations containing “homework” is available at <https://wildvisualizer.com/?contains=homework&toxic=false&language=English>. In total, ten predefined filters are available, including:
* Hashed IP Address: Filter conversations by hashed IP addresses to analyze interactions from the same user.[IP addresses are hashed to protect user privacy while still allowing the analysis of interactions associated with the same user.]
* Geographical Data: Filter by inferred state and country to gain insights into regional variations in conversational patterns.
* Language: Restrict results to conversations in specific languages.
* Toxicity: Include or exclude conversations flagged as toxic.
* Redaction Status: Include or exclude conversations with redacted personally identifiable information (PII).
* Minimum Number of Turns: Focus on conversations with a specified minimum number of turns.
* Model Type: Select conversations by the underlying language model used, such as GPT-3.5 or GPT-4.
The search results are displayed in a paginated table format, ensuring easy navigation through large datasets. Active filters are prominently displayed above the results and can be removed by clicking the “×” icon next to each filter.
Each result entry displays key metadata, including the conversation ID, timestamp, geographic location, hashed IP address, and model type. Users can interact with these results in multiple ways. Clicking on a conversation ID leads to a detailed view of that conversation. Additionally, all metadata fields, such as the hashed IP address, are clickable, enabling users to quickly search based on specific attributes. For example, clicking on a hashed IP address brings up a list of all conversations associated with that IP, facilitating user-specific analyses.
§.§ Embedding Visualization Page
In addition to traditional search capabilities, WildVis offers an embedding visualization page (<Ref>), which allows users to explore conversations based on their semantic similarity. Conversations are represented as dots on a 2D plane, with similar conversations placed closer together.
Basic Visualization Each conversation appears as a dot, with different datasets distinguished by color. Hovering over a dot reveals a preview of the conversation, and clicking on it navigates to the conversation details page.[On mobile devices, tapping a dot displays a preview with options to view the full conversation or close the preview. See <Ref> in <Ref> for a screenshot.] Users can zoom in, zoom out, and drag the view to explore different regions of the visualization. This spatial arrangement enables users to explore clusters of related conversations and identify structures within the data.
Filter-Based Highlighting Similar to the filter-based search page, users can apply filters to highlight specific conversations on the 2D map, with matching conversations marked in red. This feature helps users locate conversations of interest, such as identifying topics associated with a particular user. For example, highlighting all English conversations containing “python” is available at <https://wildvisualizer.com/embeddings/english?contains=python>.
Conversation Embedding To represent each conversation as a point in 2D space, we embed the first user turn of each conversation using OpenAI's text-embedding-3-small model.[We opted to embed only the first user turn, as preliminary experiments showed that embedding the entire conversation led to less intuitive clustering.] We then trained a parametric UMAP model <cit.> to project these embeddings into 2D space.[We chose parametric UMAP over t-SNE <cit.> to enable online dimensionality reduction, which will be discussed in <Ref>.] Since initial experiments showed that training a single UMAP model on all embeddings resulted in some clusters driven by language differences (see <Ref> in <Ref>), in order to create more semantically meaningful clusters, we also trained a separate parametric UMAP model for each language. Users can easily switch between different languages and their corresponding UMAP projections (<Ref> in <Ref>).
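To make this pipeline concrete, the sketch below shows how first-turn embedding and parametric UMAP projection could fit together. It assumes the `openai` Python client and the `umap-learn` package; the data layout and function names are illustrative rather than taken from the WildVis codebase.

```python
# Illustrative sketch of the embedding pipeline described above (not the
# actual WildVis implementation). Requires `openai` and `umap-learn`.
import numpy as np
from openai import OpenAI
from umap.parametric_umap import ParametricUMAP

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed_first_turns(conversations):
    """Embed only the first user turn of each conversation."""
    texts = [conv["turns"][0]["content"] for conv in conversations]
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

# Hypothetical data layout: one dict per conversation.
english_conversations = [
    {"id": "c1", "turns": [{"role": "user", "content": "Fix my Python loop"}]},
    {"id": "c2", "turns": [{"role": "user", "content": "Draft a polite email"}]},
]

# Train one projector per language so clusters reflect topics, not language.
projector = ParametricUMAP(n_components=2)
coords_2d = projector.fit_transform(embed_first_turns(english_conversations))

# Unlike t-SNE, the learned projector can later place unseen conversations:
# new_xy = projector.transform(embed_first_turns(new_conversations))
```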
The combination of embedding visualization, filtering, highlighting, and interactive previews enables users to navigate vast amounts of conversation data, uncovering insights and connections that might otherwise remain hidden. For example, users can easily identify outliers and clusters.
§.§ Conversation Details Page
The conversation details page (<Ref> in <Ref>) provides a detailed view of individual conversations. This page displays all the turns between the user and the chatbot, along with associated metadata. Similar to the filter-based search page, all metadata fields are clickable, allowing users to apply filters based on their values. However, if users arrive at this page by clicking a dot on the embedding visualization page, the filtering will be applied within the embedding visualization context. A toggle switch on the conversation details page allows users to control which page (filter-based search or embedding visualization) clicking on metadata fields will direct them to.
§ SYSTEM IMPLEMENTATION
WildVis is designed to efficiently process large-scale conversational datasets; this section describes its architecture and the optimizations that keep interactions responsive.
§.§ System Architecture
WildVis operates on a client-server architecture, where the server handles data processing, search, and conversation embedding, while the client provides an interface for data exploration. The high-level system architecture is illustrated in <Ref>.
Users interact with the frontend web interface, which communicates their queries to the backend server. The backend server is built using Flask[<https://flask.palletsprojects.com/>], which processes these queries and constructs search requests for an Elasticsearch[<https://www.elastic.co/elasticsearch>] engine. Elasticsearch, known for its scalable search capabilities, retrieves the relevant conversations, which are then sent back to the frontend for rendering. The frontend is developed using HTML, CSS, and JavaScript[The frontend is built on top of MiniConf <cit.>.], with Deck.gl[<https://deck.gl/>] used for rendering large-scale, interactive embedding visualizations.
§.§ Scalability and Optimization
To manage the large volume of data and ensure smooth user interaction, WildVis employs several optimization strategies.
Search For search functionalities, an index is built for each dataset with all metadata using Elasticsearch, allowing the backend to efficiently retrieve relevant conversations. To reduce the load during queries with a large number of matches, we employ two strategies: pagination, which retrieves results one page at a time with up to 30 conversations per page, and limiting the number of retrieved matches to 10,000 conversations per search.
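As a sketch of how such an index and a paginated query could look with the Python Elasticsearch client (the index name "wildchat" and the field names are our own assumptions, not necessarily those used by WildVis):

```python
# Hedged sketch of index construction and paginated retrieval.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def index_conversation(conv):
    # One document per conversation, with metadata stored alongside the text.
    es.index(index="wildchat", id=conv["id"], document=conv)

def search(filters, page=0, page_size=30):
    must = []
    if "contains" in filters:
        must.append({"match": {"text": filters["contains"]}})
    if "language" in filters:
        must.append({"term": {"language": filters["language"]}})
    if "min_turns" in filters:
        must.append({"range": {"num_turns": {"gte": filters["min_turns"]}}})
    # Pagination keeps each response small; Elasticsearch's default
    # max_result_window of 10,000 matches the retrieval cap described above.
    resp = es.search(index="wildchat", query={"bool": {"must": must}},
                     from_=page * page_size, size=page_size)
    return [hit["_source"] for hit in resp["hits"]["hits"]]
```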
Embedding Visualization - Frontend Rendering a large number of conversation embeddings is computationally intensive for a browser, especially on mobile devices, and may lead to visual clutter with overlapping dots. To mitigate these issues, we use Deck.gl to render large numbers of points efficiently. Additionally, we restrict the visualization to a subset of 1,500 conversations per dataset, ensuring smooth rendering and clear visualization.
Embedding Visualization - Backend On the backend, computing embeddings for a large number of conversations can introduce significant delays. To address this, we precompute the 2D coordinates for the subset of conversations selected for visualization. These precomputed results are then compressed using gzip and stored in a file, which is sent to the user upon their first visit to the embedding visualization page. The compressed file is approximately 1 MB in size and only needs to be downloaded once.
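A minimal sketch of this precomputation step, reusing the projector and the `embed_first_turns` helper from the earlier sketch; the file layout is hypothetical:

```python
# Precompute, compress, and store 2D coordinates served on first page load.
import gzip
import json

def precompute_coordinates(conversations, projector, path="coords.json.gz"):
    coords = projector.transform(embed_first_turns(conversations))
    payload = [{"id": c["id"], "x": float(x), "y": float(y)}
               for c, (x, y) in zip(conversations, coords)]
    with gzip.open(path, "wt", encoding="utf-8") as f:
        json.dump(payload, f)  # compressed once, downloaded once per visit
```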
Although we only display a subset of conversations, users may still need to search the entire dataset. To support this, we integrate the embedding visualization with the Elasticsearch engine. When a user submits a query, we first search within the displayed subset of conversations (with an index built for this subset). If sufficient matches are found within the subset (with a default threshold of 100, adjustable up to 1,000), we simply highlight them and do not extend the search further. However, if there are not enough matches, we extend the search to the entire dataset using Elasticsearch, retrieve the relevant conversations (up to the threshold number), and embed and project them into 2D coordinates before sending them to the frontend for visualization. To speed up this process, we cache all computed coordinates in an SQLite database. Due to the need to dynamically compute coordinates for conversations not found in the cache, we chose parametric UMAP over t-SNE, as t-SNE does not learn a projection function, whereas parametric UMAP allows for quick projection of new conversations into lower-dimensional space.
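The two-tier lookup and the SQLite coordinate cache could be sketched as follows; `search_subset` stands in for a query against the small per-subset index, the `search` and embedding helpers come from the sketches above, and the thresholds mirror the defaults described in the text:

```python
# Hedged sketch of subset-first search with an SQLite coordinate cache.
import sqlite3

db = sqlite3.connect("coords_cache.sqlite")
db.execute("CREATE TABLE IF NOT EXISTS coords (id TEXT PRIMARY KEY, x REAL, y REAL)")

def embedding_search(filters, threshold=100):
    hits = search_subset(filters)          # tier 1: the displayed subset
    if len(hits) >= threshold:
        return hits                        # enough matches: just highlight
    # Tier 2: fall back to the full dataset via Elasticsearch.
    extra = search(filters, page=0, page_size=threshold - len(hits))
    for conv in extra:
        row = db.execute("SELECT x, y FROM coords WHERE id = ?",
                         (conv["id"],)).fetchone()
        if row is None:
            # Parametric UMAP can project unseen conversations on the fly,
            # which is why it was preferred over t-SNE here.
            x, y = map(float, projector.transform(embed_first_turns([conv]))[0])
            db.execute("INSERT INTO coords VALUES (?, ?, ?)", (conv["id"], x, y))
            db.commit()
            row = (x, y)
        conv["x"], conv["y"] = row
    return hits + extra
```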
§.§ Performance Evaluation
To evaluate the efficiency of our system, we generated ten random keyword-based search queries and measured the execution time for each using our tool. On the filter-based search page, each query took an average of 0.47 seconds (±0.06s). In comparison, a naive for-loop-based approach using the HuggingFace Datasets library took 1148.89 seconds (±25.28s). For embedding visualization, the same measurement method was used, and each query took an average of 0.43 seconds (±0.01s).
§ USE CASES
This section presents several use cases that demonstrate the potential of WildVis. It is important to note that WildVis is designed primarily for exploratory data analysis rather than for final quantitative analysis.
Data WildVis currently supports two datasets: WildChat <cit.> and LMSYS-Chat-1M <cit.>. These datasets are integrated into the system by building Elasticsearch indices and precomputing the 2D coordinates of a randomly selected subset of conversations for embedding visualization.
§.§ Facilitating Chatbot Misuse Research
One application of WildVis is in facilitating studies on chatbot misuse. We show here that WildVis can both reproduce existing studies on chatbot misuse and surface new misuse cases.
Reproducing a Study on Journalist Misuse In this use case, we replicate the findings of <cit.>, which identified instances of journalists misusing the chatbot behind WildChat to paraphrase existing articles for their work. To locate a specific instance mentioned in the study, we use the following quote from the original research:
write a new article out of the information in this article, do not make it obvious you are taking information from them but in very sensitive information give them credit.
To find this conversation, we enter the phrase “you are taking information from them” in the “Contains” field on the search page and execute the search.[This case can be found at <https://wildvisualizer.com/?contains=you%20are%20taking%20information%20from%20them>.] The search returns a single result, matching the case mentioned in the original paper. By clicking on the hashed IP address, we can view all conversations from this user, identifying all 15 conversations analyzed in the original study <cit.>.
Reproducing a Study on User Self-Disclosure In another example, we replicate findings from a study on user self-disclosure behaviors by <cit.>. We search for a key phrase from that paper: “I have invited my father.”[This case can be found at <https://wildvisualizer.com/?contains=I%20have%20invited%20my%20father>.] Again, the search returns a single result, allowing us to find the conversation discussed in the study.
Discovering Additional Misuse Cases WildVis also facilitates the discovery of additional misuse cases. For instance, by searching for conversations that contain both personally identifiable information (PII) and the term “Visa Officer”[<https://wildvisualizer.com/?contains=Visa%20Officer&redacted=true>], we identified multiple entries from the same IP address. Further filtering based on this IP address revealed that the user appears to be affiliated with an immigration service firm and has disclosed sensitive client information.[<https://wildvisualizer.com/?hashed_ip=048b169ad0d18f2436572717f649bdeddac793967fb63ca6632a2f5dca14e4b8>]
§.§ Visualizing and Comparing Topics
A powerful feature of WildVis's embedding visualization page is its ability to visualize the overall distribution of topics, with conversations on similar topics positioned close to each other. In our previous discussion on embedding conversations, we illustrated language-specific clusters (<Ref> in <Ref>). As another example, for English data, this visualization reveals that the embedding space can be roughly divided into four regions: coding (by searching for “python”), writing assistance (by searching for “email”), story generation (by searching for “story”), and math question answering (by searching for “how many”), as illustrated in <Ref>. These examples can be found at <https://wildvisualizer.com/embeddings/english?contains=python>, <https://wildvisualizer.com/embeddings/english?contains=email>, <https://wildvisualizer.com/embeddings/english?contains=story>, and <https://wildvisualizer.com/embeddings/english?contains=how%20many>. This observation aligns with the findings in <cit.>.
This feature also allows for the comparison of topic distributions across different datasets. By inspecting regions with different colors, users can identify outliers, regions where one dataset is well-represented while the other is not, and areas where both datasets overlap. By hovering over these regions, patterns in the types of conversations can be observed. For example, we found that WildChat contains more conversations related to creative writing and an outlier cluster of Midjourney prompt generation (see <Ref>) compared to LMSYS-Chat-1M, while LMSYS-Chat-1M has outlier clusters of conversations about chemistry (see <Ref>).
§.§ Characterizing User-Specific Patterns
WildVis can also be used to visualize the topics of all conversations associated with a specific user on the embedding map. For example, <Ref> displays all conversations of a single user, revealing two main topic clusters: coding-related and email writing-related.
§ RELATED WORK
HuggingFace Dataset Viewer HuggingFace's Dataset Viewer <cit.>[<https://huggingface.co./docs/dataset-viewer/en/index>] provides basic search functionalities for datasets hosted on HuggingFace. However, it is designed for general dataset visualization and is not specifically tailored for conversational datasets. For example, while it offers useful statistics, navigating JSON-formatted conversations in a table format can be cumbersome and lacks the intuitive visualization needed for exploring conversational data.
Paper Visualization Tools The ACM Fellows' Citation Visualization tool[<https://mojtabaa4.github.io/acm-citations/>] embeds ACM Fellows based on their contribution statements. While its interface shares many similarities with the embedding visualization page of , it focuses on publication data rather than conversational data. Another relevant work is <cit.>, which visualizes papers in a similar manner, with an added conversational component that allows users to interact with the visualizations by asking questions. However, it is also primarily designed for academic papers rather than large-scale chat datasets.
Browser Tools for Chat Visualization Several browser-based tools exist for chat visualization, such as ShareGPT[<https://sharegpt.com>], which allows users to share their conversations. However, ShareGPT lacks support for searching large-scale chat datasets. Similarly, browser extensions like ShareLM[<https://chromewebstore.google.com/detail/nldoebkdaiidhceaphmipeclmlcbljmh>] enable users to upload and view their conversations, and ChatGPT History Search[<https://chatgpthistorysearch.com/en>] offers search functionality for a user's personal conversations. However, these tools are not designed for the exploration or analysis of large-scale chat datasets.
Large-scale Data Analysis Tools
Specialized tools like ConvoKit <cit.> provide a framework for analyzing dialogue data. In comparison, WildVis is designed to offer an intuitive interface for interactively exploring and visualizing chat datasets. This makes WildVis particularly useful for preliminary data exploration and hypothesis generation. Another notable tool, WIMBD <cit.>, supports the analysis and comparison of large text corpora, offering functionalities such as searching for documents containing specific queries and counting statistics like n-gram occurrences. Although WIMBD can handle larger datasets, WildVis offers additional features, such as embedding visualization, providing a more comprehensive toolkit for chat dataset exploration.
§ CONCLUSION
In this paper, we introduced WildVis, an interactive web-based tool designed for exploring large-scale conversational datasets. By combining powerful search functionalities with intuitive visualization capabilities, WildVis enables researchers to uncover patterns and gain insights from vast collections of user-chatbot interactions. The system's scalability optimizations ensure efficient handling of million-scale datasets, while maintaining a responsive and user-friendly experience.
WildVis fills a gap in existing tools by providing a specialized platform for visualizing and exploring chat datasets, which are inherently challenging to analyze using generic dataset viewers. Our use cases demonstrate the tool's potential to replicate and extend existing research on chatbot misuse and user self-disclosure, as well as to facilitate topic-based conversation exploration.
§ ACKNOWLEDGMENTS
This work is supported by ONR grant N00014-24-1-2207, NSF grant DMS-2134012, and an NSERC Discovery grant. We also thank Bing Yan, Pengyu Nie, and Jiawei Zhou for their valuable feedback.
§ EMBEDDING VISUALIZATION ON MOBILE DEVICES
<Ref> shows a screenshot of the embedding visualization page on mobile devices. Since mobile devices do not support hover interactions, we adapted the interface by using a tap gesture for displaying previews. Additionally, a button is provided to view the full conversation, replacing the click action used on desktop devices.
§ LANGUAGE-SPECIFIC CLUSTERS
When visualizing all conversations together on the embedding visualization page, clusters based on language emerge, such as the Spanish, Chinese, and Russian clusters in <Ref>.
§ SWITCHING EMBEDDING VISUALIZATION LANGUAGE
<Ref> shows a screenshot of switching the embedding visualization language. This will load a subset of conversations in the selected language only and utilize the corresponding trained parametric UMAP model to embed conversations.
§ CONVERSATION DETAILS PAGE
<Ref> shows a screenshot of the conversation details page, where all metadata fields are displayed alongside the dialogue content. Clicking any metadata field will filter the conversations based on the selected value. Depending on how the user navigated to this page—either from the filter-based search page or the embedding visualization page—the filtering action will redirect the user back to the respective page. A toggle switch at the top allows users to control this behavior.
§ VISUALIZING AND COMPARING TOPIC DISTRIBUTIONS
The embedding visualization highlights distinct outlier clusters in the dataset. One notable cluster in the WildChat dataset involves Midjourney prompt engineering, where users ask the chatbot to generate detailed prompts for use with Midjourney, as shown in <Ref> (this phenomenon was also noted by <cit.>). Another distinct outlier cluster comprises chemistry-related questions in LMSYS-Chat-1M, illustrated in <Ref>.[https://future-xy.github.io/Yao Fu discovered this phenomenon and shared it with the authors.]
§ CHARACTERIZING USER-SPECIFIC PATTERNS
WildVis can be used to visualize the topics of all conversations associated with a specific user on the embedding map. For example, <Ref> displays all conversations from a single user, revealing two main topic clusters: coding-related and email writing-related.
arXiv:2409.03673v1 [cond-mat.str-el], 5 September 2024
Crystal-field magnetostriction of the spin ice Ho_2Ti_2O_7 under ultrahigh magnetic fields
Nan Tang, Masaki Gen, Martin Rotter, Huiyuan Man, Kazuyuki Matsuhira, Akira Matsuo, Koichi Kindo, Akihiko Ikeda, Yasuhiro H. Matsuda, Philipp Gegenwart, Satoru Nakatsuji, and Yoshimitsu Kohama
[email protected]
Institute for Physics, University of Augsburg, Augsburg 86159, Germany
[email protected]
Institute for Solid State Physics, University of Tokyo, Kashiwa, Chiba 277-8581, Japan
*These authors contributed equally to this work.
McPhase Project, Dresden 01159, Germany
Geballe Laboratory for Advanced Materials, Stanford University, CA 94305, USA
Faculty of Engineering, Kyushu Institute of Technology, Kitakyushu, Fukuoka 804-8550, Japan
Institute for Solid State Physics, University of Tokyo, Kashiwa, Chiba 277-8581, Japan
Institute for Solid State Physics, University of Tokyo, Kashiwa, Chiba 277-8581, Japan
Institute for Solid State Physics, University of Tokyo, Kashiwa, Chiba 277-8581, Japan
Department of Engineering Science, University of Electro-Communications, Chofu, Tokyo 182-8585, Japan
Institute for Solid State Physics, University of Tokyo, Kashiwa, Chiba 277-8581, Japan
Institute for Physics, University of Augsburg, Augsburg 86159, Germany
Institute for Solid State Physics, University of Tokyo, Kashiwa, Chiba 277-8581, Japan
Department of Physics, University of Tokyo, Tokyo 113-0033, Japan
Institute for Quantum Matter and Department of Physics and Astronomy, Johns Hopkins University, Baltimore, Maryland 21218, USA
Trans-scale Quantum Science Institute, University of Tokyo, Tokyo 113-0033, Japan
Institute for Solid State Physics, University of Tokyo, Kashiwa, Chiba 277-8581, Japan
§ ABSTRACT
We present a comprehensive study of the magnetoelastic properties of the Ising pyrochlore oxide Ho_2Ti_2O_7, known as spin ice, by means of high-field magnetostriction measurements and numerical calculations.
When a magnetic field is applied along the crystallographic ⟨ 111 ⟩ axis, the longitudinal magnetostriction exhibits a broad maximum in the low-field regime around 30 T, followed by a dramatic lattice contraction due to crystal-field (CF) level crossing at B_ cf∼ 65 T.
The transverse magnetostriction exhibits a contrasting behavior, highlighting the anisotropic nature of the CF striction.
We identify distinct timescales of spin dynamics and CF-phonon dynamics by applying a magnetic field with different field-sweep rates.
Our mean-field calculations, based on a point-charge model, successfully reproduce the overall magnetostriction behavior, revealing the competition between the exchange striction and CF striction.
A signature of the CF level crossing is also observed through adiabatic magnetocaloric-effect measurements, consistent with our magnetostriction data.
Crystal-field magnetostriction of the spin ice Ho_2Ti_2O_7 under ultrahigh magnetic fields
Yoshimitsu Kohama
September 9, 2024
==============================================================================
§ INTRODUCTION
Magnetostriction refers to the distortion of a lattice when a material is subjected to an external magnetic field.
Certain magnetostrictive materials exhibit relative length change Δ L/L as large as several percent <cit.>.
These materials have been extensively studied for their practical applications in developing multifunctional devices, such as actuators, oscillators, and sensors <cit.>.
Beyond their applicative aspects, magnetostriction serves as a key physical quantity for probing phase transitions as well as underlying spin correlations.
For instance, magnetostriction measurements on the quantum spin ice revealed a sharp first-order transition at extremely low temperatures, shedding light on a liquid-gas metamagnetic transition and offering insights into spin-orbital liquids <cit.>.
Magnetostriction measurements on the Shastry–Sutherland magnet SrCu_2(BO_3)_2 uncovered a series of spin superstructure phases under ultrahigh magnetic fields <cit.>, which are characterized by devil's-staircase-like magnetization plateaus <cit.>.
There are several well-known microscopic origins of magnetostriction <cit.>.
One such mechanism involves the dependence of an ionic radius on its electronic configuration.
A field-induced change in the ionic state can result in significant volume magnetostriction, as exemplified by a spin-state transition in LaCoO_3 <cit.> and valence transitions in f electron-based valence-fluctuating materials <cit.>.
In contrast to these on-site mechanisms, exchange striction refers to the change in distance between two magnetic ions to minimize exchange coupling energy [Fig. <ref>(a)].
Here, Δ L/L reflects changes in local spin correlations ⟨ S_i· S_j⟩ between neighboring sites i and j, and consequently correlates with the magnetization M <cit.>.
The crystal field (CF) also influences the arrangement of surrounding anions and the wave function of the magnetic site to minimize the Coulomb energy.
Applying a magnetic field alters the wave function through CF level hybridization, potentially causing anisotropic lattice deformation, known as CF striction [Fig. <ref>(b)].
Although magnetostriction is universally observed in all magnetic materials, its theoretical description is far from straightforward and is often simplified by phenomenological approaches <cit.> or effective magnetoelastic models <cit.>.
The target compound in this study is Ho_2Ti_2O_7, one of the most extensively studied 4f Ising pyrochlore magnets, known as a classical spin ice <cit.>.
The strong Ising anisotropy towards the ⟨ 111 ⟩ axes arises from a combination of dominant nearest-neighbor ferromagnetic interactions and the CF effect.
The strong geometrical frustration results in a highly degenerate ground state, characterized by short-range spin correlations where two spins point inward and two outward within each tetrahedron (“2-in–2-out") <cit.> [Fig. <ref>(c)].
When a magnetic field is applied along the [111] direction, a metamagnetic transition occurs at approximately 2 T, flipping one spin and resulting in a “3-in-1-out” (or “1-in-3-out”) configuration in the tetrahedra, representing a fully polarized state within the Ising limit [Fig. <ref>(c)].
In this field orientation, each tetrahedron has one magnetic site with the easy axis parallel to B (the “easy-axis (EA) site”) and three sites with axes not parallel to B (the “hard-axis (HA) sites”).
Given the strong spin-orbit coupling intertwining spin and orbital degrees of freedom, the CF-phonon interaction is crucial for the magnetoelastic properties.
However, as the easy-axis directions vary among the four magnetic sites in Ho_2Ti_2O_7, local lattice deformations induced by the CF tend to cancel out, making their effect on the crystal structure less evident.
To clarify this, the observation of the CF striction in high magnetic fields would be useful.
In Ho_2Ti_2O_7, the energy gap between the lowest and the first-excited CF levels is approximately 260 K in zero field <cit.>.
The application of an external magnetic field along [111] can hybridize these CF levels at the HA sites, as shown in the CF level diagram in Fig. <ref>(d), eventually realizing a forced ferromagnetic (“all-up") state [Fig. <ref>(c)].
The CF level crossing is theoretically expected at B_ cf^ cal = 67 T based on the exact diagonalization of the CF Hamiltonian.
Indeed, Erfanifam et al. observed a signature of the CF level hybridization for B ∥ [111] through magnetization and ultrasound measurements up to 60 T <cit.>, and Opherden et al. observed a sequence of metamagnetic transitions accompanied by the CF level crossing for B ∥ [5513] up to 120 T <cit.>.
Here, we investigate the CF striction in Ho_2Ti_2O_7 using magnetostriction measurements under ultrahigh magnetic fields up to 120 T, complemented by magnetization and magnetocaloric-effect (MCE) measurements.
We observe significant anisotropic CF striction, characterized by a lattice contraction amounting to Δ L/L ∼ -5 × 10^-4 along the field direction.
Our mean-field calculations, based on a point-charge model and incorporating CF, two-ion exchange interactions, phonons, and CF-phonon coupling, successfully reproduce the magnetostriction data.
Similar high-field magnetostriction is observed in another spin-ice compound, Pr_2Zr_2O_7, suggesting that the CF striction is a common feature in rare-earth-based spin-ice systems.
§ METHODS
§.§ Experiments
Single crystals of Ho_2Ti_2O_7 and Pr_2Zr_2O_7 were grown from polycrystalline feed rods using a high-temperature xenon-type optical floating zone furnace.
Crystal orientations were checked using a backscattering Laue x-ray diffractometer.
Several pieces of as-grown crystals were cut and polished into rectangular parallelepipeds with dimensions of approximately 1 × 1 × 3 mm^3.
We performed magnetization and magnetostriction measurements in the field orientation B ∥ [111].
Magnetization up to 7 T was measured using a superconducting quantum interference device (MPMS, Quantum Design).
Magnetization up to 62 T and 130 T was measured by the induction method in a nondestructive short-pulsed magnet (4 ms duration) and in a horizontal single-turn-coil (HSTC) system (8 μs duration) <cit.>, respectively.
Longitudinal magnetostriction (Δ L/L ∥ [111]) up to 55 T and 118 T was measured by the fiber-Bragg-grating (FBG) method in a nondestructive long-pulsed magnet (36 ms duration) and in the HSTC system, respectively.
Transverse magnetostriction (Δ L/L ⊥ [111]) up to 55 T was also measured in the nondestructive long-pulsed magnet, where the optical fiber was bent by 90 degrees, as in Refs. <cit.>.
A relative sample-length change Δ L/L was detected by the optical filter method <cit.>.
The optical fiber was glued on the flat surface of the crystal using Stycast1266.
In the nondestructive pulsed magnet, the magnetization and magnetostriction measurements were performed in either liquid-^4He or gas ^4He environment, i.e., nonadiabatic conditions.
In the HSTC system, the sample was cooled to approximately 5 K using a liquid-^4He flow-type cryostat, where the measurement condition should be quasi-adiabatic because of the short pulsed-field duration.
To monitor the temperature change of the sample during the field sweep, we measured the magnetocaloric effect (MCE) up to 55 T under both nonadiabatic and quasi-adiabatic conditions using the nondestructive long-pulsed magnet <cit.>.
A sensitive Au_16Ge_84 film thermometer was sputtered on the surface of the crystal, with temperature calibration performed using a commercial RuO_2 thermometer.
Table <ref> summarizes the measurement conditions for all experiments conducted under pulsed magnetic fields.
The typical magnetic-field profiles of the three types of magnets used in the experiments are shown in Fig. <ref> in Appendix <ref>.
§.§ Theoretical calculations
To capture the CF level scheme in Ho_2Ti_2O_7, we first consider a single-ion Hamiltonian that includes both the CF and Zeeman terms. The CF Hamiltonian can be expressed as follows:
Ĥ_ cf = ∑_lm B_lm O_lm
where B_lm denotes the CF parameters, and O_lm represents Stevens Operator equivalents.
The single-ion Hamiltonian is then written as
Ĥ_ single-ion = Ĥ_ cf-g_Jμ_ BĴ_i· B,
where g_J is the Landé's g factor, μ_ B is the Bohr magneton, Ĵ_i is the magnetic moment at site i, and B is the external magnetic field.
We obtain the field evolution of the CF level energies at the HA site [Fig. <ref>(d)] by exact diagonalization of Eq. (<ref>) <cit.>.
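As a numerical illustration, the sketch below diagonalizes a stripped-down version of this single-ion Hamiltonian for one Ho³⁺ ion (J = 8, g_J = 5/4). Only an axial B_20 O_20 term with an arbitrary magnitude is retained, so the spectrum is not quantitatively that of Ho_2Ti_2O_7 (whose D_3d crystal field involves several B_lm parameters), but it reproduces the qualitative mechanism: the transverse field component at the HA sites mixes the CF levels until they cross.

```python
# Sketch: CF + Zeeman spectrum for one Ho3+ ion with an illustrative axial
# crystal field. kB = 1, so energies are in kelvin; muB = 0.6717 K/T.
import numpy as np

J, gJ, muB = 8, 5 / 4, 0.6717
m = np.arange(J, -J - 1, -1).astype(float)           # Jz eigenvalues, 17 states
Jz = np.diag(m)
Jp = np.diag(np.sqrt(J * (J + 1) - m[1:] * (m[1:] + 1)), k=1)  # raising operator
Jx = (Jp + Jp.T) / 2

B20 = -1.0                                           # illustrative CF scale (K)
O20 = 3 * Jz @ Jz - J * (J + 1) * np.eye(17)         # Stevens operator O_2^0

def levels(B, theta):
    """Eigenvalues of H_cf + H_Zeeman for a field at angle theta to the
    local easy axis; theta = 109.47 deg mimics the HA sites for B || [111]."""
    H = B20 * O20 - gJ * muB * B * (np.cos(theta) * Jz + np.sin(theta) * Jx)
    return np.linalg.eigvalsh(H)

for B in (0, 40, 80):
    E = levels(B, np.deg2rad(109.47))
    print(B, "T: lowest gaps (K):", np.round(E[1:3] - E[0], 1))
```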
For detailed analyses of magnetization and magnetostriction, we additionally take into account magnetic interactions, phonons, and CF-phonon coupling.
The magnetic properties of Ho_2Ti_2O_7 can be accurately described by the classical dipolar spin ice model, incorporating the dominant long-range dipolar interaction J_ dipo and the short-range nearest-neighbor exchange interaction J_ ex.
As the dipolar interaction J_ dipo can be effectively renormalized into the nearest-neighbor exchange interaction <cit.>, we consider a two-spin exchange coupling term described by
Ĥ_ ex =-1/2∑_ij J(R_ij) Ĵ_i·Ĵ_j,
where the sum is taken over all nearest-neighbor pairs, J(R_ij) ≡ J_ dipo + J_ ex is the distance-dependent effective exchange coupling, and R_ij is the vector connecting sites i and j.
Assuming that J is linearly modulated by a slight bond-length change ϵ, we expand J(R_ij) as,
J (R_ij) ≈ J (R_ij^0) + ∑ _α, β, γ(∂ J/∂ R^α) (∂ (ϵ_αγR_ij^γ)/∂ϵ_β) ϵ_β,
where R_ij^0 is the distance between the original positions of sites i and j, and the indices α, β, γ = x, y, z refer to an Euclidean coordinate system parallel to the crystal axes, i.e., x ∥ a, y ∥ b and z ∥ c.
For the lattice contribution, we consider a phonon term, Ĥ_ ph, and a CF-phonon term, Ĥ_ cfph, which represents the coupling between the CF and phonons.
The detailed formulas for these two terms are provided in Appendix <ref>.
Combining all these terms, we construct the total Hamiltonian as
Ĥ_ total = ∑ _iĤ_ single-ion + Ĥ_ ex + Ĥ_ ph + Ĥ_ cfph.
We performed numerical simulations on Eq. (<ref>) using the program McPhase <cit.>, which specializes in calculating the physical properties of 4f-based magnets within the framework of mean-field theory <cit.>.
McPhase employs a point-charge model to determine the CF wavefunctions and the CF energy gap <cit.>.
To match the experimental energy gap of approximately 260 K <cit.>, we apply a scaling factor of 0.7 to the nominal point charges Ho (3+), Ti (4+), and O (2-). To set the initial state in zero field, we utilized the crystallographic parameters of Ho_2Ti_2O_7 reported in Ref. <cit.>.
We set the dipolar interaction to J_ dipo = 2.4 K <cit.> and the nearest-neighbor exchange interaction to J_ ex = 0.1 K, yielding a total effective exchange coupling of J = 2.5 K.
This set of parameters successfully reproduces both the metamagnetic transition in the low-field regime and the CF level crossing at B_ cf∼ 65 T, as experimentally observed.
We note that even a small change of 0.1 K in J_ ex results in a 6 T shift in B_ cf^ cal.
In Eq. (<ref>), we assume dJ/dR ≃ dJ_ ex/dR, which is determined by the Bethe-Slater curve, as shown in Fig. <ref> in Appendix <ref>.
To calculate the magnetization, we solve the mean-field equations self-consistently to obtain the magnetization at each sublattice i, which is given by
M_i = ∑_n g_J μ_ B⟨ n |Ĵ_i| n ⟩/Z exp(-E_n/k_ BT),
where Z = ∑_nexp(-E_n/k_ BT) is the partition function for sublattice i, and | n⟩ is the eigenstate corresponding to the eigenvalue E_n of Ĥ_ single-ion+Ĥ_ ex.
The total magnetization M is obtained by averaging the magnetization of all sublattices, in the unit of μ_ B/Ho.
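For illustration, the Boltzmann average in this expression can be evaluated directly from an eigen-decomposition, reusing the operator matrices and constants from the single-ion sketch above (k_B = 1, energies in kelvin):

```python
def thermal_moment(H, Jop, T):
    """g_J <J> (in mu_B) for one sublattice Hamiltonian H at temperature T."""
    E, V = np.linalg.eigh(H)                     # columns of V are the |n>
    w = np.exp(-(E - E[0]) / T)                  # shifted Boltzmann factors
    diag = np.einsum("ai,ab,bi->i", V.conj(), Jop, V).real   # <n|J|n>
    return gJ * (w @ diag) / w.sum()
```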
For the magnetostriction, we calculate strains self-consistently based on Eq. (<ref>) and the following equation:
∑_β=1,...,6c^αβϵ_β =∑_i,δ=1,2,3G^αδ_ mix(i)⟨ u^δ_i ⟩
+ ∑_i,lmG_ cfph^α lm(i)⟨ O_lm(Ĵ_i)⟩
+ 1/2∑_i,i',δ,δ',α',γ(∂ J_δδ' ( R_ii')/∂ R^α') (∂ (ϵ_α'γ R^γ_ii')/∂ϵ_α) ⟨Ĵ_iδĴ_i'δ'⟩,
where i and i' denote atomic site positions, while δ, δ', and α', γ run over the spatial coordinates x, y, and z, represented as the integers 1, 2, and 3, respectively.
c^αβ denotes the elastic constant, where α and β are indices in Voigt notaion, ranging from 1 to 6.
G_ mix^αδ represents the phonon-strain coupling, and G_ cfph^α lm represents the CF-phonon coupling.
For details of these parameters, see Appendix <ref>.
In Eq. (<ref>), the first and second terms on the right side contribute to the CF striction, and the third term is the exchange striction.
The total lattice strain, i.e., magnetostriction, can be obtained by
Δ L/L = ∑_ijϵ_ijl_il_j,
where l denotes the unit vector in the direction of measurement.
Strain ϵ_ij is a 3 × 3 symmetric tensor, hence has 6 independent matrix elements: ϵ_xx, ϵ_yy, ϵ_zz, ϵ_xy, ϵ_xz, ϵ_yz.
We obtain the longitudinal magnetostriction (Δ L/L ∥ [111]) by calculating Δ L/L=(ϵ_xx+ϵ_yy+ϵ_zz+ 2(ϵ_xy+ϵ_xz+ϵ_yz))/3. In this study, we use Voigt notation, where ϵ_ij is relabeled as ϵ_1=ϵ_xx, ϵ_2=ϵ_yy, ϵ_3=ϵ_zz, ϵ_4=2ϵ_yz=2ϵ_zy, ϵ_5=2ϵ_xz=2ϵ_zx, and ϵ_6=2ϵ_xy=2ϵ_yx.
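As a quick check of this projection, the sketch below evaluates Δ L/L = ∑_ij ε_ij l_i l_j for an arbitrary strain tensor and the [111] direction, reproducing the combination quoted above; the numerical strain values are placeholders.

```python
# Sketch of the strain projection: Delta L / L = sum_ij eps_ij l_i l_j.
import numpy as np

def delta_L_over_L(eps, direction):
    l = np.asarray(direction, float)
    l /= np.linalg.norm(l)
    return l @ eps @ l

# Placeholder Voigt components e1..e6, assembled into the symmetric tensor.
e1, e2, e3, e4, e5, e6 = 1e-4, 1e-4, 1e-4, -2e-4, -2e-4, -2e-4
eps = np.array([[e1,     e6 / 2, e5 / 2],
                [e6 / 2, e2,     e4 / 2],
                [e5 / 2, e4 / 2, e3]])

# Equals (e1 + e2 + e3 + e4 + e5 + e6) / 3 for l || [111], as in the text.
print(delta_L_over_L(eps, [1, 1, 1]))
```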
§ RESULTS AND DISCUSSION
§.§ High-field magnetization and magnetostriction
Figure <ref>(a) shows the magnetization curve of Ho_2Ti_2O_7 for B ∥ [111] measured up to 130 T at an initial temperature of T_ ini = 5 K using the HSTC system. The M–B curve measured up to 62 T at T_ ini = 4.2 K using the nondestructive pulsed magnet is also displayed, which reproduces Ref. <cit.>. Both M–B curves exhibit a plateau-like feature between 20 and 60 T, followed by a metamagnetic increase, where the field derivative of magnetization dM/dB exhibits a broad peak at B_ cf∼ 65 T for the field-down sweep [see the inset of Fig. <ref>(a)].
This metamagnetic behavior suggests the CF level crossing at the HA sites, consistent with the CF scheme shown in Fig. <ref>(d). Above B_ cf, M gradually approaches the expected full moment, 10 μ_ B/Ho, suggesting the realization of the “all-up” state.
We note that the sensitivity of magnetization detection decreases as dB/dt approaches zero, which may introduce errors in the absolute value of M near the maximum field.
Figure <ref>(b) shows the longitudinal magnetostriction curve for B ∥ [111] measured up to 120 T at T_ ini = 5 K using the HSTC system, along with the curve measured up to 55 T at T_ ini = 4.2 K using the nondestructive pulsed magnet. Initially, Δ L/L increases and reaches a maximum around 30 T. Subsequently, Δ L/L reverses and dramatically decreases from 30 to 80 T, where the field derivative of the magnetostriction d(Δ L/L)/dB exhibits a peak at B_ cf∼ 65 T for the field-down sweep [see the inset of Fig. <ref>(b)].
Finally, Δ L/L is nearly saturated at ∼80 T.
The observed change in sample length across B_ cf, amounting to Δ L/L ∼ 5 × 10^-4, highlights the strong CF effect on the crystal structure of .
§.§ Magnetocaloric effect (MCE)
Comparing the three M–B curves shown in Fig. <ref>(a), it is evident that the initial rise in magnetization at low magnetic fields becomes more gradual as the field sweep becomes faster.
This trend could be attributed to two factors: sample heating and slow spin relaxation.
We will discuss the spin dynamics in Sec. <ref>.
To verify the sample heating, we performed MCE measurements for B ∥ [111] up to 55 T in the nondestructive pulsed magnet.
The field dependence of the sample temperature T(B), measured at various initial temperatures (T_ ini's) under quasi-adiabatic conditions, are shown by the blue curves in Fig. <ref>.
The T(B) curves for field-up and field-down sweeps overlap for all the measured T_ ini's.
The absence of hysteresis ensures that nearly adiabatic conditions were achieved.
These T(B) curves can hence be regarded as isentropic, meaning that the total entropy, comprising both lattice entropy and magnetic entropy, remains constant.
For T_ ini = 5 K, T(B) approaches 30 K at 30 T, indicating a significant reduction in magnetic entropy <cit.>.
This occurs because the highly degenerate “2-in–2-out" states are destroyed by a magnetic field as weak as 2 T, stabilizing the nondegenerate “3(1)-in–1(3)-out" state.
At higher initial temperatures, the increase in T(B) is less pronounced due to greater thermal fluctuations and larger lattice heat capacity.
After T(B) reaches its maximum around 35 T, the sample gradually cools as the magnetic field increases up to 55 T.
This cooling behavior indicates an increase in magnetic entropy, suggesting the onset of CF level hybridization.
We also measured the sample temperature changes at T_ ini = 4.2 K in a liquid-^4He environment, i.e., under nonadiabatic conditions, as shown by the red curve in Fig. <ref>.
Surprisingly, T(B) reaches 28 K during the field-up sweep, which is only slightly lower than in the quasi-adiabatic case.
These MCE results indicate that sample heating effects are unavoidable when measuring under pulsed high magnetic fields.
We infer that, in the magnetization and magnetostriction data presented in Fig. <ref>, the sample temperature is likely between 20 and 30 K at fields above 10 T.
§.§ Spin dynamics
Our high-field data also provide valuable insights into the dynamical properties of the magnetic state.
For the experimental data obtained in the millisecond-duration pulsed magnetic fields [Figs. <ref>(a) and <ref>(b)], both the M–B curve up to 62 T and the Δ L/L–B curve up to 55 T exhibit small hysteresis (Fig. <ref>).
This hysteresis is likely due to the lower sample temperature during the field-down sweep compared to the field-up sweep, as suggested by our nonadiabatic MCE data (Fig. <ref>).
We note that the hysteresis loop in the Δ L/L–B curve cannot be attributed to slow spin dynamics because the sign of the hysteresis is opposite to the delayed response.
This suggests that the timescale of the spin dynamics is faster than the millisecond range.
In contrast, the M–B curve up to 130 T, obtained in the microsecond-duration pulsed magnetic fields [Fig. <ref>(c)], exhibits significant hysteresis [Fig. <ref>(a)].
Given that no hysteresis is observed in the quasi-adiabatic MCE (Fig. <ref>), the observed hysteresis can be attributed to slow spin relaxation, which originates from nonequilibrium processes of monopole formation or annihilation <cit.>.
Notably, the hysteresis observed in the Δ L/L–B curve up to 120 T is much less pronounced compared to that in the M–B curve (Fig. <ref>).
This suggests that the relaxation time associated with CF striction, which is dominant in the high-field magnetostriction, is likely faster than the microsecond range.
This timescale is consistent with previous studies based on AC susceptibility, neutron spin echo, and μSR experiments <cit.>.
§.§ Temperature dependence and anisotropic magnetostriction
We also investigate the anisotropy and temperature dependence of the magnetostriction.
Figures <ref>(a) and <ref>(b) show the longitudinal and transverse magnetostriction of Ho_2Ti_2O_7, respectively, for B ∥ [111] measured at various T_ ini's in the nondestructive pulsed magnet.
As T_ ini increases, the position of the hump in the longitudinal magnetostriction curve shifts to a lower field, and eventually, the hump disappears at T_ ini = 60 K.
In contrast to the longitudinal magnetostriction, the transverse magnetostriction at T_ ini = 4.2 K exhibits a broad dip around 30 T.
As the temperature increases to T_ ini = 40 K, this dip structure disappears, and Δ L/L begins to increase steadily with the magnetic field.
The contrasting behavior of the longitudinal and transverse magnetostriction at high magnetic fields is characteristic of the CF striction.
§.§ Numerical calculations
To theoretically understand the behavior of M and Δ L/L in high magnetic fields, we performed numerical calculations based on the mean-field approach for the Hamiltonian in Eq. (<ref>).
Figures <ref>(a) and <ref>(b) show the calculated M–B and Δ L/L–B curves, respectively, for B ∥ [111] at various temperatures.
By adopting the “2-in–2-out” spin-ice structure as the initial state in zero field, the metamagnetic transition to the “3(1)-in–1(3)-out” state is reproduced at 5 K, as shown in the inset of Fig. <ref>(a).
At low temperatures below 20 K, a plateau-like feature appears between 10 and 50 T, followed by another metamagnetic transition at approximately 65 T induced by the CF level crossing.
These trends are qualitatively consistent with the experimentally observed M–B curve [Fig. <ref>(a)].
The less pronounced plateau-like feature and broader metamagnetic behavior in the experimental curve can be attributed to the MCE and slow spin relaxation, as mentioned in Secs. <ref> and <ref>.
Having confirmed that our simulations accurately describe the magnetization of Ho_2Ti_2O_7 across the entire field range, we now turn to the longitudinal magnetostriction.
According to Eq. (<ref>), the total magnetostriction is composed of the CF striction (Δ L_ cf/L) and exchange striction (Δ L_ ex/L).
The field dependence of Δ L_ cf/L and Δ L_ ex/L at 5 K is plotted in the inset of Fig. <ref>(b).
The combination of a positive change in Δ L_ ex/L in the low-field regime and a negative change in Δ L_ cf/L in the high-field regime results in a hump structure in the total magnetostriction curve.
Notably, the rounded shape of the experimentally observed hump in the Δ L/L–B curve at T_ ini = 5 K [Fig. <ref>(b)] is presumably due to sample heating, which is consistent with the calculated Δ L/L–B curve at 20 K.
Our calculations predict that the hump in Δ L/L shifts to a lower magnetic field with increasing temperature and eventually disappears above 60 K.
This behavior also agrees well with the experimental observations, as shown in Fig. <ref>(a).
§.§ Prevalence of CF striction in spin-ice compounds
We have revealed the CF striction in Ho_2Ti_2O_7 through both experimental and theoretical approaches.
Applying a magnetic field along the [111] direction leads to the hybridization of the lowest and first-excited CF levels around B_ cf, which is associated with negative longitudinal magnetostriction and positive transverse magnetostriction.
To explore whether the CF striction is a common feature of spin-ice systems, we investigated high-field magnetostriction in another spin-ice compound, Pr_2Zr_2O_7.
The introduction of rare-earth ions with smaller magnetic moments, such as Pr (∼3 μ_ B), can modify the spin-ice state, making it less Ising-like and more susceptible to quantum effects.
In particular, Pr_2(Zr, Sn, Hf, Ir)_2O_7 are proposed as quantum spin ices <cit.>.
The lowest CF level of Pr_2Zr_2O_7 consists of 90–96% of |J_z=± 4 ⟩ <cit.> (smaller compared to 98% of |J_z=± 8 ⟩ in Ho_2Ti_2O_7), indicating the presence of substantial transverse fluctuations.
Figures <ref>(a) and <ref>(b) display the M–B and Δ L/L–B curves for Pr_2Zr_2O_7 at T_ ini = 5 K (or 4.2 K), respectively.
No hysteresis is observed, consistent with the reported fast spin dynamics on the picosecond timescale <cit.>.
Unlike Ho_2Ti_2O_7, the magnetization shows a monotonic increase without any plateau-like feature, suggesting that Pr_2Zr_2O_7 does not exhibit a well-defined Ising-limit state and that the low-energy CF levels gradually hybridize as the magnetic field increases.
Notably, the longitudinal magnetostriction in Pr_2Zr_2O_7 decreases monotonically, while the transverse magnetostriction increases.
This anisotropic lattice deformation is similar to that observed in Ho_2Ti_2O_7 (Fig. <ref>), suggesting that the similar CF environments in Ho_2Ti_2O_7 and Pr_2Zr_2O_7 lead to qualitatively identical CF striction effects.
We note that the sign of the magnetostriction in the low-field regime for Pr_2Zr_2O_7 is opposite to that for Ho_2Ti_2O_7.
This difference is likely due to the opposite signs of dJ/dR for Ho_2Ti_2O_7 (dJ/dR > 0) and Pr_2Zr_2O_7 (dJ/dR < 0).
§ CONCLUSION
We have thoroughly investigated the static and dynamic magnetoelastic properties in the spin-ice compound Ho_2Ti_2O_7 through magnetostriction measurements under ultrahigh magnetic fields.
Our observations reveal a large anisotropic magnetostriction occurring at the CF level crossing around B_ cf∼ 65 T, where the longitudinal magnetostriction amounts to Δ L/L ∼ -5 × 10^-4. This is an order of magnitude larger than the exchange striction observed in the low-field regime.
Complementary MCE measurements reveal a dramatic sample-heating effect exceeding 20 K in pulsed high-field experiments, and also highlight the onset of CF level hybridization above 30 T, which would correlate with the turnaround behavior of the magnetostriction curve.
By varying the magnetic field sweep rates, we distinguish between the fast dynamics of the CF-phonon coupling and the slow dynamics of spin-spin correlations.
To model the CF striction, we developed a comprehensive Hamiltonian based on a point-charge model that includes CF-phonon coupling.
Our numerical calculations based on the mean-field approach successfully account for the experimental magnetostriction data across the entire field range.
This study paves the way for understanding the intricate role of the CF-phonon coupling in the crystal structure of a broad range of 4f-based magnets.
§ ACKNOWLEDGMENTS
This work was partly supported by the JSPS KAKENHI Grants-In-Aid for Scientific Research (No. 20J10988 and No. 24H01633), JST-ASPIRE Program (JPMJAP2317), and JST-MIRAI Program (JPMJMI20A1). N.T. was supported by the Alexander von Humboldt Foundation. M.G. was supported by the JSPS through a Grant-in-Aid for JSPS Fellows. We appreciate the stimulating discussions with R. Moessner, R. Klingeler, M. Gingras, T. Fennell, and M. Udagawa.
§ WAVEFORM OF PULSED HIGH MAGNETIC FIELDS
Figure <ref> shows the typical magnetic-field profiles generated by three types of pulsed magnets used in this study.
§ PHONON AND CF-PHONON COUPLING TERMS
In this section, we provide the formulas for the phonon term Ĥ_ ph and the CF-phonon coupling term Ĥ_ cfph in the Hamiltonian of Eq. (<ref>).
Ĥ_ ph is written as follows:
Ĥ_ ph = ∑_i p_i^2/2m_i
+1/2∑_ijk_ij/2|R_ij|^2( U_j· R_ij− U_i· R_ij)^2.
Here, p_i and m_i denote the momentum and mass, respectively, of the atom at position i.
The displacement vector of the atom at position i is represented by U_i.
The difference between the undisplaced lattice positions is given by R_ij≡ R_j- R_i, where R_j (R_i) represents the position of atom at site j (i).
k_ij is the longitudinal atomic spring constants, which is determined based on the Born-von Karman model, assuming simple distance-dependent elastic constant (see Appendix <ref> for details).
Note that the sum over indices i and j counts each spring twice, thus a factor of 1/2 is included before the sum.
The elastic constants are then used to define the phonon-strain coupling constant G_ mix as follows:
G_ mix^αβγ (i)= -a_0∑_j k_ij/| R_ij|^2 R_ij^α R_ij^β R_ij^γ,
where a_0 is the Bohr radius, and the indices α, β, γ = x, y, z refer to an Euclidean coordinate system parallel to the crystal axes, i.e., x ∥ a, y ∥ b and z ∥ c.
The CF-phonon coupling is expressed by
Ĥ_ cfph = ∑_lm∑_ij(i<j)∇_ U_iB_lm(j)· U_i O_lm(Ĵ_j),
where the index i denotes all atomic positions, B_lm denotes the CF parameters, and O_lm represents Stevens Operator equivalents.
The CF-phonon coupling constant G_ cfph can be derived from Eq. (<ref>) by taking the displacement derivative of the CF parameters as
G_ cfph^αβ lm (j)= -1/2∑_i (R_i^β∂ B_lm(j)/∂ U_i^α + R_i^α∂ B_lm(j)/∂ U_i^β).
Importantly, G _cfph affects the magnitude of Δ L/L.
We scaled G _cfph obtained from McPhase by a factor of 0.16 to match the calculated Δ L/L with the experimental one.
Although the elastic constants c^αβ also affect the magnitude of Δ L/L, their values were fixed based on prior experimental data, as detailed in Appendix <ref>.
In order to obtain the positional derivative of the exchange interaction J_ ex, the Bethe-Slater curve is employed, as shown in Fig. <ref>:
J_ ex(R)=A[-(R/D)^2+(R/D)^4]exp[-(R/D)^2],
where A and D are set to 0.4 and 3.49, respectively, resulting in J_ ex=0.1 K and dJ_ ex/dr=1.03 K/Å at the nearest-neighbor distance R_ nn=3.54 Å <cit.>.
It is important to constrain dJ_ ex/dr to a positive value for Ho_2Ti_2O_7 in order to explain the positive longitudinal magnetostriction in the low-field regime.
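The sketch below evaluates this Bethe–Slater form and its slope symbolically with the quoted parameters. The absolute scale of J_ex depends on the normalization conventions used inside McPhase, so only the sign of dJ_ex/dR at R_nn, which is positive, should be read off here.

```python
# Sketch: Bethe-Slater curve and its derivative at R_nn = 3.54 A.
import sympy as sp

R = sp.symbols("R", positive=True)
A, D, Rnn = 0.4, 3.49, 3.54
Jex = A * (-(R / D) ** 2 + (R / D) ** 4) * sp.exp(-(R / D) ** 2)
dJdR = sp.diff(Jex, R)

print(sp.N(Jex.subs(R, Rnn)), sp.N(dJdR.subs(R, Rnn)))  # both positive
```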
§ ELASTIC CONSTANT
McPhase computes the spring constants k_ij in units of N/m based on the Born-von Karman model:
k =25 ×exp[-0.1 (r/a)^2],
where r represents the distance and a denotes the nearest-neighbor bond length.
Based on these spring constants, the elastic constant tensor
c^αβ can be derived.
The resulting tensor closely matches those obtained from sound velocity experiments <cit.> and successfully reproduces both the slope of low-energy acoustic phonons as a function of wave vectors and the highest-energy optical phonon mode, which reaches 105 meV in DFT calculations <cit.>.
Equation (<ref>) produces a table of longitudinal spring constants, which are then converted to an elastic tensor with three independent elements in the units of eV/primitive unit cell: c_11=585.656, c_12=276.629, and c_44=276.629, reflecting the highly symmetric cubic crystal structure.
These values are in good agreement with those obtained from ultrasound measurements <cit.>, after performing a unit conversion of 1 meV/primitive unit cell volume = 0.000621 GPa, where the primitive unit cell volume is 257.835 Å^3 <cit.>.
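As a worked check of this conversion (using the primitive-cell volume of 257.835 Å³ quoted above):

```python
# Convert elastic constants from eV per primitive unit cell to GPa.
eV = 1.602176634e-19          # J
V = 257.835e-30               # primitive-cell volume in m^3
to_GPa = eV / V / 1e9         # ~0.621 GPa per eV/cell, i.e. 0.000621 GPa/meV
for name, c in {"c11": 585.656, "c12": 276.629, "c44": 276.629}.items():
    print(name, round(c * to_GPa, 1), "GPa")   # c11 ~ 364 GPa
```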
99
Hathaway_1993 K. B. Hathaway and A. E. Clark, Magnetostrictive Materials, MRS Bulletin 18, 34 (1993).
Yu_2024 Q. Yu, J. Wang, C. Liang, J. Meng, J. Xu, Y. Liu, S. Zhao, X. Xi, C. Xi, M. Yang, C. Si, Y. He, D. Wang, and C. Jiang, A Giant Magneto-Superelasticity of 5% Enabled by Introducing Ordered Dislocations in Ni_34Co_8Cu_8Mn_36Ga_14 Single Crystal, Adv. Sci. 11, 2401234 (2024).
Zhang_2012 H. Zhang, T. Zhang, and C. Jiang, Magnetostrictive actuators with large displacement and fast respons, Smart Mater. Struct. 21 055014 (2012).
Valerio_2019 V. Apicella, C. S. Clemente, D. Davino, D. Leone, and C. Visone, Review of Modeling and Control of Magnetostrictive Actuators, Actuators 8 (2019).
APatri_2020 A. S. Patri, M. Hosoi, S. Lee, and Y. Baek Kim, Theory of magnetostriction for multipolar quantum spin ice in pyrochlore materials, Phys. Rev. Res. 2, 033015 (2020).
NTang_2023 N. Tang, Y. Gritsenko, K. Kimura, S. Bhattacharjee, A. Sakai, M. Fu, H. Takeda, H. Man, K. Sugawara, Y. Matsumoto, Y. Shimura, J. Wen, C. Broholm, H. Sawa, M. Takigawa, T. Sakakibara, S. Zherlitsyn, J. Wosnitza, R. Moessner, and S. Nakatsuji, Spin-orbital liquid state and liquid-gas metamagnetic transition on a pyrochlore lattice, Nat. Phys. 19, 92 (2023).
MJaime_2012 M. Jaime, R. Daou, S. A. Crooker, F. Wicker, A. Uchida, A. E. Feiguin, C. D. Batista, H. A. Dabkowska, and B. D. Gaulin, Magnetostriction and magnetic texture to 100.75 Tesla in frustrated SrCu_2(BO_3)_2, Proc. Natl. Acad. Sci. U.S.A. 109, 12404 (2023).
Nomura_2023 T. Nomura, P. Corboz, A. Miyata, S. Zherlitsyn, Y. Ishii, Y. Kohama, Y. H. Matsuda, A. Ikeda, C. Zhong, H. Kageyama, and F. Mila, Unveiling new quantum phases in the Shastry-Sutherland compound SrCu_2(BO_3)_2 up to the saturation magnetic field, Nat. Commun. 14, 3769 (2023).
YMatsuda_2013 Y. H. Matsuda, N. Abe, S. Takeyama, H. Kageyama, P. Corboz, A. Honecker, S. R. Manmana, G. R. Foltin, K. P. Schmidt, and F. Mila, Magnetization of SrCu_2(BO_3)_2 in Ultrahigh Magnetic Fields up to 118 T, Phys. Rev. Lett. 111, 137204 (2013).
MDoerr_2005 M. Doerr, M. Rotter, and A. Lindbaum, Magnetostriction in rare-earth based antiferromagnets, Advances in Physics 54, 1 (2005).
AIkeda_2020 A. Ikeda, Y. H. Matsuda, and K. Sato, Two Spin-State Crystallizations in LaCoO_3, Phys. Rev. Lett. 125, 177202 (2020).
KYoshimura_1988 K. Yoshimura, T. Nitta, M. Mekata, T. Shimizu, T. Sakakibara, T. Goto, and G. Kido, Anomalous high-field magnetization and negative forced volume magnetostriction in Yb_1-xM_xCu_2 (M=In and Ag)–evidence for valence change in high magnetic fields, Phys. Rev. Lett. 60, 851 (1988).
Mus_2004 N. V. Mushnikov and T. Goto, High-field magnetostriction of the valence-fluctuating compound YbInCu_4, Phys. Rev. B 70, 054411 (2004).
AMiyake_2022 A. Miyake, M. Gen, A. Ikeda, K. Miyake, Y. Shimizu, Y. J. Sato, D. Li, A. Nakamura, Y. Homma, F. Honda, J. Flouquet, M. Tokunaga, and D. Aoki, Magnetovolume Effect on the First-Order Metamagnetic Transition in UTe_2, J. Phys. Soc. Jpn. 91, 063703 (2022).
Zapf_2008V. S. Zapf, V. F. Correa, P. Sengupta, C. D. Batista, M. Tsukamoto, N. Kawashima, P. Egan, C. Pantea, A. Migliori, J. B. Betts, M. Jaime, and A. Paduan-Filho, Direct measurement of spin correlations using magnetostriction, Phys. Rev. B 77, 020404(R) (2008).
AIkeda_2019 A. Ikeda, S. Furukawa, O. Janson, Y. H. Matsuda, S. Takeyama, T. Yajima, Z. Hiroi, and H. Ishikawa, Magnetoelastic couplings in the deformed kagome quantum spin lattice of volborthite, Phys. Rev. B 99, 140412(R) (2019).
AMiyata_2021 A. Miyata, T. Hikihara, S. Furukawa, R. K. Kremer, S. Zherlitsyn, and J. Wosnitza, Magnetoelastic study on the frustrated quasi-one-dimensional spin-1/2 magnet LiCuVO_4, Phys. Rev. B 103, 014411 (2021).
Patri_2019 A. S. Patri, A. Sakai, S. Lee, A. Paramekanti, S. Nakatsuji, and Y. B. Kim, Unveiling hidden multipolar orders with magnetostriction, Nat. Commun. 10, 4092 (2019).
Kimura_2014 S. Kimura, M. Hagiwara, T. Takeuchi, H. Yamaguchi, H. Ueda, and K Kindo, Exchange Interactions of the Chromium Spinel Oxide HgCr_2O_4 in High Magnetic Fields Examined by the Magnetoelastic Theory, J. Phys. Soc. Jpn. 83, 113709 (2014).
AMiyata_2020 A. Miyata, H. Suwa, T. Nomura, L. Prodan, V. Felea, Y. Skourski, J. Deisenhofer, H.-A. Krug von Nidda, O. Portugall, S. Zherlitsyn, V. Tsurkan, J. Wosnitza, and A. Loidl, Spin-lattice coupling in a ferrimagnetic spinel: Exotic H–T phase diagram of MnCr_2S_4 up to 110 T, Phys. Rev. B 101, 054432 (2020).
Harris_1997 M. J. Harris, S. T. Bramwell, D. F. McMorrow, T. Zeiske, and K. W. Godfrey, Geometrical Frustration in the Ferromagnetic Pyrochlore Ho_2Ti_2O_7, Phys. Rev. Lett. 79, 2554 (1997).
ROsenkranz_2000 S. Rosenkranz, A. P. Ramirez, A. Hayashi, R. J. Cava, R. Siddharthan, and B. S. Shastry, Crystal-field interaction in the pyrochlore magnet Ho_2Ti_2O_7, J. Appl. Phys. 87, 5914 (2000).
STBramwell_2001 S. T. Bramwell and M. J. P. Gingras, Spin Ice State in Frustrated Magnetic Pyrochlore Materials, Science 294, 1495 (2001).
Petrenko_2003
O. A. Petrenko, M. R. Lees, and G. Balakrishnan, Magnetization process in the spin-ice compound Ho_2Ti_2O_7, Phys. Rev. B 68, 012406 (2003).
Fennell_2009 T. Fennell, P. P. Deen, A. R. Wildes, K. Schmalzl, D. Prabhakaran, A. T. Boothroyd, R. J. Aldus, D. F. McMorrow, and S. T. Bramwell, Magnetic Coulomb Phase in the Spin Ice Ho_2Ti_2O_7, Science 326, 415 (2009).
YNakanishi_2011 Y. Nakanishi, T. Kumagai, M. Yoshizawa, K. Matsuhira, S. Takagi, and Z. Hiroi, Elastic properties of the rare-earth dititanates R_2Ti_2O_7 (R = Tb, Dy, and Ho), Phys. Rev. B 83, 184434 (2011).
CCastelnovo_2012 C. Castelnovo, R. Moessner, and S.L. Sondhi, Spin Ice, Fractionalization, and Topological Order, Annu. Rev. Condens. Matter Phys. 3, 35 (2012).
SErfanifam_2014 S. Erfanifam, S. Zherlitsyn, S. Yasin, Y. Skourski, J. Wosnitza, A. A. Zvyagin, P. McClarty, R. Moessner, G. Balakrishnan, and O. A. Petrenko, Ultrasonic investigations of the spin ices Dy_2Ti_2O_7 and Ho_2Ti_2O_7 in and out of equilibrium, Phys. Rev. B 90, 064409 (2014).
MRuminyDFT_2016 M. Ruminy, M. N. Valdez, B. Wehinger, A. Bosak, D. T. Adroja, U. Stuhr, K. Iida, K. Kamazawa, E. Pomjakushina, D. Prabakharan, M. K. Haas, L. Bovo, D. Sheptyakov, A. Cervellino, R. J. Cava, M. Kenzelmann, N. A. Spaldin, and T. Fennell, First-principles calculation and experimental investigation of lattice dynamics in the rare-earth pyrochlores R_2Ti_2O_7 (R = Tb, Dy, Ho), Phys. Rev. B93, 214308 (2016).
MRuminy_2016_CF M. Ruminy, E. Pomjakushina, K. Iida, K. Kamazawa, D. T. Adroja, U. Stuhr, and T. Fennell, Crystal-field parameters of the rare-earth pyrochlores R_2Ti_2O_7 (R = Tb, Dy, and Ho), Phys. Rev. B 94, 024430 (2016).
LOpherden_2019 L. Opherden, T. Herrmannsdörfer, M. Uhlarz, D. I. Gorbunov, A. Miyata, O. Portugall, I. Ishii, T. Suzuki, H. Kaneko, H. Suzuki, and J. Wosnitza, Magnetization beyond the Ising limit of Ho_2Ti_2O_7, Phys. Rev. B 99, 085132 (2019).
Wang_2021 Y. Wang, T. Reeder, Y. Karaki, J. Kindervater, T. Halloran, N. Maliszewskyj, Y. Qiu, J. A. Rodriguez, S. Gladchenko, S. M. Koohpayeh, S. Nakatsuji, and C. Broholm, Monopolar and dipolar relaxation in spin ice Ho_2Ti_2O_7, Sci. Adv. 7, eabg0908 (2021).
MGen_2020 M. Gen, Y. Okamoto, M. Mori, K. Takenaka, and Y. Kohama, Magnetization process of the breathing pyrochlore magnet CuInCr_4S_8 in ultrahigh magnetic fields up to 150 T, Phys. Rev. B 101, 054434 (2020).
MGen_2022 M. Gen, A. Miyake, H. Yagiuchi, Y. Watanabe, A. Ikeda, Y. H. Matsuda, M. Tokunaga, T. Arima, and Y. Tokunaga, Enhancement of giant magnetoelectric effect in Ni-doped CaBaCo_4O_7, Phys. Rev. B 105, 214412 (2022).
AIkeda_2017 A. Ikeda, T. Nomura, Y. H. Matsuda, S. Tani, Y. Kobayashi, H. Watanabe, and K. Sato, High-speed 100 MHz strain monitor using fiber Bragg grating and optical filter for magnetostriction measurements under ultrahigh magnetic fields, Rev. Sci. Instrum. 88, 083906 (2017).
2013_Kih T. Kihara, Y. Kohama, Y. Hashimoto. S. Katsumoto, and M. Tokunaga, Adiabatic measurements of magneto-caloric effects in pulsed magnetic fields up to 55 T, Rev. Sci. Instrum. 84, 074901 (2013).
notes For the exact diagonalization of Eq. (<ref>), we employed local coordinates using a Mathematica code. The corresponding CF parameters were taken from Ref. <cit.>. For the numerical simulations on Eq. (<ref>), employs global coordinates to obtain the CF level scheme, where the application of a rotation matrix is necessary.
MRotter_2002 M. Rotter, M. Doerr, M. Loewenhaupt, and P. Svoboda, Modeling magnetostriction in compounds using McPhase, J. Appl. Phys. 91, 8885 (2002).
MRotter_2004 M. Rotter, Using McPhase to calculate magnetic phase diagrams of rare earth compounds, J. Magn. Magn. Mater. 272-276, e481 (2004).
MRotter_2012 M. Rotter, M. D. Le, A. T. Boothroyd, and J. A. Blanco, Dynamical matrix diagonalization for the calculation of dispersive excitations, J. Phys.: Condens. Matter 24, 213201 (2012).
MRotter_2007 M. Rotter, A. Barcza, M. Doerr, M. D. Le, J. Brooks, E. Jobiliong, and J. Perenboom, Spin-flop transition in samarium metal investigated by capacitance dilatometry in a steady magnetic field of 45 T, Phys. Rev. B 76, 144421 (2007).
JJensen_2007 J. Jensen, Static and dynamic Jahn-Teller effects and antiferromagnetic order in PrO_2: A mean-field analysis, Phys. Rev. B 76, 144428 (2007).
TStoter_2020 T. Stöter, M. Doerr, S. Granovsky, M. Rotter, S. T. B. Goennenwein, S. Zherlitsyn, O. A. Petrenko, G. Balakrishnan, H. D. Zhou, and J. Wosnitza, Extremely slow nonequilibrium monopole dynamics in classical spin ice, Phys. Rev. B 101, 224416 (2020).
Baroudi_2015 K. Baroudi, B. D. Gaulin, S. H. Lapidus, J. Gaudet, and R. J. Cava, Symmetry and light stuffing of Ho_2Ti_2O_7, Er_2Ti_2O_7, and Yb_2Ti_2O_7 characterized by synchrotron x-ray diffraction, Phys. Rev. B 92, 024110 (2015).
Kittaka_2018 S. Kittaka, S. Nakamura, H. Kadowaki, H. Takatsu, and T. Sakakibara, Field-rotational Magnetocaloric Effect: A New Experimental Technique for Accurate Measurement of the Anisotropic Magnetic Entropy, J. Phys. Soc. Jon. 87, 073601 (2018).
GEhlers_2003 G. Ehlers, A. L. Cornelius, M. Orendác, M. Kajnaková, T. Fennell, S. T. Bramwell, and J. S. Gardner, Dynamical crossover in `hot' spin ice, J. Phys.: Condens. Matter 15, L9 (2003).
BTomasello_2019 B. Tomasello, C. Castelnovo, R. Moessner, and J. Quintanilla, Correlated Quantum Tunneling of Monopoles in Spin Ice, Phys. Rev. Lett. 123, 067204 (2019).
HDZhou_2008 H. D. Zhou, C. R. Wiebe, J. A. Janik, L. Balicas, Y. J. Yo, Y. Qiu, J. R. D. Copley, and J. S. Gardner, Dynamic Spin Ice: Pr_2Sn_2O_7, Phys. Rev. Lett. 101, 227204 (2008).
Machida_2010 Y. Machida, S. Nakatsuji, S. Onoda, T. Tayama, and T. Sakakibara, Time-reversal symmetry breaking and spontaneous Hall effect without magnetic dipole order, Nature 463, 210 (2010).
KKimura_2013 K. Kimura, S. Nakatsuji, J. Wen, C. Broholm, M. Stone, E. Nishibori, and H. Sawa, Quantum fluctuations in spin-ice-like Pr_2Zr_2O_7, Nat. Commun. 4, 1934 (2013).
Petit_2016 S. Petit, E. Lhotel, S. Guitteny, O. Florea, J. Robert, P. Bonville, I. Mirebeau, J. Ollivier, H. Mutka, E. Ressouche, C. Decorse, M. Ciomaga Hatnean, and G. Balakrishnan, Antiferroquadrupolar correlations in the quantum spin ice candidate Pr_2Zr_2O_7, Phys. Rev. B 94, 165153 (2016).
Sibille_2018 R. Sibille, N. Gauthier, H. Yan, M. Ciomaga Hatnean, J. Ollivier, B. Winn, U. Filges, G. Balakrishnan, M. Kenzelmann, N. Shannon, and T. Fennell, Experimental signatures of emergent quantum electrodynamics in Pr_2Hf_2O_7, Nat. Phys. 14, 711 (2018).
|
http://arxiv.org/abs/2409.03488v1 | 20240905125637 | Head-First Memory Allocation on Best-Fit with Space-Fitting | [
"Adam Noto Hakarsa"
] | cs.OS | [
"cs.OS",
"D.4.2"
] |
Harvard University
Cambridge
MA
USA
[email protected]
§ ABSTRACT
Although best-fit is known to be slow, it excels at optimizing memory space utilization. Interestingly, by keeping the free memory region at the top of the memory, the process of memory allocation and deallocation becomes approximately 34.86% faster while also keeping external fragmentation at a minimum.
Head-First Memory Allocation on Best-Fit with Space-Fitting
Adam Noto Hakarsa
September 9, 2024
===========================================================
§ INTRODUCTION
Memory management is a fundamental part of any robust operating system <cit.>. Architecturally, this is because the CPU must cooperate with the memory <cit.>, which is currently the fastest and cheapest storage medium next to the registers. Even if such architectural limitations were absent, memory management would still be essential because it is generally impractical for a program to accurately estimate its memory needs in advance, hence the need for dynamic memory allocation. Speed matters too, as programs can spend 30-60% of their execution time allocating dynamic memory <cit.>. The fact that main memory is limited in size further underscores the importance of memory management.
From an OS perspective, memory management typically involves a linked list to keep track of memory usage. Several dynamic memory management algorithms, simply referred to as allocators, operate on this large chunk of space; the most commonly reviewed among them are first-fit, next-fit, best-fit, and quick-fit.
The first-fit algorithm scans the memory from the beginning until it finds the first free segment large enough for the request <cit.>. The next-fit algorithm operates similarly, except it starts scanning from the point where it stopped last time <cit.>. In contrast, the best-fit algorithm searches the entire list to find the smallest free segment that meets the request <cit.>. Alternatively, the quick-fit algorithm maintains lists of memory segments of specific sizes and, upon receiving an allocation request, searches the list of segments closest in size to the requested one <cit.>.
The choice of an allocation algorithm is a compromise between efficient use of memory and low allocation overhead <cit.>. This is why first-fit and best-fit are popular, especially since it does not require computing statistical distributions or maintaining an extraneous data structure which requires an additional time and space.
§ BACKGROUND
Due to its simplicity, the first-fit and next-fit algorithms may result in memory waste through internal fragmentation, which occurs when the allocated block is larger than the requested size, leaving some space within the block unused. Consequently, best-fit or quick-fit algorithms are often preferred because they aim to allocate the smallest possible block. However, these algorithms still suffer from external fragmentation. This type of fragmentation can prevent the operating system from allocating memory even if sufficient free space exists. Techniques such as compaction, coalescing, segmentation, and paging attempt to address this issue. Despite this, best-fit is effective in optimizing the use of limited memory space. Therefore, we aim to explore a simple technique to expedite the best-fit algorithm.
§ ALGORITHM
Our allocator does not have a minimum allocation size, although blocks must always be located at addresses that are multiples of eight (double word) to ensure compatibility with systems such as Sun workstations <cit.>. Each allocated memory block includes a bookkeeping structure that records essential data. We have minimized the size of this bookkeeping structure to 16 bytes, storing only key information: whether the block is free, the block's owner process ID, the block's addressable space size, and a link to the previous block in the chain. This link is necessary because, although we can move forward using pointer arithmetic, we cannot move backward, since we do not know the size of the block to the left.
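A minimal sketch of this bookkeeping structure, modeled in Python for illustration, is shown below; the field and helper names are our assumptions, since the paper does not spell out the exact identifiers, and the toy Memory class stands in for the real heap.

```python
from dataclasses import dataclass
from typing import List, Optional

HEADER_SIZE = 16  # bytes of bookkeeping overhead per block (assumed)

@dataclass(eq=False)  # identity-based equality so list lookups find this block
class Block:
    """Per-block bookkeeping header; field names are illustrative."""
    free: bool                      # is the block currently unallocated?
    owner_pid: int                  # owning process ID (0 if free)
    size: int                       # addressable bytes, excluding the header
    prev: Optional["Block"] = None  # left-hand neighbor; needed since we can
                                    # only walk forward via pointer arithmetic

class Memory:
    """Toy stand-in for the heap: an ordered list of Block headers."""
    def __init__(self, total: int):
        self.blocks: List[Block] = [Block(True, 0, total - HEADER_SIZE)]

    def __iter__(self):
        return iter(self.blocks)

    def first(self) -> Block:
        return self.blocks[0]

    def insert_after(self, anchor: Block, new: Block) -> None:
        i = self.blocks.index(anchor)
        self.blocks.insert(i + 1, new)
        new.prev = anchor
        if i + 2 < len(self.blocks):
            self.blocks[i + 2].prev = new

def align8(n: int) -> int:
    """Round a request up to the next multiple of eight (double word)."""
    return (n + 7) & ~7
```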
It is important to note that the best-fit algorithm alone can lead to increased external fragmentation. To address this issue, we employ auxiliary space-fitting and coalescing functions, which we will discuss in detail later. The allocation process itself is managed by a dedicated allocation function. We have observed that a small change in its implementation can significantly speed up the memory allocation process, as we will demonstrate later.
§.§ Allocation
The process of assigning an area in memory to a program is called (storage) allocation <cit.>. Such a process may fail for reasons such as the lack of a free block that can accommodate the request.
The two algorithms are evidently very similar to each other, except that the head-first algorithm omits the call made at line 10 of Algorithm <ref>. Additionally, an alignment function (or macro) ensures that memory blocks are aligned on a double-word boundary.
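Building on the Block/Memory sketch above, one plausible reading of the two variants is sketched below: head-first serves requests directly from the free region kept at the top of memory and falls back to a full best-fit scan only when necessary. The carve helper is our own name for the combined mark-and-split step.

```python
def allocate(memory: Memory, pid: int, nbytes: int, head_first: bool = True):
    """Best-fit allocation sketch; returns the served Block or None."""
    need = align8(nbytes)
    head = memory.first()
    if head_first and head.free and head.size >= need:
        # Free space is kept at the top: serve immediately, no traversal.
        return carve(memory, head, pid, need)
    best = None
    for block in memory:                       # classic best-fit scan
        if block.free and block.size >= need:
            if best is None or block.size < best.size:
                best = block
    return carve(memory, best, pid, need) if best is not None else None

def carve(memory: Memory, block: Block, pid: int, need: int) -> Block:
    """Mark `block` as used by `pid`, splitting off any usable remainder."""
    if block.size - need > HEADER_SIZE:        # room for a usable new block
        rest = Block(True, 0, block.size - need - HEADER_SIZE)
        memory.insert_after(block, rest)
        block.size = need
    block.free, block.owner_pid = False, pid
    return block
```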
The split routine simply partitions a block into two smaller blocks, as long as the partition yields a usable memory block that can fit the initial request.
Whether or not a split occurs, we employ space-fitting to reduce external fragmentation. The space-fitting function calculates the extra, redundant bytes and then transfers them to an adjacent block, or carves out a new block where possible.
The space-fitting process operates as follows: after identifying a block that is significantly larger than required, any extra bytes are transferred to the right-hand block if it is free. If only the left-hand block is free, the extra bytes are transferred there. In the rare case where neither block is free, the block will divide itself as long as no resulting block has zero addressable space. If none of these options are viable, the block remains as-is.
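A sketch of this space-fitting step under the same toy model follows; which neighbor receives the surplus is simplified here, and growing the left buddy (which in real memory would entail relocating the current block's header) is abstracted away.

```python
def space_fit(memory: Memory, block: Block, need: int) -> None:
    """Redistribute the surplus bytes of a best-fit match (sketch)."""
    surplus = block.size - need
    if surplus <= 0:
        return
    i = memory.blocks.index(block)
    right = memory.blocks[i + 1] if i + 1 < len(memory.blocks) else None
    left = block.prev
    if right is not None and right.free:
        right.size += surplus              # donate surplus to the right buddy
        block.size = need
    elif left is not None and left.free:
        left.size += surplus               # otherwise to the left buddy
        block.size = need
    elif surplus > HEADER_SIZE:
        memory.insert_after(block, Block(True, 0, surplus - HEADER_SIZE))
        block.size = need                  # divide the block itself
    # else: nothing viable; the block remains as-is
```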
Lastly, a simple coalescing function attempts to merge free blocks from the bottom to the top. This process can produce a larger block by combining several free blocks. Without coalescing, a user might request memory that no single block can serve unless several blocks are stitched together.
§.§ Deallocation
The deallocation function, as demonstrated by Algorithm <ref>, returns a status indicating whether the block was freed, left un-freed because it was not allocated to begin with, or left un-freed because the block is owned by another process. It accepts a pointer to a region of memory previously allocated by the allocation function.
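Deallocation and coalescing can be sketched as follows; the status names mirror the three outcomes described above but are our own, and for simplicity the sketch takes the block handle returned by allocate rather than a raw user address.

```python
from enum import Enum, auto

class FreeStatus(Enum):
    FREED = auto()          # block successfully freed
    NOT_ALLOCATED = auto()  # region was never allocated to begin with
    NOT_OWNER = auto()      # block is owned by another process

def free(memory: Memory, pid: int, block: Optional[Block]) -> FreeStatus:
    """Release a block previously returned by allocate() (sketch)."""
    if block is None or block.free:
        return FreeStatus.NOT_ALLOCATED
    if block.owner_pid != pid:
        return FreeStatus.NOT_OWNER
    block.free, block.owner_pid = True, 0
    coalesce(memory)
    return FreeStatus.FREED

def coalesce(memory: Memory) -> None:
    """Merge adjacent free blocks bottom-to-top, dissolving headers."""
    for block in list(reversed(memory.blocks)):
        left = block.prev
        if block.free and left is not None and left.free:
            left.size += block.size + HEADER_SIZE   # header becomes space
            i = memory.blocks.index(block)
            if i + 1 < len(memory.blocks):
                memory.blocks[i + 1].prev = left
            memory.blocks.remove(block)
```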
§ SIMULATION
When the memory is initialized, its underlying linked list will be laid out in the following manner:
Positions are counted starting from zero. The user-visible address field gives the memory address accessible by the user; unlike the block's base address, it does not account for the bookkeeping struct, and thus refers to the addressable allocated memory that can be read, written, and freed. A link field identifies the memory block on the left-hand side in the chain, a status field indicates whether a block is currently reserved, and a size field reports the number of addressable bytes. When aggregating the sizes, the sum will be smaller than the total free memory space in the kernel-fresh state due to overhead from the bookkeeping structs created for each memory block.
It is easy to distinguish the head-first from the non-head-first allocation. In the head-first implementation, the unallocated region of the memory appears at the top, as evident from Table <ref>.
On a non head-first implementation, the unallocated region is at the bottom of the list, as evident from table <ref>.
If we want to allocate 8 bytes of memory using the best-fit strategy, we would scan the linked list to find the smallest block that can accommodate at least 8 bytes. In a non head-first approach, we would split the block located at position 48 to create the required allocation.
However, on a head-first implementation, we don't need to traverse the list. Since the unallocated memory is at the top, we can simply request a new block that immediately fits the request, as evident from table <ref>.
In both implementations, a block will be merged with its right-hand or left-hand buddy whenever possible to minimize external fragmentation. Therefore, according to Table <ref>, freeing the 32-byte block results in a larger block of size 128 bytes. The size is 128 bytes instead of 112 bytes because we only need one overhead struct per memory block; any redundant bookkeeping structs are dissolved into the addressable space.
§ BENCHMARK TEST
Our benchmark test suite executes rounds of memory allocation and deallocation requests, with each allocation not exceeding 1,024 bytes. Each request is handled by a separate thread to simulate multiprocessing scenarios. We randomize both the number of bytes to allocate and whether to allocate or deallocate at any given time. Consequently, each trial may result in a different state of the linked list, while the total CPU time remains quite consistent across trials. It is noteworthy that the numbers of allocation and deallocation requests are well balanced.
We record the results of executing the non head-first best-fit algorithm with space-fitting in Table <ref>. It illustrates the number of requests performed, the execution time, the percentage of successful memory allocations and deallocations, and the total external fragmentation in bytes. The entire memory is initialized to a size of 16 megabytes.
Table <ref> illustrates the experiment on head-first best-fit with space-fitting. In addition, it shows the percentage improvement in execution time over the experiment illustrated by Table <ref>.
Demonstrably, the same best-fit mechanism produces different results under different operation modes, namely head-first and non head-first. We observe a significant improvement in execution time with the head-first mechanism, while also maintaining, if not improving, algorithm effectiveness.
§ FUTURE WORKS
We compare head-first versus non head-first specifically for the best-fit algorithm. We can investigate whether similar benefits apply to other memory allocation algorithms such as first-fit, next-fit, worst-fit, as well as other algorithms like fast-fits <cit.> and half-fit <cit.>. Additionally, benchmarking on real-world examples, as demonstrated in <cit.>, can provide further insights and practical applicability.
§ CONCLUSION
We compared two best-fit implementations that are only slightly different from one another. Our benchmark has shown that operating in head-first mode, where the free unallocated region is kept near the head of the memory, speeds up best-fit operations.
|
http://arxiv.org/abs/2409.02856v1 | 20240904162925 | Building a Scalable, Effective, and Steerable Search and Ranking Platform | [
"Marjan Celikik",
"Jacek Wasilewski",
"Ana Peleteiro Ramallo",
"Alexey Kurennoy",
"Evgeny Labzin",
"Danilo Ascione",
"Tural Gurbanov",
"Géraud Le Falher",
"Andrii Dzhoha",
"Ian Harris"
] | cs.IR | [
"cs.IR",
"cs.LG"
] |
[email protected]
Zalando SE
Berlin
Germany
[email protected]
Zalando SE
Berlin
Germany
[email protected]
Zalando SE
Berlin
Germany
[email protected]
Zalando SE
Berlin
Germany
[email protected]
Zalando SE
Berlin
Germany
[email protected]
Zalando SE
Berlin
Germany
[email protected]
Zalando SE
Berlin
Germany
[email protected]
Zalando SE
Berlin
Germany
[email protected]
Zalando SE
Berlin
Germany
[email protected]
Zalando SE
Berlin
Germany
§ ABSTRACT
Modern e-commerce platforms offer vast product selections, making it difficult for customers to find items that they like and that are relevant to their current session intent. This is why it is key for e-commerce platforms to have near real-time, scalable, and adaptable personalized ranking and search systems. While numerous methods exist in the scientific literature for building such systems, many are unsuitable for large-scale industrial use due to complexity and performance limitations. Consequently, industrial ranking systems often resort to computationally efficient yet simplistic retrieval or candidate generation approaches, which overlook near real-time and heterogeneous customer signals, resulting in a less personalized and less relevant experience. Moreover, related customer experiences are served by completely different systems, which increases complexity and maintenance costs and leads to inconsistent experiences.
In this paper, we present a personalized, adaptable near real-time ranking platform that is reusable across various use cases, such as browsing and search, and that is able to cater to millions of items and customers under heavy load (thousands of requests per second). We employ transformer-based models through different ranking layers which can learn complex behavior patterns directly from customer action sequences while being able to incorporate temporal (e.g. in-session) and contextual information. We validate our system through a series of comprehensive offline and online real-world experiments at a large online e-commerce platform, and we demonstrate its superiority when compared to existing systems, both in terms of customer experience as well as in net revenue. Finally, we share the lessons learned from building a comprehensive, modern ranking platform for use in a large-scale e-commerce environment.
[500]Computing methodologies Neural networks
[500]Information systems Recommender systems
Building a Scalable, Effective, and Steerable Search and Ranking Platform
Ian Harris
September 9, 2024
=========================================================================
§ INTRODUCTION
With the vast choice of items available in e-commerce, finding relevant content has become increasingly challenging. This is why personalization is crucial in showcasing products that align with customers' preferences and session intent. Consequently, large e-commerce companies such as Zalando, one of Europe's largest online fashion e-commerce platforms, are heavily invested in the development of advanced ranking systems that can more effectively cater to customer needs and tastes.
In major e-commerce platforms, deploying larger and more powerful models poses challenges due to the complexities involved in handling high traffic loads in production as these systems must be capable of serving thousands of requests per second across millions of items and customers. Furthermore, browsing and searching the catalog (refer to <ref>) represent primary methods through which customers discover products, whether for immediate purchase or inspiration. However, distinct yet related customer experiences, such as search and browse functions, are often powered by entirely separate systems <cit.>. This separation increases modeling complexity, increases maintenance costs, and may result in inconsistent customer experiences.
The ability to provide a personalized, dynamic, scalable, and efficient ranking platform that can be employed across various experiences has become critical to driving customer engagement and business value in e-commerce. We achieve this by building a ranking platform grounded in four key design principles: 1) composability - our platform consists of multiple state-of-the-art ranking models and candidate generators working in orchestration; 2) scalability - ensured by vector-based indexing and scoring; 3) shared real-time serving infrastructure and 4) steerable ranking that can adapt to varying customer preferences and business objectives.
Our platform is able to support the integration of multiple models through vertical layering, and horizontal integration, blending the outputs of various models or other candidate sources. This enables scalability, independence, and ability to mix various content types to cater to specific use cases as well as building ranking ensembles by combining the outputs of multiple models. The platform's scalability is driven by employing a vector store in its candidate generation stage, facilitating efficient indexing, scoring, and retrieval which are crucial for managing a growing catalog of items and number of customers. To this end, we compute dense representations of customer behaviors, contexts, and item inputs in a common embedding space. Built on a foundation that allows for near real-time scoring and computation of customer and item representations, our platform dynamically adapts the ranking in all layers to recent customer changes (including candidate generation) which increases the probability of discovering relevant items <cit.>. Moreover, utilizing efficient transformer-based model architectures across all layers allows sharing of the serving infrastructure, which in turn reduces engineering complexity, increases re-usability, and helps avoid inconsistency between training and serving phases, which is a common issue in machine learning engineering systems <cit.>.
Many works both from industry and academia do not try to capture the entire customer journey and side information such as item metadata, customer profile, and contextual inputs <cit.>. However, it is known that deep learning recommender systems live up to their full potential only when numerous features of heterogeneous types are included <cit.>. We demonstrate that incorporating heterogeneous inputs that capture the full spectrum of the customer journey (customer behavior, content-based data, local and global contextual and temporal information) is crucial for ranking quality and even diversity. All of this helps provide contextually relevant results for both in-session browsing, where the customer is actively engaged in shopping, and cross-session scenarios, where the customer returns to the platform after a break with a potentially new shopping intent. To ensure a more streamlined and effective data integration process into the ranking models, contrary to common approaches <cit.> that employ additional architectures, our approach utilizes the same self-attention mechanism to efficiently fuse all input data types.
Many ranking systems in the literature rely on pre-trained items and customer embeddings. Our experiments reveal that similarly to NLP tasks <cit.>, the effectiveness of our models significantly increases when the pre-trained input item embeddings are further fine-tuned on the ranking task. Notably, we show that if these embeddings are not continuously trained, the candidate generation model shows substantially less customer engagement. To address the item cold-start problem, we introduce epsilon-greedy exploration by blending fresh items from additional candidate sources into the organic ranking. We address the customer cold-start by leveraging customer context and in-session capabilities and data.
The key contributions of our work are as follows:
* We present a comprehensive, flexible, scalable ranking platform able to provide near real-time inference in all ranking layers in high-load systems, building on state-of-the-art models and standard design patterns that can be applied in various search and ranking use cases;
* We propose novel modifications of existing state-of-the-art ranking model architectures allowing more efficiency without loss of quality. With this we demonstrate that sequence-based models can successfully replace traditional ranking systems in all ranking phases and significantly improve performance;
* We present extensive experimentation, both online and offline. We demonstrate that our proposed system not only significantly outperforms existing solutions by a wide margin (10-40% improvement in offline evaluation metrics, 15% combined engagement uplift, and +2.2% combined net revenue in 4 online A/B tests) but that it also scales effectively under heavy load.
It's crucial to note that although the experimental results presented are specific to the e-commerce sector, the methodologies, algorithms, and infrastructure discussed are designed for adaptability and can be extended to domains beyond e-commerce. The system has been deployed and operational for the last 12 months in one of the largest e-commerce platforms in Europe. It has successfully replaced numerous legacy systems and it is serving millions of customers per day and handling thousands of RPS.
The remainder of this paper is organized as follows: Related work is reviewed in <ref>. Details on the overall system architecture and design decisions are elaborated in <ref>. Sections <ref>, <ref> and <ref> describe the candidate generation, ranking and policy layers. The experimental results (both offline and online) are presented in <ref>. Finally, we present the conclusions in <ref>.
§ RELATED WORK
Thanks to their advantages over traditional deep-learning-based models, sequence-based recommender systems <cit.> in their two flavors of language modeling (CLM and MLM) <cit.> have gained wide traction. These systems have proven powerful in modeling customer behavior as a sequence of actions due to their capability to 1) capture both short-term and long-term interests <cit.> and 2) compute complex feature interactions. However, most existing works use public datasets that in certain cases are not even adequate for sequential recommendation tasks <cit.>, and many focus only on offline experiments, with only a few works reporting actual customer impact through end-to-end A/B testing in large-scale environments <cit.>. Our paper extends this line of work and demonstrates the usefulness of these models in real-world applications that include personalized item browsing and search. Moreover, to the best of our knowledge, this is the first work to apply transformer networks as part of the two-tower architecture for learning embeddings for ranking use cases that were originally introduced in the line of work of Google <cit.>. Unlike related approaches <cit.>, our approach to scoring candidate items does not suffer from performance issues caused by a long sequence length <cit.> or from limiting the input embedding size due to concatenating the candidate item embeddings with the action sequence embeddings. Unlike <cit.>, we do not observe a significant drop in ranking performance and feed only the average embedding of the candidates as a fixed input position into the transformer network.
Moreover, only a few published studies <cit.> include heterogeneous inputs such as context and content-based features, which help address the cold-start problem and address data sparsity by improving generalization. For example, <cit.> considers customer profiles and contextual features by using wide & deep learning (WDL), which relies on a concatenation of signals in the output of the network, making it inadequate to capture powerful feature interactions. <cit.> employs deep and cross-network (DCN) on top to explicitly model feature-crosses, which significantly increases the number of parameters of the network.
§ SYSTEM DESIGN
Designing a highly performant, scalable, and steerable ranking platform entails multiple challenges and complex choices. The existing literature often focuses only on subsets of them <cit.>, while this section aims to navigate through them holistically. We describe foundational design principles and provide an overview of our system architecture and components (see <ref>). The presented design is generalizable and applicable to other retrieval and ranking use cases and setups.
Composability and orchestration of multiple models. Our platform supports the integration of various models either "vertically" for layered ranking and retrieval, or "horizontally" by blending outputs from different models or candidate generators. It features a multi-layered architecture that enhances scalability and reduces inter-dependencies. The initial layer retrieves relevant candidates from multiple candidate generators, possibly generating different item or content types for use cases such as feeds. Each candidate generator typically entails a (lightweight) ranking model. Subsequent layers refine these selections; the ranking layer applies heavy personalized models for ranking pre-selected items of possibly different types. The policy layer ensures compliance with business or product specifications. Blending strategies mix outputs to suit specific needs, such as combining different content types in desired proportions (e.g. product, outfits, videos) or balancing popular, fresh, and personalized content <cit.>. Model blending is also the blueprint for building ranking ensembles, where outputs of multiple models are combined either by score weighting or meta rankers.
Scalable platform. The multi-layered model architecture also allows us to achieve high scalability. The more accurate, but computationally heavier, part of the ranking is performed on later layers only on a small subset of candidates obtained from the less accurate but more computationally efficient candidate generator layers. Thus, the highest scalability requirements are placed on the candidate generator. Scalability of the candidate generator layer is achieved through a vector store allowing efficient indexing, scoring, and personalized retrieval capabilities to scale for a large and growing item catalog and customer base. Our platform includes infrastructure to compute customer vectors based on past actions and context (such as country, browsing category or search query, etc.) and item vectors. The item vectors are indexed in an internally hosted vector store that supports efficient approximate nearest neighbors search and retrieval.
Shared near real-time serving infrastructure. Similarities in model types across layers allow for shared training datasets and much of the serving infrastructure, enhancing efficiency. Deep learning minimizes feature engineering complexity by integrating embedding mappings directly into the model graph, ensuring consistency between training and serving phases. Shared online feature stores across ranking layers enable effective input caching for each request, simplifying engineering efforts.
Steerable ranking. Our system's flexibility allows for external adjustments to ranking objectives to align with business goals like customer satisfaction or profitability. It also supports diverse content types through its candidate generators and mixing components and integrates business heuristics in the policy layer.
In the following sections, we present in more detail the main components of our multi-layered system, i.e., the candidate generator layer, the ranking layer, and the policy layer.
§ CANDIDATE GENERATION
The objective of our candidate generation layer is to generate personalized item candidates from the item catalog for each individual customer efficiently in near real-time. According to our findings, a personalized candidate generator is essential for the performance of the overall ranking system (see <ref> for details).
We follow the classical two-tower approach <cit.>, where the customer tower processes historical customer action sequences and contextual data to generate a customer embedding while the item tower is responsible for generating item embeddings. These embeddings are then combined by using dot product to generate a score per item as shown in <ref> (theoretical justification about the expressiveness of the two-tower model is provided in the Appendix, <ref>).
Although trained together, the towers are deployed and operated independently. The item tower generates item embeddings that are indexed in a vector store for efficient ranking and retrieval. The customer tower is invoked to generate customer embeddings each time a customer accesses the platform. The freshly computed customer embeddings are then used to find items with similar embeddings in the nearest-neighbors index.
We formulate the retrieval task as an extreme multi-class classification problem <cit.> with softmax optimization. Every item in the vocabulary represents a distinct class, and the goal is to accurately predict the class of the next item a customer will interact with. We employ sampled softmax loss with log-uniform sampling, with negative classes that correspond to 0.42% of the total number of classes. This loss outperformed other loss functions and negative sampling strategies. Specifically, we experimented with “generalized” binary cross entropy <cit.> and popularity sampling with varying numbers of negatives as well as sampling hard negatives from the category of items the customer was browsing before acting.
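As an illustration, this objective can be sketched with TensorFlow's built-in sampled softmax, whose default candidate sampler is log-uniform; variable names and the way the item-tower outputs are exposed as the softmax weights are our assumptions.

```python
import tensorflow as tf

def retrieval_loss(customer_emb, item_emb_table, item_biases, next_item_ids,
                   num_sampled, num_classes):
    """Sampled-softmax objective for next-item prediction (sketch).

    customer_emb:   [batch, dim] output of the customer tower
    item_emb_table: [num_classes, dim] item-tower embeddings (the classes)
    next_item_ids:  [batch, 1] id of the item the customer interacts with next
    num_sampled:    number of sampled negatives (~0.42% of num_classes here)
    """
    per_example = tf.nn.sampled_softmax_loss(
        weights=item_emb_table,
        biases=item_biases,
        labels=next_item_ids,
        inputs=customer_emb,
        num_sampled=num_sampled,
        num_classes=num_classes,  # log-uniform sampling is the default
    )
    return tf.reduce_mean(per_example)
```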
To compute a customer embedding from the customer action sequence and the context, we employ a transformer encoder and use causal language modeling (CLM) as in <cit.>, processing each customer sequence once per epoch. We predict the subsequent item in the sequence while preventing backward attention by using a causal mask.
For simplicity, items in the item tower in <ref> are represented by using a single embedding that jointly encodes product metadata (brand, category, material, etc.) and visual cues. It is worth mentioning that in this common setting, trainable input embeddings on the ranking task performed substantially better in both offline and online tests. Details on how different types of input signals from the customer journey data (e.g. contextual and customer action sequence data) are encoded in the customer tower of the model are captured in Section <ref>.
§ RANKING LAYER
The objective of the ranking layer is to rank items returned by the candidate generation phase by their relevance to the customer by using a powerful ranking model. We model this task as a pointwise multi-task prediction problem, where we predict the probability of the customer performing any of the following positive actions on a candidate item: click, add-to-wishlist, add-to-cart, purchase given the context and their past behavioral data. If a candidate item is associated with any of these positive actions, we consider it a positive item, otherwise, it is considered a negative.
§.§ Model Architecture
<ref> depicts the architecture of the model in the ranking layer. It consists of the following main parts: an embedding layer, the item candidate embedding, a customer and context embedding computed via a self-attention mechanism, a prediction head for each of the target action types, and a shallow position branch used for position debiasing.
For each of the target actions, we define a prediction head, which takes customer and candidate item representations as inputs. The score, for a given target action, is obtained by computing a dot product between the customer and context embedding and all candidate item embeddings in parallel, after passing them through a FFN. A sigmoid function is used to normalize the score and interpret it as a probability. During training, each prediction head contributes equally to the loss, while at serving we produce the final ranking by weighting the scores of each prediction head. The weights are dynamically configurable and determined analytically depending on the customer touch point.
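A serving-time sketch of this multi-head scoring follows, assuming PyTorch; whether the FFN is applied to the candidate side (as here) and the head names are our assumptions, and the per-head weights stand in for the dynamically configured, touchpoint-specific ones.

```python
import torch

def rank_candidates(user_ctx_emb, cand_item_embs, heads, head_weights):
    """Score candidates with one dot product per prediction head (sketch).

    user_ctx_emb:   [dim] customer-and-context embedding (last position)
    cand_item_embs: [num_cands, dim] candidate item embeddings
    heads:          dict action -> small FFN (torch.nn.Module)
    head_weights:   dict action -> float blending weight at serving time
    """
    final = torch.zeros(cand_item_embs.size(0))
    for action, ffn in heads.items():
        logits = ffn(cand_item_embs) @ user_ctx_emb   # [num_cands]
        final += head_weights[action] * torch.sigmoid(logits)
    return torch.argsort(final, descending=True)      # ranked candidate indices
```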
While models based on list-wise loss directly optimize the ranking objective, the downside is that the predictions from such models do not inherently correspond to probabilities. This lack of calibrated probabilities complicates the multi-objective optimization required for business steering. To this end, we adopt a pointwise loss in our multi-task learning setup. Our underlying assumption is that the tasks share a common internal representation, thereby improving generalization performance through the transfer of knowledge. We employ a cross-entropy loss function utilizing binary relevance labels defined as follows:
\mathcal{L} = -\frac{1}{N}\sum_{n=1}^{N}\sum_{h=1}^{H}\left( y_n^h \log f^h(x_n) + \left(1 - y_n^h\right)\log\left(1 - f^h(x_n)\right) \right),
where N is the number of training examples, H is the number of heads (tasks), x_n is the input for training example n, y_n^h is the target label {0, 1} for training example x_n for task h, and f^h(x_n) is the output probability of head h.
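For concreteness, the loss above can be sketched in PyTorch as follows, with one binary cross-entropy term per head contributing equally:

```python
import torch.nn.functional as F

def multitask_loss(head_probs, labels):
    """Eq. above: mean binary cross-entropy, summed over the H heads.

    head_probs: dict action -> tensor [N] of probabilities f^h(x_n)
    labels:     dict action -> tensor [N] of binary targets y_n^h
    """
    loss = 0.0
    for action, probs in head_probs.items():
        # F.binary_cross_entropy averages over the N examples by default.
        loss = loss + F.binary_cross_entropy(probs, labels[action].float())
    return loss
```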
The customer representation vector is generated by encoding the context and the customer action sequence and passing them through a standard transformer encoder with a look-ahead mask. For efficiency, we use only the output of the last position as the customer and context embedding and pass it to the prediction heads. The positional encoding is omitted, as it has not proven effective in this work or in others <cit.>.
Unlike <cit.>, we opt not to concatenate the candidate item embeddings as separate positions in the encoder's input, since we have found this to be a limiting factor for the model's scalability, both during training and serving. This is because the number of candidate items can typically be in the order of many hundreds to thousands for a single request. We instead include an average embedding of all candidate embeddings as a single position in the encoder. Both approaches performed similarly well in our setting in terms of ranking quality.
The training objective of our main candidate generator is based on predicting the next customer action; however, our data pipelines are configurable and can easily be extended to predict actions in a longer future time window, e.g., a week, to balance long- and short-term customer preferences and improve diversity in outputs.
§.§ Encoding of the Customer Journey Data
In this section, we elaborate on how we encode holistic customer journey data into the models in both layers. This consists of (1) behavioral data, which includes customer action sequences, item and action metadata, and temporal data, and (2) global and local contextual data. We argue and demonstrate in our experiments that providing complete and heterogeneous information to the model is crucial for predictions that are personalized and contextually relevant.
Behavioural data. As already mentioned, customer action sequences are encoded using transformer encoders, as these have been proven more effective than other approaches <cit.>. Each action in an action sequence is represented by an item embedding that encodes domain-specific visual information, together with categorical item metadata, action type, and timestamp embeddings. The categorical item metadata consists of relevant attributes such as brand, color, pattern, category, material, etc. The timestamp embedding encodes quantized timestamps as measured from the beginning of the model training. Temporal data is crucial for modeling customer behavior across sessions. Since customer intent can vary drastically across different sessions, modeling action sequences while ignoring this structure affects performance negatively <cit.>. All inputs are passed through trainable embedding layers that project discrete or bucketed values into low-dimensional spaces. Item- and action-specific embeddings are concatenated.
Contextual data. The contextual information is divided into global and local contexts. Global context includes information such as the customer's country and device type, while local context includes information about the touch point the customer has triggered an action from, for example, item category, search query, carousel type, and even the products shown on the page. To represent multiple items, we average their embeddings to produce a single “summary” embedding that is fed into its own position in the encoder. To fuse contextual and customer action sequence data, we employ the same attention mechanism, allocating the starting positions of the sequence to contextual features. This approach does not require additional networks such as deep and wide or deep and cross networks, which can substantially increase the number of parameters of the model. Instead, it makes use of the self-attention mechanism to compute complex interactions between the inputs, as every other position in the sequence can attend to the context independently. Local contextual information is concatenated with the representation corresponding to the previous action. We note that concatenation in many cases can be replaced by averaging to avoid large input dimensionality.
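For illustration, this input construction might be sketched as follows in PyTorch; the vocabulary sizes, dimensions (chosen so that d_model = 96 + 2 * 16 = 128), and the particular context features are illustrative assumptions.

```python
import torch
import torch.nn as nn

class InputEncoder(nn.Module):
    """Sketch: global context occupies the leading positions; each action
    position concatenates the item embedding with action-type and
    bucketed-timestamp embeddings."""

    def __init__(self, d_item=96, d_side=16):
        super().__init__()
        d_model = d_item + 2 * d_side
        self.action_type = nn.Embedding(8, d_side)    # click, wishlist, ...
        self.time_bucket = nn.Embedding(64, d_side)   # quantized timestamps
        self.country = nn.Embedding(32, d_model)      # global context
        self.device = nn.Embedding(4, d_model)

    def forward(self, item_embs, action_ids, time_ids, country_id, device_id):
        # item_embs: [seq_len, d_item]; ids: [seq_len] / 0-dim LongTensors
        actions = torch.cat(
            [item_embs, self.action_type(action_ids), self.time_bucket(time_ids)],
            dim=-1,
        )                                             # [seq_len, d_model]
        ctx = torch.stack([self.country(country_id), self.device(device_id)])
        # Context takes the starting positions; self-attention lets every
        # action position attend to it directly.
        return torch.cat([ctx, actions], dim=0)       # [2 + seq_len, d_model]
```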
§.§ Position Debiasing
In the context of ranking systems, feedback loops occur when a model influences customer interactions, which can lead to biased relevance data. Typically, items ranked higher by the model receive more customer attention, causing position bias. Position bias produces a skewed representation of actual customer preferences, which in turn may amplify over time and degrade model performance due to the feedback loop. To address this, we incorporate position information as a feature into the model to separate the effect of the position from the true probability with which the customer would interact with an item. A debiased model is conditioned on positions during training (right branch in <ref>) and made position-independent during serving by setting the positional feature to a fixed value to counter position bias <cit.>. The position branch is separate from the rest of the model due to the asymmetry between training and serving. We performed an additional A/B test, which confirmed that adding position debiasing increased long-tail utilisation by 5% without deteriorating engagement metrics.
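Assuming an additive combination of the relevance and position branches, the train/serve asymmetry can be sketched as:

```python
def debiased_logit(relevance_logit, position, position_branch, training,
                   fixed_position=0):
    """Shallow position branch is only informative at training time (sketch).

    `position_branch` is any callable mapping a position to a logit offset.
    At serving, the position input is frozen to `fixed_position`, so the
    positional term is constant across items and cannot affect the ranking.
    """
    pos = position if training else fixed_position
    return relevance_logit + position_branch(pos)
```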
§ POLICY LAYER
The last stage of the system, the policy layer, is responsible for the final page composition. Here, multiple re-ranked candidate lists are combined into one, performing granular, page-level optimization and applying heuristics, business rules, and filters depending on the use case. In the following, we describe how we promote fresh (new or cold-start) items while simultaneously introducing exploration into the system. We also describe some common heuristics applied in this layer to meet product requirements.
Exploration with New Items. To tackle the cold start problem, the policy layer incorporates fresh items into the organic ranking using exploration heuristics, beginning by sorting these items using content-based features. The blending of outputs from different candidate sources is managed through epsilon-greedy exploration, providing flexibility and ensuring a clear separation of tasks. This method adapts to various use cases by allowing for different exploration techniques and criteria for defining fresh items.
Epsilon-greedy exploration, a staple in reinforcement learning, uses a constant exploration factor that, despite some inefficiencies, functions well in practice and scales to complex scenarios <cit.>. Starting from position k, the policy layer introduces new items with a probability of ϵ and selects from the ranked list with a probability of 1-ϵ, based on a weighted random sampling method determined by the ranking layer. The parameters k and ϵ help balance the introduction of new items against potential disruptions to the user experience (refer to Algorithm <ref> in the Appendix for more details).
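A plausible sketch of this blending step follows (the actual procedure is given in the Algorithm referenced above); the weighted-sampling details and the handling of exhausted pools are our assumptions.

```python
import random

def blend_with_fresh(ranked, scores, fresh, k, epsilon):
    """Epsilon-greedy page composition (sketch).

    ranked:  organic items from the ranking layer, best first
    scores:  ranking-layer scores used as sampling weights
    fresh:   new items, pre-sorted by content-based features
    k:       first 0-based position at which exploration may start
    epsilon: probability of inserting a fresh item at each slot
    """
    page = list(ranked[:k])                     # top of the page stays organic
    pool, w = list(ranked[k:]), list(scores[k:])
    fresh = list(fresh)
    while pool or fresh:
        if fresh and (not pool or random.random() < epsilon):
            page.append(fresh.pop(0))           # explore: next fresh item
        else:
            i = random.choices(range(len(pool)), weights=w, k=1)[0]
            page.append(pool.pop(i))            # exploit: weighted sample
            w.pop(i)
    return page
```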
Business Heuristics. This section introduces straightforward heuristics addressing i) down-sorting previously purchased items and ii) avoiding a perceived lack of diversity. Items with diminishing returns, such as a winter coat purchased again soon after the initial buy, are down-ranked to enhance the customer experience. Instead of modeling the probability of repurchase for these items <cit.>, we apply a simpler rule: any item purchased within the last 2 months is down-ranked. Additionally, to prevent the impression of uniformity when many items from the same brand are shown together, we use a diversification heuristic: if a sequence a_{n}, …, a_{n+k} of k same-brand items appears, the first differing-brand item in the subsequent sequence a_{n+k+1}, …, a_{M} is relocated to position n+k.
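A sketch of this relocation heuristic is given below; we assume items expose a brand attribute, and the exact run-length indexing follows one reading of the text.

```python
def diversify_brands(items, k):
    """Break runs of k consecutive same-brand items by pulling forward
    the first later item of a different brand (sketch)."""
    items = list(items)
    run = 1
    for i in range(1, len(items)):
        run = run + 1 if items[i].brand == items[i - 1].brand else 1
        if run >= k:
            for j in range(i + 1, len(items)):
                if items[j].brand != items[i].brand:
                    items.insert(i + 1, items.pop(j))  # relocate to break run
                    break
    return items
```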
§ MODEL PRODUCTIONIZATION
Although the two-tower model is trained as a single entity, each tower is deployed as a separate endpoint. The item tower is triggered when a new model is trained, a new item is added, or an attribute is modified. The newly generated embedding is then transmitted through a Kafka-based intake stream and indexed in Elasticsearch.
One of the most challenging aspects of our infrastructure was updating the embeddings after training a new two-tower model, as it required maintaining consistency between the embedding versions and the tower model versions. When a new model is released, a complete refeed is necessary to index the updated product embeddings. During this refeed, the system continues to operate using the previous versions of the models and product embeddings. The transition between versions is managed through a blue-green deployment strategy.
For scoring items in the candidate generation phase, we utilize Elasticsearch's vector search function, which employs an efficient approximate k-NN search. We also incorporate real-time customer action ingestion, which processes the resulting sequences to compute customer embeddings and feed the ranking layer. A caching layer positioned between these two processes stores the customer sequences, which can be uniformly fed to both models thanks to the uniformity of architecture. This ensures high throughput and maintains low latency during inference.
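As an illustration, a retrieval call in this setup might look as follows, assuming an Elasticsearch 8.x cluster and its Python client; the endpoint, index name, field names, and the category filter are placeholders, not the production configuration.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder endpoint

def retrieve_candidates(customer_vector, category, k=500):
    """Approximate k-NN retrieval of item candidates (sketch)."""
    resp = es.search(
        index="items",
        knn={
            "field": "embedding",             # indexed item-tower vectors
            "query_vector": customer_vector,  # freshly computed customer vector
            "k": k,
            "num_candidates": 4 * k,
            "filter": {"term": {"category": category}},
        },
        source=["item_id"],
    )
    return [hit["_source"]["item_id"] for hit in resp["hits"]["hits"]]
```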
Model serving is performed on standard CPU-based instances hosted on AWS SageMaker, while training is conducted on multiple GPU instances. The p99 model response latency for each layer is maintained at 10ms.
§ EXPERIMENTS
In this section, we present the results of extensive online (A/B testing) and offline experimentation and ablation studies using internal datasets. We compare the newly introduced ranking system against the existing baseline models.
§.§ Offline Experiments
§.§.§ Dataset
The offline dataset consists of a sample of item interactions from the last 60 days aggregated by customer id. The item interactions consist of product clicks, add-to-wishlist, add-to-cart, and checkout events attributed to the browse and search premise by using the "last-touch" attribution model. The interactions are joined with the corresponding item ids, their timestamps, and interaction type and sorted by timestamp. To form a single data sample, the resulting sequences are combined with contextual data, specifically, market, device type, browsing category, and search query (if present). The training dataset consisted of 71M unique customers across 25 markets. We do not perform any preprocessing such as deduplication or outlier removal on the obtained customer sequences besides truncating to the last 100 actions. The average sequence length is 24 actions. The evaluation dataset contains 300K customers (<ref> in the Appendix provides histograms per action type). We applied a hard temporal split to create the training and test datasets to ensure no data leakage.
§.§.§ Metrics and evaluation protocol
The main metrics that our models are evaluated on are the following:
* Recall@k: proportion of all relevant items within top k items (as defined in <cit.>).
* NDCG@k: measures the effectiveness of a ranking, taking into account the position of relevant items in the ranked list of top-k items; attributed items are considered relevant (as defined in <cit.>; a sketch of both metrics follows this list).
* Diversity: we use the maximum run of consecutive items from the same brand as a proxy for brand diversity. A high value suggests that a ranking can be dominated by items from only a few brands; user acceptance testing showed this leads to an undesirable customer experience.
* Novelty: we use recall of new items as a proxy for novelty. This metric captures the ability of a ranker to promote new items and address the item cold-start problem.
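A minimal sketch of the two main ranking metrics, assuming binary relevance as described above:

```python
import math

def recall_at_k(ranked, relevant, k):
    """Fraction of all relevant items that appear in the top k."""
    return len(set(ranked[:k]) & relevant) / max(len(relevant), 1)

def ndcg_at_k(ranked, relevant, k):
    """Binary-relevance NDCG: position-discounted gain over the ideal."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked[:k]) if item in relevant)
    ideal = sum(1.0 / math.log2(i + 2)
                for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal > 0 else 0.0
```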
Our evaluation protocol closely mirrors a real-world production environment by employing a strict separation of training and test data based on time. Customer sequences within the dataset are chronologically ordered. The models undergo evaluation exclusively on the test dataset (with time-based split), which comprises “ground-truth” pages of items that users have either viewed or interacted with following search or browse requests. Importantly, models are provided data only up until the timestamp of each request, with a particular emphasis on adhering to data caching periods—during these times, models operate solely on cached data.
For each item category or search query contained in the requests, the candidate generation models score and rank all corresponding items in the catalog. The ranking models re-rank the 500 highest-scoring items coming from the candidate generation layer. We then calculate the offline metrics based on these ranked lists against the above "ground-truth" pages observed in the test data.
The metrics are calculated for each ranking produced for a single test example, and averaged over the test dataset. All reported results are statistically significant (p-value < 0.05) unless stated otherwise. We used a t-test for significance testing.
§.§.§ Candidate Generation
We compare the following methods in the candidate generation layer:
* GBT is a candidate generation model based on Gradient Boosting Trees, which has proven to offer competitive performance compared to neural-based models <cit.>. It ranks items based on their static attributes (season, material, type, etc.) and their dynamic historical engagement rates (e.g. add-to-cart rate for the last 5 min). The LambdaRank objective <cit.> is used during training to up-rank interactions based on their graded relevance ("purchase" has the highest while "click" the lowest relevance). The model's scores are computed in streaming fashion making it highly reactive to trends in customer behavior. This model was our previous production model.
* RCG is our new candidate generation model introduced in <ref>. We test a few versions of this model: RCG_ntr which uses pre-trained visual embeddings of items and includes a bias term that captures item popularity; RCG_tr which employs trainable item embeddings, initialized from the pre-trained visual item embeddings. These two models utilize only global contextual information, such as country and device type. Additionally, RCG_tr+ctx denotes a variant with additional local contextual input — the current user's browsing category and a binary flag whether they browse or search. All the model versions consist of 2 encoder layers with 4 heads, gelu activation, and a max. sequence length of 100. The model was trained for 20 epochs by using the Adam optimizer, with a learning rate set to 0.001. We do not consider hard negatives and rather employ log uniform candidate sampling.
<ref> presents an offline ablation study comparing our RCG model with the baseline GBT used as a candidate generator on the browse use case (<ref> in the Appendix summarizes offline evaluation on the search use case). Across all evaluated customer segments, our newly proposed candidate generator significantly outperforms the existing one.
The model variant incorporating trainable item embeddings achieves markedly improved performance in offline metrics. However, it is important to note that trainable item embeddings exacerbate the item cold-start problem (to mitigate this issue, we introduce exploration strategies for new items in <ref>).
Incorporating local contextual inputs further boosts the performance of the RCG_tr model. Specifically, by including item category and search query presence, the RCG_tr+ctx model's Recall@500 is enhanced by an additional 10%, and its NDCG by 14%. In Section <ref> we will show that these gains are also reflected in our online experiments.
§.§.§ Ranking Layer
We compare the following methods in the ranking layer:
* WDL-ATT is a wide & deep neural network with an attention mechanism between the customer action sequences and ranking candidates. It scores the candidate items by applying a dot product between context, user, and item embeddings that are all trainable and have 128 dimensions. It employs a loss function directly optimizing for the NDCG metric <cit.>. Items are represented by using pre-trained embeddings that encode brand, category, pattern, and other visual cues. It is trained for 2 epochs by using the Adam optimizer with a learning rate of 0.0002.
* RL is our new ranking model introduced in <ref>. Both d_model and the model output size are set to 128; the model consists of 2 encoder layers with 8 heads, relu activation, and a max. sequence length of 80. The model is trained for 2 epochs using the Adam optimizer, with a learning rate of 0.001. The ratio of negatives vs. positives is set to 4. Since sampling negatives only from items that were in the view-port led to a degradation of performance, we sampled from all non-interacted items on the page.
* BST is the Behavior Sequence Transformer introduced in <cit.>. We use the same inputs and hyperparameters as in RL.
It should be noted that all compared algorithms use the same near real-time serving infrastructure. Some of the other algorithms mentioned, such as TransAct <cit.>, while potentially competitive, were not applicable to our use case or latency constraints. For simplicity, we used a single ranking model in the ranking layer. In these experiments, we focus on the NDCG metric for "high-value actions" or HVAs, which, in our context, are add-to-wishlist and add-to-cart actions. This metric acts as a proxy for our success KPI, defined by customer engagement with respect to HVAs, as detailed in the subsequent section.
The offline evaluation results are summarized in <ref>, comparing RL and BST against the existing WDL-ATT which proved to be a strong baseline. The NDCG metric indicates that the new model effectively prioritizes relevant items higher up in the rankings, both at the top of the list (k=6) and across the entire first page (k=84). Furthermore, RL favored the promotion of new items while enhancing diversity. In terms of relevance, BST lagged behind both algorithms, although it performed the best when it comes to diversity.
<ref> presents an ablation study that explains the contribution of each component and input type to the model's performance. The removal of any of these elements significantly detracts from overall model efficacy. Particularly noticeable is the performance decline when contextual inputs are not integrated early in the encoder. This finding suggests that our model more effectively leverages contextual information alongside rich item representations in customer action sequences compared to other algorithms such as BST. We opted not to benchmark against the DCN method cited in <cit.>, as self-attention offers a superior mechanism for capturing feature interactions within sequential data compared to feature crosses. Additionally, the ablation study demonstrates that omitting heterogeneous inputs markedly diminishes the model's performance. Specifically, excluding contextual inputs results in more than a 10% decrease in NDCG@6, while complete removal of item metadata leads to a drastic 26% reduction in NDCG@6.
§.§ Online Experiments
We have conducted several online A/B tests on real-world ranking use cases at Zalando. These tests were carried out systematically, replacing one component at a time to evaluate its impact. All tests allocated equal traffic splits among variants over a few weeks, as necessary, to achieve the minimum detectable effect for the success KPI with a p-value < 0.05. Each model was retrained and deployed daily. Beyond customer engagement, our evaluation of online performance encompasses a variety of exploratory metrics, including financial metrics, the capacity to promote new items (novelty), and ranking diversity (as defined in <ref>, as well as in the number of distinct brands on the first page). <ref> summarizes the results from a series of A/B tests performed on the browse use case, where a customer is browsing the category tree as described in <ref>.
In the first A/B test in <ref> (row 1), we compared the new candidate generation model (RCG) against the previous ranking system end-to-end (GBT + WDL-ATT). We examined two variants: firstly, the RCG model with trainable item embeddings (RCG_tr) and, secondly, the RCG model with non-trainable item embeddings (RCG_ntr) paired with WDL-ATT as the ranker. The former variant demonstrated a significant increase in engagement. However, the new candidate generation model had no significant impact on financial KPIs, promoted fewer new items, and decreased diversity.
In the second A/B test in <ref> (row 2), we assessed the effect of adding our new ranking layer (RL). The baseline for this experiment was the winning variant from the first test, RCG_tr. We explored two variants: the first employed the previous ranking algorithm WDL-ATT, and the second introduced our new ranking algorithm RL along with the policy layer. The outcomes highlight the advantages of applying our powerful ranking model on candidates from the candidate generation layer. The second variant significantly improved all monitored KPIs across all customer segments, including net revenue per customer, novelty, diversity and retention.
In the third A/B test in <ref> (row 3), we tested an improved version of the candidate generation model aka RCG_tr+ctx, which includes local contextual data. This resulted in a significant uplift in all monitored KPIs, including brand, and categorical diversity (omitted due to space limitations). This A/B test demonstrates the importance of including data that captures the entire customer journey to provide a more contextually relevant ranking.
<ref> summarizes the results of a series of A/B tests performed on the search use case, i.e., when the customer is using full-text search to find their desired item. The conclusions are similar and, for brevity, we omit the details.
§ CONCLUSION
In this paper, we have introduced a flexible, scalable, steerable, and real-time ranking platform that has been proven to enhance customer experience by delivering more relevant and personalized content. This approach has, in turn, led to improvements in various customer-centric and business metrics. We have described the architecture of our ranking platform, adhering to a set of design principles and utilizing state-of-the-art models. We have also offered insights into their performance, highlighting the considerable advantages of integrating heterogeneous signals and inputs that encompass the entire customer journey, as well as the effectiveness of fine-tuning input embeddings to boost model performance.
Our offline and online evaluations clearly demonstrate that our proposed system not only significantly outperforms existing solutions by a wide margin (10-40% improvement in offline evaluation metrics and a 15% combined engagement and +2.2% revenue uplift in 4 online A/B tests) but also excels in real-life use cases and scales effectively under heavy load, which is a crucial requirement for large e-commerce platforms. Furthermore, we illustrate that the enhanced experience benefits both returning and new customers. Lastly, we provide valuable insights and practical guidance for application by other applied scientists and practitioners within the domain.
§ APPENDIX
§.§ Exploration with New Items
To promote new items that may suffer from the cold start problem, we follow Algorithm <ref> to combine organic ranking results (S), i.e., candidates retrieved by e.g. RCG, with new items (N). The algorithm is controlled by the parameters k and ϵ.
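Since Algorithm <ref> is referenced but not reproduced here, the following minimal Python sketch shows one plausible ϵ-greedy interleaving of this kind; the function name and the specific policy (an insertion opportunity every k organic positions, taken with probability ϵ) are our assumptions for illustration, not the deployed algorithm.

```python
import random

def blend_with_new_items(organic, new_items, k=3, eps=0.1, seed=None):
    """Hypothetical epsilon-greedy blend of organic results S with new
    items N: every k-th slot offers a chance (probability eps) to insert
    the next not-yet-shown new item, for cold-start exploration."""
    rng = random.Random(seed)
    blended, fresh = [], list(new_items)
    for pos, item in enumerate(organic, start=1):
        blended.append(item)
        if fresh and pos % k == 0 and rng.random() < eps:
            blended.append(fresh.pop(0))
    return blended

# blend_with_new_items(["s1", "s2", "s3", "s4"], ["n1"], k=2, eps=1.0)
# returns ["s1", "s2", "n1", "s3", "s4"]
```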
§.§ Data distribution
Our datasets are based on actions that customers perform against items on the platform. In <ref> we show the distribution of those actions, computed from raw customer data, before any dataset-specific preprocessing, e.g. trimming. Clicks have the biggest volume, then add-to-wishlist, add-to-cart, and purchases.
§.§ Candidate Generation Results on the Search Use Case
<ref> presents offline evaluation results of the RCG model on the search use case only. The same improvements seen before on the browse traffic apply to the search traffic.
§.§ Expressive Power of Two-Tower Models
The two major recommendation models used in our ranking platform have a two-tower architecture in which one tower embeds the customer and the other one embeds the fashion article that is being scored (see Section <ref> and Section <ref>). Mathematically, the score function f^model corresponding to a model of this type can be written as
f^model(x) = ⟨φ(c), ψ(a) ⟩,
where x = (c, a) is the model input with c and a being the customer and the article parts of the input, respectively.
In this section, we study the expressive power of this model class. Specifically, we prove that any continuous target function f (defined on a bounded feature space) can be approximated by a score function of the form (<ref>) provided the embedding size is large enough.
Let the range of customer and article features be bounded: C^ℓ_i ≤ c_i ≤ C^u_i, i=1, …, k_c, and A^ℓ_j ≤ a_j ≤ A^u_j, j=1, …, k_a, and let the target function f be continuous on the feature domain
D = [C^ℓ_1, C^u_1]×…× [C^ℓ_k_c, C^u_k_c]× [A^ℓ_1, A^u_1]×…× [A^ℓ_k_a, A^u_k_a].
Then for any ε > 0, there exist n>0 and transformations φ: R^k_c → R^n and ψ: R^k_a → R^n such that
max_(c, a)∈D|f(c, a) - ⟨φ(c), ψ(a) ⟩| < ε.
Without loss of generality, let us assume that D is a unit cube, i.e. C^ℓ_i = 0, C^u_i = 1 for all i=1, …, k_c and A^ℓ_j = 0, A^u_j = 1 for all j=1, …, k_a.
Consider the set of all multivariate polynomial functions on D. Note that it (a) contains constant functions, (b) is closed under the operations of addition and multiplication, and (c) separates points: for any u, v∈D, u≠ v, there exists a polynomial P such that P(u)≠ P(v). Then by applying the Stone-Weierstrass theorem, we conclude that for any ε > 0, there exists a polynomial P_ε,
P_ε = ∑_m=1^n α_m ∏_i=1^k_c c_i^p_m,i ∏_j=1^k_a a_j^q_m,j,
such that
max_(c, a)∈D|f(c, a) - P_ε(c, a)| < ε.
By defining
φ(c) = (α_1 ∏_i=1^k_c c_i^p_1,i, …, α_n ∏_i=1^k_c c_i^p_n,i),
ψ(a) = (∏_j=1^k_a a_j^q_1,j, …, ∏_j=1^k_a a_j^q_n,j),
we can rewrite (<ref>) as
P_ε(c, a) = ⟨φ(c), ψ(a) ⟩, (a, c)∈D.
Then (<ref>) implies that the constructed transformations φ and ψ satisfy (<ref>).
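To make the construction in the proof concrete, here is a minimal Python sketch (our illustration, using a hypothetical two-term polynomial target, not a model from this paper) that factors a score function into two towers whose inner product reproduces it exactly:

```python
import numpy as np

# Hypothetical polynomial target over customer features c and article
# features a: f(c, a) = c1*a1 + 2*c1^2*a2.
def f(c, a):
    return c[0] * a[0] + 2 * c[0] ** 2 * a[1]

# As in the proof, phi collects the customer monomials (with the
# coefficients alpha_m) and psi the matching article monomials.
def phi(c):
    return np.array([c[0], 2 * c[0] ** 2])

def psi(a):
    return np.array([a[0], a[1]])

c, a = np.array([0.3, 0.7]), np.array([0.5, 0.2])
assert np.isclose(f(c, a), phi(c) @ psi(a))  # exact, since f is polynomial
```

For non-polynomial continuous targets the factorization is only ε-approximate, with the required embedding size n growing with the degree of the approximating polynomial.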
Department of Computer Science, Aberystwyth University, Wales
[email protected] Department of Computer Science, Stellenbosch University, South Africa Univ Rouen Normandie, INSA Rouen Normandie, Université Le Havre Normandie, Normandie Univ, LITIS UR 4108, F-76000 Rouen, France
Department of Computing and Software, McMaster University, Canada
{pophlin,smyth}@mcmaster.ca
V-Words, Lyndon Words and Galois Words
A preliminary version of this paper was presented at COCOA 2023. The main new contributions in this extended version are
Section <ref> and
Section <ref>.
Jacqueline W. Daykin^1,2,3, Neerja Mhaskar^4 (corresponding author), W. F. Smyth^4
September 9, 2024
===========================================================================================================================================================================================================
§ ABSTRACT
We say that a family 𝒲 of strings over Σ^+ forms a Unique Maximal Factorization Family (UMFF) if and only if every w∈𝒲 has a unique maximal factorization. Further, an UMFF 𝒲 is called a circ-UMFF whenever it contains exactly one rotation of every primitive string x∈Σ^+.
V-order is a non-lexicographical total ordering on strings that determines a circ-UMFF. In this paper we propose a generalization of circ-UMFF called the substring circ-UMFF and extend combinatorial research on V-order by investigating connections to Lyndon words. Then we extend these concepts to any total order. Applications of this research arise in efficient text indexing, compression, and search problems.
§ INTRODUCTION
V-order (Definition <ref>) is a non-lexicographic global order on strings that was introduced more than a quarter-century ago <cit.>. Similar to conventional lexicographical order (lexorder), V-order string comparison can be performed using a simple linear time, constant space algorithm <cit.>, further improved in <cit.>. Much theoretical research has been done on this ordering <cit.>, including efficient construction of the so-called V-BWT or V-transform <cit.>, a variant of the lexicographic Burrows-Wheeler transform (BWT).
In this paper, we further extend combinatorial research on V-order and circ-UMFFs. We first show that there are infinitely more V-words (Definition <ref>) than Lyndon words (Definition <ref>). Then we study instances of circ-UMFFs having similar properties to V-words and/or Lyndon words. Finally, we propose a generalization of the circ-UMFF (Definition <ref>) called the substring circ-UMFF (Definition <ref>) and show that for a generalized order 𝒯, with order relation ≪, classes of border-free words exist that form
circ-UMFFs and substring circ-UMFFs, respectively.
A useful tool in this work is a generalization of lexorder, which is based on letter comparisons, to lex-extension order, based on substring comparisons, first introduced in <cit.>. This in turn allows generalizing Lyndon words to Hybrid Lyndon words <cit.>, which we apply here.
Using the framework of UMFFs, we explore new properties of Galois words. These words are defined analogously to Lyndon words except that, rather than lexorder, alternating lexicographic order (alternating lexorder) is applied. Starting with the relation <, the alternating variant processes letter comparisons from left to right alternating between the relations < and >. While Lyndon words are necessarily border-free, the Galois variant defines both border-free and bordered words. Galois words are studied combinatorially in <cit.> and subsequently from an algorithmic perspective including practical applications in <cit.>.
All words in a circ-UMFF are border-free; thus in the general case Galois words do not form a circ-UMFF; indeed, examples show non-unique factorizations of strings into Galois words. Nevertheless, we show that the subset of border-free Galois words forms an UMFF. Further, we characterize the structure of binary border-free Galois words in terms of a Hybrid Lyndon factorization. We derive a Galois equivalent of the fundamental Lyndon result that the ordered concatenation of Lyndon words forms a Lyndon word — the Galois version requires the concatenated word to be primitive.
In <cit.> it is established that any binary border-free UMFF can be enlarged to a binary circ-UMFF. In view of this result, we propose here a constructive method in the finite case for generating a binary circ-UMFF from a border-free binary UMFF. We illustrate the algorithm by constructing an example of a circ-UMFF which consists of strings based on more than one method of string ordering — a discovery we believe worthy of further investigation.
§ PRELIMINARIES
A string (or word) is an array of elements drawn from a finite totally ordered set Σ of cardinality σ = |Σ|, called the alphabet. The elements of Σ are referred to as characters (letters). We refer to strings using mathbold: 𝐱, 𝐰 instead of x, w. The length of a string w[1..n] is |w|=n. The empty string
of length zero is denoted by ε. The set of all nonempty strings over the alphabet Σ is denoted by Σ^+, with Σ^* = Σ^+ ∪ε.
If x = uwv for (possibly empty) strings u,w,v∈Σ^∗,
then u is a prefix, w a substring or factor,
and v a suffix of x. A substring u of w is said to be proper if |u| < |w|.
A string w has a border u if u is both a proper prefix and a proper suffix of w.
If w has only the empty border ε, then it is said to be border-free.
For x = x[1..n]
and an integer sequence 0 < i_1 < i_2 < ⋯ < i_k ≤ n,
the string y = x[i_1]x[i_2] ⋯x[i_k] is said to be a
subsequence of x, proper if |y| < n.
If x = u^k (a concatenation of k copies of u)
for some nonempty string u and some integer k > 1,
then x is said to be a repetition;
otherwise, x is primitive. We say x has period p if and only if for every i ∈ 1..n-p,
x[i] = x[i+p]; the shortest period of x is called the period.
A string y=R_i(x) is the i^th conjugate (or rotation) of x=x[1..n] if
y = x[i+1..n]x[1..i] for some 0 ≤ i < n (so that R_0(x) =x).
The conjugacy class of x is the set R_i(x), 0 ≤ i < n, of all conjugates.
In our examples, we often suppose that Σ = {a,b,c,…,z}, the Roman alphabet in its natural order, or Σ = {1,2,3,…,k}, the bounded natural numbers. The ordering of Σ imposes lexicographic order (lexorder) on Σ^+.
However, in the case that x = u^k, a repetition, lexorder does not provide a unique ordering of the rotations of x: rotations R_k, R_2k,…, R_|x| are all equal.
To avoid this, we can append to each x a unique least symbol $, so that all the rotations of x$ are distinct and thus ordered, while the ordering of any pair x$, y$ is unaffected.
The rotations of x, sorted in ascending lexicographic order, form the Burrows-Wheeler matrix, so that, for
x = abab$, we get the unique ordering
$abab
ab$ab
abab$
b$aba
bab$a
of which the last column, bb$aa, is the Burrows-Wheeler transform (BWT) of x, from which the original string can be recovered in linear time <cit.>. Observe the transformation of the input in the example, abab$ to b^2 $ a^2, and this data clustering property is often exhibited with the BWT, and hence its use as a preprocessor or booster for the performance of memoryless compressors. While the BWT was originally introduced in the context of lossless text compression, subsequent applications span image compression, shape analysis in computer vision, efficient self-indexed compressed data structures, and its pattern matching properties are invaluable in bioinformatics with its highly repetitive data for tasks such as sequence alignment <cit.>.
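For concreteness, a minimal Python sketch of this construction (ours; quadratic-time and purely illustrative, whereas practical implementations recover the transform via suffix sorting):

```python
def bwt(s, sentinel="$"):
    """Sort all rotations of s + sentinel (the Burrows-Wheeler matrix)
    and read off the last column; the sentinel sorts before all letters."""
    t = s + sentinel
    matrix = sorted(t[i:] + t[:i] for i in range(len(t)))
    return "".join(row[-1] for row in matrix)

assert bwt("abab") == "bb$aa"   # the example above
```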
We now define another fundamental concept in this paper, the Lyndon word, which has deep connections with the theory of free Lie algebras and combinatorics on words:
A Lyndon word <cit.> is a primitive string
that is minimum in lexorder < over its conjugacy class.
The following Lyndon factorization (LF) theorem is fundamental in stringology
and underpins the wide-ranging applications of Lyndon words,
which include specialized string sorting tasks, digital geometry, musicology, the Burrows-Wheeler transform and data compression techniques:
<cit.> Any nonempty string x can be written uniquely as a product
LF(x) = x = u_1 u_2 ⋯u_k of k ≥ 1 Lyndon words,
with u_1 ≥u_2 ≥⋯≥u_k.
For further stringological definitions and theory, see <cit.>.
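Theorem <ref> is constructive: Duval's algorithm computes the factorization on-line in linear time. The following Python sketch is our transcription of that standard algorithm, checked against the string 33132421 whose Lyndon decomposition appears in an example below:

```python
def lyndon_factorization(s):
    """Duval's on-line, linear-time algorithm: return the Lyndon
    factors u_1 >= u_2 >= ... >= u_k of s."""
    factors, i, n = [], 0, len(s)
    while i < n:
        j, k = i + 1, i
        while j < n and s[k] <= s[j]:
            k = i if s[k] < s[j] else k + 1
            j += 1
        while i <= k:              # emit the factor (possibly repeated)
            factors.append(s[i:i + j - k])
            i += j - k
    return factors

assert lyndon_factorization("33132421") == ["3", "3", "13242", "1"]
```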
§ V-ORDER
In this section we start by defining V-order and describing some of its important properties used later in the paper.
Let x=x_1x_2⋯ x_n be a
string over Σ. Define h ∈{1,…,n} by h = 1 if x_1 ≤ x_2 ≤⋯≤ x_n; otherwise, by the unique value such that x_h-1>x_h ≤ x_h+1≤ x_h+2≤⋯≤ x_n. Let x^*=x_1x_2⋯
x_h-1x_h+1⋯ x_n, where
the star * indicates deletion of x_h. Write x^s* =
(...(x^*)^*...)^* with s ≥ 0 stars.
Let g = max{x_1,x_2, … ,x_n},
and let k be the number of occurrences of
g in x.
Then the sequence x,x^*,x^2*, ...
ends g^k,...,g^1,g^0=ε.
From all strings x over Σ we form the star tree (see Example <ref>), where each string
x labels a vertex and there is a directed edge upward from x
to x^*, with the empty string ε as the root.
We define V-order ≺ for distinct strings x, y.
First x≺y if in the star tree x is in the path
y,y^*,y^2*, … ,ε. If x,y are not in a path, there exist
smallest s,t such that x^(s+1)*=y^(t+1)*. Let s=x^s* and
t=y^t*; then s≠t but |s| = |t| = m say.
Let j ∈ [1..m] be the greatest integer such that s[j] ≠ t[j].
If s[j]<t[j] in Σ then x≺y;
otherwise, y≺x.
Clearly ≺ is a total
order on all strings in Σ^∗.
See the star tree path and star tree examples in
Figures <ref> and <ref>, respectively.
[Star tree path] Figure <ref> illustrates the star tree for the case x≺y if in the star tree x is in the path
y,y^*,y^2*, … ,ε. Consider the
V-order comparison of the strings x = 929 and y = 922911. The subscript h indicates the V letter to be deleted (defined above as x_h-1>x_h ≤ x_h+1≤ x_h+2≤⋯≤ x_n). Since 929 is in the path of star deletions of 922911, therefore 929 ≺ 922911.
[Star tree] Figure <ref> illustrates the star tree for the non-path case using the V-order comparison of the words x = unique and y = equitant. As in the previous example, the subscript h indicates the V letter to be deleted (defined above as x_h-1>x_h ≤ x_h+1≤ x_h+2≤⋯≤ x_n). The circled letters are those compared in alphabetic order (defined above as s[j] ≠ t[j]).
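The definition can be executed directly. The following Python sketch (ours) performs V-order comparison by explicit star deletions; it is quadratic-time, in contrast to the linear-time, constant-space comparison cited below, and it assumes that the native character order of the programming language realizes the order on Σ:

```python
def _star_delete(x):
    """One star deletion: remove x[h], where h starts the maximal
    nondecreasing suffix (h = 0 when all of x is nondecreasing)."""
    h = 0
    for i in range(len(x) - 1, 0, -1):
        if x[i - 1] > x[i]:
            h = i
            break
    return x[:h] + x[h + 1:]

def v_less(x, y):
    """x ≺ y in V-order, computed literally from the star-tree definition."""
    if x == y:
        return False
    a, b = x, y
    while len(a) > len(b):          # walk the longer string's star path
        a = _star_delete(a)
    while len(b) > len(a):
        b = _star_delete(b)
    if a == b:                      # path case: the reduced string is smaller
        return len(x) < len(y)
    while _star_delete(a) != _star_delete(b):   # find the smallest s, t
        a, b = _star_delete(a), _star_delete(b)
    j = max(i for i in range(len(a)) if a[i] != b[i])   # greatest difference
    return a[j] < b[j]

assert v_less("929", "922911")   # the star-tree path example above
assert v_less("772", "7547223")  # proper subsequences are smaller (see below)
```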
We now describe a canonical form of a string known as V-form which partitions a string according to its largest letter.
The V-form of any given string x is
V_k(x) = x = x_0gx_1g⋯x_k-1gx_k,
where g is the largest letter in x — thus we suppose that g occurs exactly k times.
Note that any x_i may be the empty string ε. We write ℒ_x=g,
𝒞_x=k.
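The V-form factors are straightforward to compute; a minimal Python sketch (ours), shown on the string 7547223 that reappears in an example below (the leading factor x_0 may be empty):

```python
def v_form(x):
    """Split x at its largest letter g: return x_0 followed by one
    factor per occurrence of g (each such factor starts with g)."""
    g = max(x)
    parts = x.split(g)
    return [parts[0]] + [g + p for p in parts[1:]]

assert v_form("7547223") == ["", "754", "7223"]   # g = 7, occurring twice
```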
<cit.> Suppose we are given distinct strings
x and y with corresponding V-forms
x = x_0 ℒ_xx_1 ℒ_xx_2 ⋯x_j-1ℒ_xx_j,
y = y_0 ℒ_yy_1 ℒ_yy_2 ⋯y_k-1ℒ_yy_k,
where j = 𝒞_x, k = 𝒞_y.
Let h ∈ 0 .. max(j,k) be the least integer such that x_h ≠ y_h. Then x≺y if and only if one of the
following conditions holds:
(C1) ℒ_x < ℒ_y;
(C2) ℒ_x = ℒ_y and 𝒞_x < 𝒞_y;
(C3) ℒ_x = ℒ_y, 𝒞_x = 𝒞_y and x_h ≺ y_h.
<cit.> For given strings x and y,
if y is a proper subsequence of x,
then y≺x.
For instance, given x=7547223,
Lemma <ref> states that 772 ≺ 7547223.
Furthermore, another consequence of Lemma <ref>
is that shorter suffixes are suffixes of longer suffixes; that is, they occur from the shortest to the longest in increasing order. So for x,
the V-order of the suffixes is
just 3 ≺ 23 ≺ 223 ≺ 7223 ≺ 47223 ≺ 547223 ≺ 7547223.
Thus in V-order suffix sorting is trivial,
in contrast to lexorder, where over the years numerous non-trivial (though linear) algorithms have been proposed <cit.>.
Furthermore, string comparison is fast in V-order:
<cit.> V-order comparison of given strings x and y requires linear time and constant space.
We now introduce the V-order equivalent of the lexorder Lyndon word:
A string x over an ordered alphabet Σ is a V-word if it is the unique minimum in V-order ≺
over the conjugacy class of
x.
Thus, like a Lyndon word, a V-word is necessarily primitive.
[≺]
We can apply Definition <ref>, equivalently the methodology of Lemma <ref>, to conclude that
6263 ≺ 6362 ≺ 2636 ≺ 3626,
so that 6263 is a V-word,
while on the other hand 2636 is a Lyndon word.
Similarly,
62626263 and 929493 are V-words,
while conjugates 26262636 and 294939 are Lyndon words.
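Definition <ref> combines with the comparison sketch above into a brute-force V-word test (ours, purely illustrative):

```python
def is_v_word(x):
    """x is a V-word iff x is primitive (all rotations distinct) and
    V-order-smaller than each of its other rotations (reuses v_less)."""
    rotations = {x[i:] + x[:i] for i in range(len(x))}
    return len(rotations) == len(x) and all(
        v_less(x, r) for r in rotations if r != x)

assert is_v_word("6263") and not is_v_word("2636")  # 2636 is Lyndon instead
```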
We now define another important ordering:
Suppose u and v are V-words on an ordered alphabet Σ.
If uv is also a V-word, then we write u <_𝒱v;
if not, then
u≥_𝒱v.
Thus, corresponding to the Lyndon factorization into Lyndon words
using ≥ (Theorem <ref>),
we arrive at a V-order factorization expressed
in terms of V-word order ≥_𝒱:
<cit.>
[V-order Factorization]
Using only linear time and space (see Algorithm VF in <cit.>[
VF is an on-line algorithm as it outputs the V-word factors in order from left to right without any backtracking. However, this fact was not explicitly stated in the reference.]),
a string x can be factored uniquely, using V-word order, into V-words
x = x_1x_2⋯x_m,
where x_1≥_𝒱x_2≥_𝒱⋯≥_𝒱x_m.
For x = 33132421, the Lyndon decomposition (computed using lexorder) is 3 ≥ 3 ≥ 13242 ≥ 1,
while the V-order factorization identifies nonextendible V-words 33132 and 421 with
33132 ≥_𝒱 421.
(Note however that 33132 ≺ 421!
See <cit.> for more background on this phenomenon.)
Similarly, from Example <ref>,
the string
x = uvw = (6263)(62626263)(929493)
has the unique V-order factorization u≥_𝒱v≥_𝒱w,
even though u≺v≺w.
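The contrast between V-word order ≥_𝒱 and plain V-order ≺ in these examples can be checked mechanically with the sketches above:

```python
u, v = "33132", "421"
assert is_v_word(u) and is_v_word(v)  # the two nonextendible factors
assert not is_v_word(u + v)           # uv is not a V-word, so u >=_V v
assert v_less(u, v)                   # and yet u ≺ v in plain V-order
```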
It will also be useful to order strings x, y
based on a lexicographic approach to their
factorizations into identified substrings; this will be applied in Section <ref> to handle string factorization based not on letters but substrings.
We call this ordering, denoted
≺_LEX(F), lex-extension order, expressed here with respect to substring
ordering using ≺ — but note that other substring ordering methodologies could instead be applied.
Suppose that, according to some factorization F, two strings x, y∈Σ^+ are expressed in terms of nonempty factors:
x = x_1x_2⋯x_m, y = y_1y_2⋯y_n.
Then x≺_LEX(F)y if and only if one of the following holds:
(1) x is a proper prefix of y (that is, x_i = y_i for 1 ≤ i ≤ m < n); or
(2)
for some least i ∈ 1..min(m,n), x_j = y_j for j = 1,2,…,i-1, and x_i≺y_i.
We assume throughout that, when using lex-extension order, the factorization F is given in V-form (Definition
<ref>). That is, if the V-form of x is
x = x'_0ℒ_xx'_1ℒ_xx'_2⋯x'_k-1ℒ_xx'_k,
then the corresponding
lex-extension order factors are:
x_1 = x'_0, x_2 = ℒ_xx'_1, ⋯, x_k+1 = ℒ_xx'_k.
Depending on the context, it could be that x'_0 = ε, in which case x_1 = ℒ_xx'_1, x_2 = ℒ_xx'_2, and so on.
Thus, in order to V-order two strings with identical values,
we first compute the
V-form factorization of each string, then treat each of the resulting factors as a single entry (a “letter”), and so determine the “lexicographic” order of the given strings by comparing the factors from left to right using ≺ (Lemma <ref> (C3) defines comparison of
strings that are conjugates).
§ UMFF AND CIRC-UMFF THEORY
Motivated by classical Lyndon words, investigations into combinatorial aspects of the factoring and concatenation of strings led to the concepts of UMFF and circ-UMFF <cit.>,
whose properties we overview here and apply in Sections <ref>, <ref> and <ref>.
For given x = x[1..n] ∈Σ^+, if x = w_1w_2⋯ w_k, 1 ≤ k ≤ n,
then w_1w_2⋯ w_k
is said to be a factorization of x;
moreover, if every factor w_j, 1 ≤ j ≤ k, belongs to a specified
set 𝒲,
then w_1w_2⋯ w_k
is said to be a factorization
of x over 𝒲, denoted by F_𝒲(x).
A subset 𝒲⊆Σ^+ is a
factorization family (FF) of Σ if
for every nonempty string
x on Σ there exists a factorization F_𝒲(x). If for every j = 1,2,...,k, every factor w_j is of maximum length, then the factorization F_𝒲(x) is unique and said to be maximal.
To show that not every factorization is necessarily maximal, consider the FF
𝒲 = {a,b,c,d,ab,cd,bcd}.
Then, for x = abcd, we get three possible factorizations, (a)(b)(c)(d), (ab)(cd), and (a)(bcd), depending on whether we process x in a forward or backward direction. But therefore, in each case, not every factor can be of maximum length and so this FF is not maximal.
Observe that every FF must contain every element of Σ; moreover,
any subset of Σ^+ containing every element of Σ is necessarily an FF.
Let 𝒲 be an FF on an alphabet Σ.
Then 𝒲 is a unique maximal factorization family (UMFF)
if and only if there exists a
maximal factorization F_𝒲(x)
for every string x∈Σ^+.
The following characterization of UMFFs shows that there can be no overlapping factors in a unique maximal factorization of a string:
(The 𝐱𝐲𝐳 Lemma <cit.>)
An FF 𝒲 is an UMFF if and only if
whenever xy,yz∈𝒲 for some nonempty y,
then xyz∈𝒲.
Note that, although the Fibonacci words b,a,ab,aba,abaab,abaababa⋯ are clearly an example of an FF, nevertheless they do not by Lemma <ref> constitute an UMFF: x = abaab, y = aba, z = ab are all
Fibonacci words,
as are xy and yz, but xyz is not.
We next show that an FF that contains no overlapping factors — such as the string y in Lemma <ref> — necessarily forms an UMFF.
Let 𝒲 be an FF on Σ. If for every distinct u,v∈𝒲 with |u|, |v|>1, uv is border-free,
then 𝒲 is an UMFF.
By definition of FF, 𝒲 must contain all the letters in Σ, which clearly do not overlap.
Consider factoring some string x = x_1 x_2 … x_n, n>1, maximally over 𝒲, so that x = f_1f_2⋯f_m, where
no factor f_i, 1 ≤ i ≤ m, can be extended either left or right. Specifically,
if f_i = x_p … x_q, 1 ≤ p ≤ q ≤ n, then x_p-j… x_q+k∉𝒲 for any positive j < p, k < n+1-q.
The non-extendability of all factors ensures maximality and thus uniqueness of the factorization f_1f_2⋯f_m, and so we conclude that 𝒲 forms an UMFF.
For example, with Σ = {0,1}, Lemma <ref> tells us that 𝒲={0,1,010} must be an UMFF. On the other hand, the example FF
𝒲 = {0,1,010,01010,0101010, …} from <cit.>, with an infinity of bordered uv, is also an UMFF, showing that the converse of Lemma <ref> certainly does not hold.
Indeed, observe from the following example that neither primitiveness nor the border-free property guarantees that a set of words forms an UMFF:
Suppose that 𝒲 is an UMFF over Σ = {a,b,c} such that
{a,b,c, ab, abc, cab }⊆𝒲. Then consider applying the xyz Lemma <ref> twice to words in 𝒲 as follows:
(i) For x = z = ab and y = c, we find the bordered word xyz = abcab ∈𝒲.
(ii) For x = abc, z = c and y = ab, we find the repetition xyz =(abc)^2 is also in 𝒲.
Interestingly, known UMFFs in the literature, such as Lyndon words and V-words, are specified as being necessarily primitive and they also satisfy the border-free property.
An important class of UMFFs can now be specified:
An UMFF 𝒲 over Σ^+ is a circ-UMFF if and only if
it contains exactly one rotation of every primitive string x∈Σ^+.
Observe that the definition of UMFF does not require that Σ be ordered. Nor does the circ-UMFF 𝒲, but it does require that the ordering of any two distinct strings in 𝒲 depends only on their concatenation (that may or may not occur).
Thus a circ-UMFF 𝒲 may be said to specifiy a “concatenation order”:
(<cit.>)
If a circ-UMFF 𝒲 contains strings u,v and uv, we write u <_𝒲v (called 𝒲-order).
Observe that V-word order (Definition <ref>), also defined in terms of concatenation, is formally equivalent to
𝒲-order.
Structural properties of circ-UMFFs are summarized as follows:
(<cit.>) Let 𝒲 be a circ-UMFF.
(1) If u∈𝒲 then u is
border-free.
(2) If u,v∈𝒲 and u≠v then uv is primitive.
(3) If u,v∈𝒲 and u≠v then uv∈𝒲 or vu∈𝒲 (but not both).
(4) If u,v, uv∈𝒲
then u <_𝒲v,
and <_𝒲 is a
total order on 𝒲.
(5) If w∈𝒲 and |w| ≥ 2 then there
exist u,v∈𝒲 with w=uv.
The first known circ-UMFF is believed to be the set of Lyndon words,
whose specific 𝒲-order is lexorder; that is, the usual ordering of the strings of Σ^+ is used to obtain Lyndon words.
Formally, the Lyndon circ-UMFF applies the same lexicographic ordering as both Σ^+ and 𝒲-order:
(<cit.>) Let ℒ be the set of Lyndon words, and suppose u, v∈ℒ. Then uv∈ℒ if and only if u precedes v in lexorder.
Note that, from Definition <ref>, V-order factorization determines an UMFF, which, by Definitions <ref> and <ref>, is a circ-UMFF.
§ V-WORDS, LYNDON WORDS AND CIRC-UMFFS
In this section we investigate further the relationship and differences between Lyndon and V-words and introduce generalized words over any total order.
We begin with an observation made
in <cit.>, that follows immediately from Duval's fundamental Theorem <ref> <cit.>:
Let Σ^*_lex denote the lexicographic total ordering of Σ^*. Then the lexordered set ℒ of Lyndon words is a
suborder of Σ^*_lex.
However, observe that there is no corresponding architecture for V-words. In V-ordered Σ^*, for x = 21, y = 31, we have x≺y by Lemma <ref> (C1), while in the class of V-words we have
x≥_𝒱y by Definition <ref> of V-word order. For further details on the distinction between ≺ and ≥_𝒱 see Lemma 3.16 in <cit.>.
Lyndon words and V-words are generally distinct <cit.>. For instance, the integer string 1236465123111 factors into Lyndon words (1236465)(123)(1)(1)(1) and into V-words (1)(2)(3)(6465123111) — no correspondence whatsoever. Nevertheless,
when substrings are restricted to a single letter, a rather remarkable result holds, which is a newly observed special case of Theorem 4.1 in <cit.> and leads to the concept of V-Lyndons:
[V-Lyndons]
Suppose x has a V-form x = ℒ_xx_1 ℒ_xx_2 ⋯x_j-1ℒ_xx_j,
where x_0 = ε and
|x_l| = 1 for 1 ≤ l ≤ j.
Let
x' = x_1 x_2 ⋯x_j-1x_j. Then
x is a V-word if and only if x' is a Lyndon word.
To see that the requirement
|x_l|=1 is necessary,
consider x = 321312 with
ℒ_x = 3,
|x_1| = |x_2|= 2.
Certainly
x' = x_1x_2 = 2112 is
not a Lyndon word, but since x≺ 312321,
x is a V-word. Thus Lemma <ref> does not generalize to V-form substrings with
|x_l| > 1.
Nonetheless, there does exist a kind of reciprocity between infinite classes of Lyndon words and V-words:
For any Lyndon word x[1 .. n], n ≥ 2, on ordered alphabet Σ:
(1) If ℒ_x is the largest letter in x, then (ℒ')^kx is a V-word for any ℒ' > ℒ_x and every integer k > 0.
(2) If ℓ_x is the smallest letter in x, then (ℓ_x)^kx is a Lyndon word for every integer k > 0.
Building on Lemma <ref> and Observation <ref>(1), we can show that there are infinitely more V-words
than there are Lyndon words:
Suppose that ℓ[1 .. n] is a Lyndon word over an ordered alphabet Σ and further that there exists ℒ_ℓ∈Σ such that ℒ_ℓ > ℓ[i] for every i ∈ 1 .. n.
Then we can construct infinitely many V-words from ℓ over Σ.
For the first V-word v_1, applying
Lemma <ref>,
we rewrite ℓ as v_1[1 .. 2n] where for i ∈ 1 .. 2n, if i is odd, v_1[i] = ℒ_ℓ, while if i is even, v_1[i] = ℓ[i/2]; that is, v_1 = ℒ_ℓℓ[1] ℒ_ℓℓ[2] ⋯ℒ_ℓℓ[n].
For V-words v_h, h > 1, rewrite ℓ as v_h = ℒ_ℓℓ[1]^h ℒ_ℓℓ[2]^h ⋯ℒ_ℓℓ[n]^h. Lemma <ref> (C1) shows that if a ≺ b for letters a,b (that is, a<b in Σ), then a^h ≺ b^h and hence the Lyndon property (Lemma <ref>) of ℓ is preserved for v_h using Definition <ref> for lex-extension order of strings.
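The construction is easily mechanized; in the following sketch (ours, reusing is_v_word from above) we take Σ = {a,b}, the Lyndon word ℓ = ab and ℒ' = c:

```python
def v_word_from_lyndon(lyndon, big, h=1):
    """Interleave a letter big, greater than every letter of the given
    Lyndon word, before each of its letters repeated h times."""
    return "".join(big + c * h for c in lyndon)

assert v_word_from_lyndon("ab", "c") == "cacb" and is_v_word("cacb")
assert is_v_word(v_word_from_lyndon("ab", "c", h=2))   # caacbb
```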
It might then be natural to suppose that V-words exhibit the same structural properties as Lyndon words and support equivalent string operations. For instance, a defining property of Lyndon words is that they are strictly less in lexorder than any of their proper suffixes; that is, for a Lyndon word ℓ = p_ℓs_ℓ, with p_ℓ,s_ℓ≠ϵ, we have
ℓ < s_ℓ < s_ℓp_ℓ.
This central Lyndon property relates to two important operations on strings: ordering and concatenation. For Lyndon words, these operations
are consistent with respect to lexorder: that is, for every proper suffix s_ℓ of ℓ, by virtue of the ordering ℓ < s_ℓ,
we can construct a Lyndon word ℓs_ℓ^h by concatenation for every h ≥ 1.
In contrast, for V-order, these operations are not necessarily consistent. First, by Lemma <ref>, a proper suffix u of a string x
is less than x in V-order; thus,
for example, given the V-word
v = 43214123, even though substrings 23 ≺v and 4123 ≺v, on the other hand, by definition of a V-word, v≺ 41234321.
So for a V-word v = p_vs_v, with p_v,s_v≠ϵ,
we have
s_v≺v≺s_vp_v.
Nevertheless, like a Lyndon word, a V-word can be concatenated with any of its proper suffixes (although they are less in V-order) to form a larger V-word (Lemma 3.21 in <cit.>).
Hence we are interested in those combinatorial properties related to operations like concatenation and indexing in conjugacy classes which hold both for Lyndon words and V-words. Examples include border-freeness, existence of uv and vu in the conjugacy class where u and v are Lyndon words, and the FM-index Last First mapping property <cit.>.
More generally, it is intriguing to explore similarities and differences between instances of circ-UMFFs, as discussed below.
We begin by introducing a general form of order, 𝒯:
(𝒯-order)
Let 𝒯 be any total ordering of Σ^* with order relation ≪ so that given distinct strings x, y they can be ordered deterministically with the relation ≪: either x≪y or y≪x.
So for Lyndon words (V-words) the ordering 𝒯 is lexorder (V-order) and the corresponding order relation ≪ is < (≺). Using the general order 𝒯, we can extend Definitions <ref>/<ref> from Lyndon words/V-words to 𝒯_≪-words:
(𝒯_≪-word)
A string x over an ordered alphabet Σ is said to be a 𝒯_≪-word if it is the unique minimum in 𝒯-order ≪
in the conjugacy class of x.
Similarly, Definition <ref> (lex-extension order) can be generalized by replacing the order ≺ by 𝒯-order ≪:
For a factorization F, let x, y∈Σ^+ be two strings expressed in terms of nonempty factors:
x = x_1x_2⋯x_m, y = y_1y_2⋯y_n.
Then x≪_LEX(F)y if and only if:
(1) x is a proper prefix of y or
(2)
for some least i ∈ 1..min(m,n), x_j = y_j for j = 1,2,…,i-1, and x_i≪y_i.
Applications of circ-UMFFs in the literature arise in linear-time variants of the Burrows-Wheeler transform: the V-order based transform V-BWT <cit.>; the binary Rouen transform B-BWT derived from binary block order which generated twin transforms <cit.>; the degenerate transform D-BWT for indeterminate strings implemented with lex-extension order <cit.> which supports backward search <cit.>. These instances stimulate the quest for new circ-UMFFs and we pave the way for this by next introducing a generalization of circ-UMFFs.
§ SUBSTRING CIRC-UMFF: GENERALIZATION OF CIRC-UMFF
Definitions <ref> and <ref> encourage considering conjugacy classes for substrings
rather than individual letters;
that is, each
conjugate is defined by a rotation with prefix ℒ_xx_i,
as follows:
Suppose that a string x = x[1..n]
over an ordered alphabet Σ, with maximal letter ℒ_x,
is expressed
in V-form as
ℒ_xx_1 ℒ_xx_2 ⋯x_j-1ℒ_xx_j,
with x_0 = ε.
Then for every
1 ≤ t ≤ j,
y = ℛ_t(x) = ℒ_xx_t+1ℒ_xx_t+2⋯x_j-1ℒ_xx_j(ℒ_xx_1 ⋯ℒ_xx_t)
is the t^th substring conjugate (or substring rotation) of x
(so ℛ_j(x) = x).
Since Lemma <ref> holds for unrestricted strings xy and yz (given y is nonempty) then it certainly holds for specified types of substrings, so in the context of substring conjugates we get:
Lemma <ref> holds for substrings expressed in V-form.
Thus a natural generalization of circ-UMFF is the substring circ-UMFF, where a conjugate is selected from the conjugacy class of substrings of a string rather than the usual rotation of letters.
An UMFF 𝒲 over Σ^+ is a substring circ-UMFF if and only if
it contains exactly one substring rotation of every primitive string x∈Σ^+ expressed in V-form.
A string x =
ℒ_xx_1 ℒ_xx_2 ⋯x_j-1ℒ_xx_j in V-form
over an ordered alphabet Σ, with maximal letter ℒ_x, is said to be a
𝒯_lex-word
if it is the unique minimum in 𝒯_lex-order
in its conjugacy class.
To clarify, consider the primitive integer string 431412 where, with reference to V-form, ℒ_x = 4, and letting ≪ denote co-lexorder (lexorder of reversed strings), then the conjugate 412431 is least in co-lexorder for the letter-based conjugates, while the substring conjugate 431412 is least in Lex-Ext co-lexorder in the comparison of 431412 and 412431. That is, 412431 is a 𝒯_≪-word, while 431412 is a 𝒯_lex-word.
We have then the following important result:
Suppose ≪ is a 𝒯-order over Σ^*.
(i) The class of border-free
𝒯_≪-words forms a circ-UMFF 𝒯
over the conjugacy class of letters.
(ii) The class of border-free
𝒯_lex-words
forms a substring circ-UMFF
over the conjugacy class of substrings.
By Definition <ref>, ≪ is a total order.
Here we make use of two fundamental results: the xyz lemma (Lemma <ref> from <cit.>) and the circ-UMFF theorem (Theorem <ref> from <cit.>).
Part (i). Let 𝒯 denote the set of border-free 𝒯_≪-words (Definition <ref>) over Σ.
First, by the definition of 𝒯, every letter in Σ is in 𝒯, so we may confine our consideration to strings of non-unit length.
Suppose then that xy and yz, with x,y,z nonempty, are both border-free 𝒯_≪-words in 𝒯, therefore primitive.
Consider the string xyz and suppose that it is a repetition u^k, k>1.
But if u is a proper prefix of xy then xy is bordered, whereas if xy is a prefix of u then yz is bordered. Thus we conclude that xyz is also primitive, and so
must have at least one border-free conjugate —
for instance, the conjugate
that is a Lyndon word is border-free and therefore might be
in 𝒯. So we next
consider which border-free conjugate c_𝒯 of xyz is in 𝒯.
So suppose that xyz is not itself minimal in 𝒯-order in its conjugacy class.
Let x = x_1 x_2 … x_r, y = y_1 y_2 … y_s and z = z_1 z_2 … z_t, r,s,t ≥ 1. First assume that a conjugate c=x_c+1… x_r yz x_1 … x_c,
1 ≤ c ≤ r,
is minimal, thus border-free and in 𝒯. But then applying Lemma <ref> to c and xy implies that the bordered word x_c+1… x_r yzxy is in 𝒯, an impossibility. So assume that a conjugate c'=y_d+1… y_s zx y_1 … y_d,
1 ≤ d ≤ s is minimal and belongs to 𝒯. Again applying Lemma <ref> to yz and c'
implies that the bordered word
yzxy_1 … y_d is in 𝒯, again impossible. Finally, for
c”=z_e+1… z_t xy z_1 … z_e, 1 ≤ e ≤ t-1,
a similar argument for c” and yz implies that the bordered word z_e+1… z_t xyz is in 𝒯.
Hence the primitive conjugate xyz must itself be border-free and so must be the one, c_𝒯, in 𝒯,
that is least in 𝒯-order ≪ in its conjugacy class. Applying the sufficiency in Lemma <ref>, since xy, yz and xyz with nonempty y all belong to 𝒯, we can conclude that 𝒯 is
an UMFF.
Thus, from each conjugacy class of a primitive string, we have shown how to select a border-free word for 𝒯, therefore satisfying Definition <ref>.
Since moreover circ-UMFFs are necessarily border-free
(Theorem <ref> Part (1)), we can conclude that 𝒯 is a circ-UMFF.
Part (ii). The proof here is similar to that of Part (i), substituting Definition <ref> of 𝒯_lex-words for
Definition <ref> of 𝒯_≪-words, and applying Observation <ref> for substrings.
Here we let 𝒯^𝓈 denote the set of border-free 𝒯_lex-words over Σ^*.
Then by the definition of 𝒯^𝓈, the substrings of unit length (“letters") are in 𝒯^𝓈, where
these strings of length one have the form ℒw_i where w_i∈Σ^* (so if w_i is the empty string we get the
substring ℒ). Then a non-unit length string w of length ℓ will have ℓ occurrences of ℒ in the form _ww_1_ww_2⋯w_ℓ-1_ww_ℓ.
Suppose then that xy and yz, with x,y,z nonempty, are both border-free 𝒯_lex-words in 𝒯^𝓈, therefore primitive, and as above we deduce that xyz is also primitive.
In the analysis of 𝒯^𝓈, for the existence of at least one border-free substring conjugate,
we observe that the V-order
circ-UMFF can be considered in the form of Definition <ref>. Then there exists a border-free
V-word in the substring conjugacy class. As above, we suppose that xyz is not minimal in 𝒯_lex-order and proceed to
consider which border-free conjugate c_𝒯^𝓈 of primitive xyz is in 𝒯^𝓈. Let x = ℒ_xx_1 ℒ_xx_2 ⋯x_r-1ℒ_xx_r, y = ℒ_yy_1 ℒ_yy_2 ⋯y_s-1ℒ_yy_s, z = ℒ_zz_1 ℒ_zz_2 ⋯z_t-1ℒ_zz_t, r,s,t ≥ 1. First assume that a conjugate
c = ℒ_xx_c+1…ℒ_xx_ryzℒ_xx_1…ℒ_xx_c,
1 ≤ c ≤ r,
is minimal, thus border-free and in 𝒯^𝓈. But then applying
Observation <ref>
to
c and xy implies that the bordered word ℒ_xx_c+1…ℒ_xx_ryzxy is in 𝒯^𝓈, an impossibility.
The rest of the argument follows as in Part (i), first showing that xyz is the conjugate c_𝒯^𝓈 in 𝒯^𝓈, and finally establishing that 𝒯^𝓈 is a substring circ-UMFF.
Observe that Theorem <ref> shows that if a circ-UMFF or a substring circ-UMFF is defined using a total order (which is not necessary), then every element of the (substring) circ-UMFF is obtained using the same total order and no other ordering technique. Observe further that the proof does not depend on any particular method of totally ordering Σ^*; however, the method must be total for border-free (hence primitive) strings in Σ^*.
We illustrate concepts from Theorem <ref> with the following:
Consider the border-free integer string x = 3177412. Then the (unordered) conjugacy class of x is given by
3 1 7 7 4 1 2
2 3 1 7 7 4 1
1 2 3 1 7 7 4
4 1 2 3 1 7 7
7 4 1 2 3 1 7
7 7 4 1 2 3 1
1 7 7 4 1 2 3
The third conjugate, 1231774, is the Lyndon word as it is least in lexorder. The sixth conjugate, 7741231, is the V-word as it is least in V-order; it is also a co-lexorder word as it is least in co-lexorder; furthermore, it is least in relex order (reverse lexorder). The fifth conjugate, 7412317, is the second largest in lexorder but is a bordered word. And the seventh conjugate, 1774123, is least in alternating lexorder (indexing strings from 1, odd indexed letters are compared with < and even with >).
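These claims for the letter-based conjugates can be checked mechanically (our sketch, reusing v_less from the earlier V-order sketch; Python's character order is assumed to realize the order on the digits):

```python
from functools import cmp_to_key

w = "3177412"
rots = [w[i:] + w[:i] for i in range(len(w))]
assert min(rots) == "1231774"                         # the Lyndon conjugate
assert min(rots, key=lambda r: r[::-1]) == "7741231"  # co-lexorder least
v_key = cmp_to_key(lambda a, b: -1 if v_less(a, b) else 1)
assert min(rots, key=v_key) == "7741231"              # the V-word
```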
We illustrate Definition <ref> for Lex-Ext co-lexorder where we intertwine lexorder
with ordering substrings in co-lexorder:
This example establishes the Lex-Ext co-lexorder substring circ-UMFF. Given a string x∈Σ^* in V-form, x = x_0 ℒ_xx_1 ℒ_xx_2 ⋯x_j-1ℒ_xx_j,
with x_0 = ε,
the substring conjugates of x are compared using lex-extension where the x_i substrings will be compared in co-lexorder.
Since both lexorder and by isomorphism co-lexorder are total orders of Σ^*, any pair of distinct strings x, y∈Σ^* can be compared deterministically in Lex-Ext co-lexorder. So given a border-free, hence primitive, string x, we can uniquely choose a conjugate from the substring conjugacy class which is minimal in Lex-Ext co-lexorder (cf. Definition <ref>). Theorem
<ref> Part (ii) then applies where in this case the class of border-free
𝒯_lex-words is the set of Lex-Ext co-lexorder words.
For example, the integer string
9211912197194395119119111912 factors uniquely as (921191219719439511)(9119111912),
while into Lex-Ext lexorder words as (9211)(921951)(9119111912).
Clearly we can replace co-lexorder in this context with any method for totally ordering Σ^*, for instance relex order or
alternating lexicographic order. Note that, as shown for V-order <cit.>, co-lexorder string
concatenation and ordering are not necessarily the same — this phenomenon of circ-UMFFs is explored in <cit.>. For example, 321 is less than 54 in co-lexorder, while 54 is less than 321 in circ-UMFF order, because the concatenation 54321 is a co-lexorder word.
Naturally, we can modify the results of this section by defining other canonical forms of a string. For instance, rather than partitioning a string according to the maximal letter as in V-form, the partition could depend on the minimal letter, or indeed any well-defined substring pattern — such as a short palindromic motif in DNA sequences.
Implementing the FM-Index in V-order was considered in <cit.>, leading to V-order substring pattern matching using backward search — whereby computing only on the k conjugates starting with the greatest letter, essentially a substring circ-UMFF, reduced the BWT matrix to O(nk) space. Note, however, that to fully implement BWT-type pattern matching in V-order so as to handle all letters — rather than just V-letters which have only one maximal letter which occurs at the start of the substring —
remains an open problem <cit.>.
Hence, the substring circ-UMFF concept promises future optimization opportunities — in particular, related to indexing and pattern matching applications.
§ GALOIS WORDS
This final section applies many of the concepts
developed in Sections <ref>–<ref> to show that although — in contrast to circ-UMFF words and in particular to classic Lyndon words — a given string does not necessarily factor uniquely and maximally into Galois words, nevertheless Galois words may still belong to a set of words which does form a circ-UMFF.
Additionally, the continuing necessity of the border-free requirement, as in Theorem <ref>, is demonstrated. We first
describe Galois words as
introduced in <cit.>, based on a variant of lexorder, namely alternating lexicographic order, denoted ≺_alt — in which,
informally, positions within two given strings are compared in alternating < and > order. More precisely:
[Alternating lexorder ≺_alt (modified from <cit.>)]
Given distinct strings x = x_1 x_2 … x_s and y = y_1 … y_t,
1 ≤ s ≤ t:
(1) (x not a proper prefix of y) If i is the smallest index such that x_i ≠ y_i,
then x≺_alty iff
(a) i is odd and x_i < y_i or
(b) i is even and y_i < x_i.
Otherwise, y≺_altx.
(2) (x a proper prefix of y)
x≺_alty iff |x| is even.
Otherwise, y≺_altx.
An immediate consequence of Definition <ref>(2) is the following:
For any string x, if |x| is even then
(1) ∀y∈Σ^+, x≺_altxy; in particular,
(2) x^k ≺_altx^k+r, for k ≥ 1, r ≥ 1.
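Definition <ref> translates directly into code; a minimal Python sketch (ours), with asserts covering both clauses of the definition and the corollary above:

```python
def alt_less(x, y):
    """x ≺_alt y: compare positions left to right with < at odd and >
    at even positions (1-indexed); a proper prefix is smaller exactly
    when its length is even."""
    if x == y:
        return False
    for i, (a, b) in enumerate(zip(x, y)):
        if a != b:
            return a < b if i % 2 == 0 else b < a  # i is 0-indexed here
    if len(x) < len(y):            # x is a proper prefix of y
        return len(x) % 2 == 0
    return len(y) % 2 == 1         # y is a proper prefix of x

assert alt_less("ab", "aba")    # an even-length proper prefix is smaller
assert alt_less("ab", "a")      # an odd-length proper prefix is larger
assert alt_less("aba", "aab")   # position 2 is compared with >
```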
We first establish that alternating lexorder ≺_alt forms a total order over Σ^*, thus enabling the selection of a unique conjugate (such as the least) for Definition <ref> and as also required for Theorem
<ref>.
Alternating lexorder ≺_alt is a total order over Σ^*.
Consider distinct x, y, z∈Σ^+ with x = x_1x_2 … x_e, y = y_1y_2 … y_f, z = z_1z_2 … z_g.
Assume that x≺_alty≺_altz, but that z≺_altx.
Case 1. z is a proper prefix of x.
Since
z≺_altx, by Definition <ref>(2) |z| is even.
Subcase 1a. x is a proper prefix of y.
Thus z must be a proper prefix of y, and since |z| is even,
again by Definition <ref>(2)
z≺_alty, a contradiction.
Subcase 1b. x is not a proper prefix of y. Then there is a smallest index i such that x_i ≠ y_i. If i > |z|, then z is a proper prefix of y and the contradiction of Subcase 1a applies to z and y. So suppose i ≤ |z| and consider a corresponding relation R,
< or > for odd or even index i, respectively — then since z_i = x_i, so that x_i R y_i implies z_i R y_i, it follows that z≺_alty, a contradiction.
Case 2. z is not a proper prefix of x.
Subcase 2a. If x is a proper prefix of y, then z≺_alty, a contradiction.
Subcase 2b. If x is not a proper prefix of y, then there is a smallest index j such that x_j ≠ y_j and x_j R y_j (where R is as above based on odd/even index). Let i be the smallest index such that z_i ≠ x_i. If j<i, then z_j = x_j and x_j R y_j,
so that z≺_alty, a contradiction. If j>i, then
z_i R x_i = y_i
and z≺_alty, also a contradiction. If j=i, then
z_j R x_j R y_j and z≺_alty, again a contradiction.
We conclude that x≺_alty≺_altz
requires x≺_altz;
that is, the transitivity establishes that ≺_alt is a total order over Σ^+.
As usual, ε is the unique least string, and so ≺_alt forms a total order over Σ^*.
Since ≺_alt is a total order, it is a candidate for constructing a 𝒯_≪-word (Definition <ref>) for a new circ-UMFF, making use of Theorem <ref>.
We therefore consider a concept analogous to Lyndon and V-words based on alternating lexorder:
<cit.>
A Galois word is a nonempty primitive string that is minimum in alternating lexorder over its conjugacy class.
We observe that every element of Σ is a Galois word. Thus the set of Galois words forms an FF — see Section <ref>.
Since the rotations of primitive strings are distinct, therefore the Galois word of each conjugacy class is uniquely defined.
Examples of Galois words are: ab, aba, abb, abba, ababa, ababaa, ababba. Observe that these words are not necessarily border-free and can even be palindromic. Furthermore, unlike Lyndon words and V-words, Galois words don't exhibit the ordered shuffle property <cit.> where interleaving characters/V-letters generates a string in the given circ-UMFF; for example, interleaving the Galois words aba and abb, where aba ≺_alt abb, yields aabbab, which is not a Galois word — instead the conjugate abaabb is Galois. If we try to apply Lemma <ref> to the Galois words xy = ababa and yz = ab with y = a, we get xyz = ababab, a repetition, while Galois words are by definition primitive. Hence
Galois words do not form an UMFF, nor therefore a circ-UMFF.
Since Galois words can be bordered, this observation is consistent with Theorem <ref>, which allows a circ-UMFF only for classes of border-free words.
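For completeness, a brute-force Galois test in the style of the earlier V-word sketch (ours, reusing alt_less), confirming the examples above together with the ≺_alt-least conjugate 1774123 of 3177412 noted earlier:

```python
def is_galois(x):
    """x is a Galois word iff x is primitive and ≺_alt-minimal among
    its rotations."""
    rotations = {x[i:] + x[:i] for i in range(len(x))}
    return len(rotations) == len(x) and all(
        alt_less(x, r) for r in rotations if r != x)

assert all(is_galois(w) for w in ["ab", "aba", "abb", "abba", "ababa"])
assert not is_galois("ababab")                          # a repetition
assert not is_galois("aabbab") and is_galois("abaabb")  # shuffle example
assert is_galois("1774123")    # the ≺_alt-least conjugate of 3177412
```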
A related
observation in <cit.> shows that, although Theorem <ref> implies that the unique maximal Lyndon factorization
of a word has a least number of nonincreasing Lyndon factors, this is not necessarily the case for Galois words:
[Example 44 in <cit.>]
Let w be the repetition ababab, hence not a Galois word. Then w = (ab)(ab)(ab) is a nonincreasing factorization into Galois words. However, w also admits a shorter (and increasing) factorization into Galois words, namely w = (ababa)(b).
In this example, for w = (ababa)(b) we have ababa ≺_alt b, although, since ababab ∉𝒲, ababa is not less than b in 𝒲-order
(Definition <ref>) — that is, ababab is not a Galois word.
Note also that w = (ab)(ab)(ab) is the unique Lyndon factorization of w.
Still, Galois words have applications. The Alternating Burrows-Wheeler Transform (ABWT) is analogous to the BWT and applies alternating lexorder <cit.>. An algorithmic perspective on the ABWT is given in <cit.>, where a linear time and space algorithm determines for each primitive string its unique cyclic rotation that is a Galois
word, which in turn supports computing the ABWT in linear time. Furthermore, practical uses of the ABWT in settings such as data compression and
compressed data structures are demonstrated.
Clearly, like lexorder string comparison, alternating lexorder comparison of two strings can be achieved in linear time and space. It follows that efficient suffix array construction methods based on lexorder, such as <cit.>, can be readily modified to apply alternating lexorder. In the alternating lexorder suffix array of a Galois word with border b, given that a border of a Galois word must have odd length
<cit.>, then by Definition <ref>(2), a suffix starting with the prefix b will precede
the suffix b.
We now go on to explore new combinatorial properties of Galois words, starting with the Galois equivalent of the fundamental Lyndon result, Theorem <ref> — albeit with the caveat of possible repetitions, previously shown by the example ababa ≺_alt b. We let 𝒢 denote the set of Galois words on a given alphabet.
(<cit.>)
A Galois word w∈𝒢 is smaller than any of its proper suffixes with respect to ≺_alt order.
Let u,v∈𝒢, where u is not a proper prefix of v. Then, u≺_altv if and only if
(1) u≺_altvx, ∀x∈Σ^+;
(2) ux≺_altv, ∀x∈Σ^+;
(3) ux≺_altvx, ∀x∈Σ^+;
(4) u^k+1≺_altu^kv, for k ≥ 0.
NECESSITY. Suppose u≺_altv. Since u is not a proper prefix of v, therefore ∀x∈Σ^*, u is not a proper prefix of vx, and ux is not a proper prefix of v and vx — and u^k+1 is not a prefix of u^kv. Moreover, since u≺_altv and u is not a proper prefix of v, there is a smallest index i such that u_i ≠ v_i. Consider a corresponding relation R (< or > for odd or even index i, respectively) — then since u_i ≠ v_i, it follows that u_i R v_i implies (u)_i R (vx)_i, (ux)_i R (v)_i, (ux)_i R (vx)_i and (u^k+1)_i R (u^kv)_i. Therefore, by Definition <ref>(1) cases (1)–(4) hold.
SUFFICIENCY. Suppose cases (1)–(4) hold. Since u is not a proper prefix of v, by the same reasoning as above, we get (u)_i R (vx)_i, (ux)_i R (v)_i, (ux)_i R (vx)_i and (u^k+1)_i R (u^kv)_i which implies that u_i R v_i. Therefore, by Definition <ref>(1), u≺_altv.
Let u,v∈𝒢. If u≺_altv and u is a proper prefix of v, then
(1) ∀x∈Σ^+, u≺_altvx;
(2) ∀ k ≥ 0, u^k+1≺_altu^kv.
Suppose u≺_altv and u is a proper prefix of v. By Definition <ref>(2), |u| must be even. We write v = u^rv', such that r≥ 1 and v' is the largest suffix of v that does not contain u as its prefix. Then, u^kv = u^k+rv'. Clearly, u is a prefix of vx and u^kv. By hypothesis |u| is even. Hence, by Definition <ref>(2), cases (1) and (2) hold.
Suppose u, v∈
𝒢. If uv is primitive, then u≺_altv if and only if uv∈𝒢.
NECESSITY. Suppose that u≺_altv. Since uv is primitive, there exists a rotation of uv that is least in alternating lexorder (≺_alt). Let u = u_pu_s and v=v_pv_s, where u_p, u_s, v_p, v_s ≠ε. Then, we need to show
(1) uv≺_altu_svu_p;
(2) uv≺_altvu;
(3) uv≺_altv_suv_p.
(A) Suppose u is not a proper prefix of v. By Lemma <ref>, we have u≺_altu_s, and by Lemma <ref>(3) and (1), we have uv≺_altu_sv and uv≺_altu_svu_p, respectively. Therefore case (1) holds. From the hypothesis and by Lemma <ref> we have u≺_altv_s. By Lemma <ref>(2) and (1), we have uv≺_altv_s and uv≺_altv_suv_p, respectively, and so case (3) holds. Finally, from the hypothesis we have u≺_altv. By the application of Lemma <ref>(1) and (2), we have u≺_altvu and uv≺_altvu, respectively. Thus case (2) holds.
(B) Suppose u≺_altv, and u is a proper prefix of v.
By Lemma <ref>, we have u≺_altu_s. By Lemma <ref>(1) and (2) we get u≺_altu_svu_p and uv≺_altu_svu_p, respectively. Thus case (1) holds.
By hypothesis and Lemma <ref>, we get u≺_altv_s, and by Lemma <ref>(1) and (2) we get u≺_altv_suv_p and uv≺_altv_suv_p, respectively. Hence case (3) holds.
We write v = u^rv', such that r≥ 1 and v' is the largest suffix of v that does not contain u as its prefix. Since uv and vu are of the same length, neither can be a proper prefix of the other. By the hypothesis and Lemma <ref> u≺_altv' and u is not a proper prefix of v'. Then uv = u^r+1v' and vu = u^rv'u. By Lemma <ref>(4), we get u^r+1≺_altu^rv'. By further application of Lemma <ref>(1) and (2), we get u^r+1≺_altu^rv'u and u^r+1v' ≺_altu^rv'u, respectively. Thus uv≺_altvu and case (2) holds.
SUFFICIENCY. Suppose that primitive uv∈𝒢. Then u≠v, and uv is strictly least in its conjugacy class in ≺_alt order. Suppose v≺_altu. Then, by the proof of Necessity, we find that vu is a Galois word — a contradiction. Since u≠v and v⊀_altu, Theorem <ref> implies that u≺_altv.
Thus the theorem is proved.
We formalize the notion of concatenation order 𝒲 (Definition <ref>) “aligning” with
𝒯-order
(Definition <ref>):
Given a circ-UMFF 𝒲, a concatenation order <_𝒲
and a 𝒯-order
with order relation ≪,
we say that 𝒲 is 𝒯-order aligned
if, whenever u <_𝒲v for strings u, v∈Σ^*, also u≪v.
The classic example is the lexorder alignment of Lyndon words, as expressed in Theorem <ref>. On the other hand, this alignment doesn't hold for co-Lyndon words (the co-lexorder analog of Lyndon words where co-lexorder is lexorder of reversed strings):
ba is less than ca in co-lexorder, while baca is not a co-Lyndon word — instead, it is
the conjugate caba. A further example of non-alignment arises with V-order — see Lemma 3.16 in <cit.>. We have seen that Galois words do not form a circ-UMFF and, furthermore, Theorem <ref> shows that alternating lexorder ≺_alt is not in general aligned with Galois concatenation.
We now show that the set of border-free Galois words over any alphabet, denoted 𝒢^bf, yields a unique maximal factorization:
𝒢^bf forms an UMFF.
We will show that
𝒢^bf satisfies the 𝐱𝐲𝐳 Lemma <ref>.
Thus we suppose that xy,yz∈𝒢^bf for some nonempty y; it is then required to show that xyz∈𝒢^bf. We may assume also that both x and z are nonempty, for otherwise by Lemma <ref> the claim holds trivially.
We start by showing that xyz is least in ≺_alt order amongst its conjugates.
Write xyz = x_1 … x_r y_1 … y_s z_1 … z_t, r,s,t ≥ 1, and consider ordering the conjugates of xyz in ≺_alt order (Definition <ref>). Let 𝒞^≺_alt denote the conjugacy class of xyz, where the conjugates are ordered according to ≺_alt.
Since a Galois word is less than any of its proper suffixes in ≺_alt order
<cit.>, this property therefore holds for xy and yz. For the ordering of 𝒞^≺_alt, consider a conjugate c of xyz starting with a nonempty suffix s of x_2 … x_r y_1 … y_s. Since xy is assumed to be border-free, xy[1..|s|] ≠s and thus the comparison between xyz and c is determined based on Definition <ref>(1). Therefore
the conjugate of xyz starting with prefix xy, namely xyz, comes before any conjugate in 𝒞^≺_alt starting with a suffix of x_2 … x_r y_1 … y_s
and in particular starting with y.
That is, if xy= ps for nonempty p, s, then xy≺_alts implies that xyz≺_alts≺_altszp.
Likewise, the conjugate of xyz starting with prefix yz, namely yzx, comes before any conjugate in 𝒞^≺_alt starting with a suffix of y_2 … y_s z_1 … z_t, and as before we can assume this is determined using Definition <ref>(1).
We conclude that xyz precedes all its conjugates in 𝒞^≺_alt, as required.
Suppose now that xyz is bordered with border b so that xyz = bub for u∈Σ^*. Then b is both a prefix of xy and a suffix of yz. Since distinct xy, yz∈𝒢^bf, both are border-free, and so xy cannot have suffix b nor can yz have prefix b. We can then write xyz= bub = b_1b_2 … b_r u_1u_2 … u_s b_1b_2 … b_r, r,s ≥ 1. If we suppose that |xy| > r+s, then xy is bordered; similarly, if |yz| > s+r, then yz is bordered.
So y = u_i … u_j, 1 ≤ i, j ≤ s.
The smallest and first conjugate in 𝒞^≺_alt is xyz = x u_i … u_j z = b u_1 … u_s b, and since the ordering of 𝒞^≺_alt applies
(Definition <ref>(1)), common prefixes occur concurrently in 𝒞^≺_alt; in particular, if there are n_b occurrences of b in xyz, they will occur as prefixes of the first n_b conjugates in 𝒞^≺_alt.
One of the conjugates in 𝒞^≺_alt is yzx, where yzx = u_i … u_s bx and xyz≺_altyzx. However, since b is a prefix of the first conjugate xyz, this contradicts that yz = u_i … u_s b is less than its proper suffix b in ≺_alt order. We conclude that xyz is border-free, and therefore primitive, which completes the proof.
We illustrate Lemma <ref>:
Let xy, yz∈𝒢^bf, where xy= abababbb and yz= ababbbbb and y= ababbb then, applying Lemma <ref>, xyz = abababbbbb with xyz∈𝒢^bf.
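To make these notions concrete, the following Python sketch — ours, not part of the original text — tests alternating-lexorder minimality among rotations together with border-freeness, and verifies the example above (alt_less compares equal-length distinct strings, which suffices for conjugates):

```python
def alt_less(u, v):
    """Alternating lexorder on equal-length strings: at the first
    differing position i (1-indexed), u precedes v iff u[i] < v[i]
    for odd i and u[i] > v[i] for even i."""
    for i, (a, b) in enumerate(zip(u, v), start=1):
        if a != b:
            return a < b if i % 2 == 1 else a > b
    return False  # equal strings

def is_primitive(w):
    return all(w != w[d:] + w[:d] for d in range(1, len(w)))

def is_galois(w):
    """Primitive and alt-least among all nontrivial rotations."""
    return is_primitive(w) and all(
        alt_less(w, w[d:] + w[:d]) for d in range(1, len(w)))

def is_border_free(w):
    return all(w[:k] != w[-k:] for k in range(1, len(w)))

xy, yz, y = "abababbb", "ababbbbb", "ababbb"
x, z = xy[:len(xy) - len(y)], yz[len(y):]
assert x + y + z == "abababbbbb"
assert all(is_galois(s) and is_border_free(s) for s in (xy, yz, x + y + z))
```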
§.§ Binary Border-free Galois words
We next consider the structure of binary border-free Galois words, denoted 𝒢^bbf, where we assume that
Σ = {a, b} with a<b. The first few such words are a, b, ab, abb.
Suppose w∈𝒢^bbf over Σ = {a, b}, where a < b, with n = |w| ≥ 3. Then w has prefix ab and suffix bb.
By definition w
is primitive and so contains
both letters in Σ.
Also by definition, w is minimum over its rotations,
and so starts with a; since it is border-free, it must therefore end with b.
The claim clearly holds for w = ab^k, k ≥ 2,
as all non-trivial rotations of w start with b making w least in ≺_alt order.
So suppose that w = avb where v∈Σ^+:
* If w[n-1] = a, then the suffix is ab. Let u denote the conjugate of w such that
u = w[n-1]w[n]w[1..n-2], which has prefix ab. Since w is least in ≺_alt order and position 2 is even (where the comparison uses the relation ≥), w[2] cannot be a; hence w[2] = b, so w has prefix ab as well as suffix ab and is therefore bordered — a contradiction. Hence w[n-1] = b and the suffix is bb, as required.
* Since w≠ ab^k, k ≥ 3, v contains an a. Let i be the largest index with w[i] = a, so that 1< i <n-1; since w must end with bb, the (i+1)^th letter is b. Let u' = w[i]w[i+1] … w[n]w[1 .. i-1], which has prefix ab. Since w is least in ≺_alt order, as above w[2] cannot be a, thus yielding prefix ab.
We conclude that w has prefix ab and suffix bb, as required.
We next consider the general form of a binary border-free Galois word w, |w| >3. By Lemma <ref>, if w∈𝒢^bbf, w begins with ab. Trivially if w = ab^h, h>1, then w∈𝒢^bbf. So suppose that w≠ ab^h, thus containing at least two a's — since any conjugate of w starting with b is consistent with w∈𝒢^bbf, we restrict analysis to conjugates starting with a.
Consider a run in w of the form a^k, k>1. Then since w has prefix ab, any conjugate of w starting with a^j, 2 ≤ j ≤ k, is consistent with w∈𝒢^bbf. So we consider conjugates c_ab of w starting with ab and note the index d of the first difference between w and c_ab — since w is primitive, w and c_ab are distinct.
We say a substring u of w is an ab-Galois word if u has prefix ab and contains no other occurrence of ab. Therefore u has the form a b^e a^f, e ≥ 1, f ≥ 0. Then u is a Galois word (not necessarily border-free): clearly u is primitive, and it is less in alternating lexorder ≺_alt than any conjugate of u starting with b and likewise less than any conjugate starting with aa, if such exists.
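Since each ab-Galois factor begins at an occurrence of ab and extends to just before the next one, the factorization ℱ can be computed by splitting at the occurrences of ab. A minimal sketch of ours, assuming w has prefix ab as guaranteed by the lemma above:

```python
def ab_galois_factors(w):
    """Split a binary string with prefix 'ab' at every occurrence of
    'ab'; each factor then has the form a b^e a^f, e >= 1, f >= 0."""
    assert w.startswith("ab")
    cuts = [i for i in range(len(w) - 1) if w[i:i + 2] == "ab"]
    return [w[s:t] for s, t in zip(cuts, cuts[1:] + [len(w)])]

print(ab_galois_factors("abbbabbabbbbb"))  # ['abbb', 'abb', 'abbbbb']
```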
Given a primitive binary string
w,
let ℱ be the factorization of w into ab-Galois words u_1,u_2⋯u_t, such that ℱ = (u_1) (u_2) ⋯ (u_t), t ≥ 1.
If |u_1| = |u_2| = ⋯ = |u_t|, and w is a Lyndon word under 𝒯_lex-order (Definition <ref>) with ≺_alt order, then ℱ∈𝒢^bbf.
Trivially this holds for any conjugates starting with aa or b. For conjugates starting with ab, we first observe that Theorem <ref> shows that alternating lexorder ≺_alt is a candidate for 𝒯-order (Definition <ref>). Then the result is an immediate consequence of maintaining Lyndon properties for conjugates starting with ab-Galois words when generalising lexorder to lex-extension order. In particular, w is border-free.
For the next more general result, we require a refinement of Definition <ref>(2). We define modified alternating lexorder ≺_modalt as follows:
Let x, y∈Σ^+ be distinct nonempty strings, where x is a proper prefix of y. Let S be the largest set of strings in Σ^+ such that for every z∈ S, xz≺_alty, and neither xz is a prefix of y nor is y a prefix of xz. Then, if S ≠∅, x≺_modalty; otherwise, y≺_modaltx.
Modified alternating lexorder ≺_modalt
is a total order over Σ^*.
Consider distinct strings x, y, t∈Σ^+. Assume that x≺_modalty and y≺_modaltt. We need to show that x≺_modaltt.
By assumption and Definition <ref>, x is a nonempty proper prefix of y and y is a nonempty proper prefix of t. Hence we write y=xy', where y' ≠ε, and t = yt', where t' ≠ε. Then, by substitution, we get t= xy't'. Also, by assumption and Definition <ref>, we may suppose S, S' ⊂Σ^+ are nonempty sets satisfying the following conditions:
* xz≺_alty, where for every z∈ S, neither xz is a prefix of y nor is y a prefix of xz.
* yz' ≺_altt, where for every z' ∈ S', neither yz' is a prefix of t nor is t a prefix of yz'.
Since xz is not a proper prefix of y, by Lemma <ref>(1) and (2), we get xz≺_altyz' and xzz”≺_altyz', where z' ∈ S', z”∈Σ^+, respectively. Then, by Theorem <ref>, we get xzz”≺_altt. Since xz is not a prefix of y nor y a prefix of xz, and since y is a prefix of t, we conclude that xz and t are not prefixes of each other, nor are xzz” and t prefixes of each other. Let S” denote the set containing all strings of the form zz”. Clearly, S”≠∅. Therefore, by Definition <ref>, we conclude that x≺_modaltt.
This completes the proof.
We will also apply the concept of a Hybrid Lyndon word, whereby a string x in V-form with x_0 = ε is a Hybrid Lyndon if and only if it is Lyndon under lexicographic extension <cit.>.
Given a primitive binary string w, with |w| > 3, let ℱ be the factorization of w into ab-Galois words u_1,u_2⋯u_t, such that ℱ = (u_1) (u_2) ⋯ (u_t), t ≥ 1.
Then w∈𝒢^bbf if and only if ℱ forms a Hybrid Lyndon word using 𝒯_lex-order (Definition <ref>) with ≺_modalt order.
NECESSITY. Suppose w∈𝒢^bbf with |w| > 3. From Claim <ref>, w starts with ab, namely an ab-Galois word, which is consistent with w being less in ≺_alt order than any conjugate c of w starting with aa or b.
Hence we restrict our analysis to conjugates starting with ab, that is ab-Galois words (factors of ℱ). Consider the comparison of w = (u_1) (u_2) ⋯ (u_t) = w[1..t] and the i^th substring conjugate
c_i of w where both have prefix ab, so that c_i = (u_i+1) ⋯ (u_t) (u_1) ⋯ (u_i) = c_i[1..t]. Since w is primitive, w and c_i are distinct, and as w∈𝒢^bbf then w≺_altc_i, which is decided according to Definition <ref>. Let d be the index of the first pair of distinct ab-Galois factors between w and c_i, that is
w[1..d-1] = c_i[1..d-1], and let h = min {|w[d]|, |c_i[d]|}.
Then according to Definition <ref>, there are two cases to consider:
Definition <ref>(1). In this case the two factors differ at an index q ≤ h, that is w[g+q] ≠c_i[g+q], where g = ∑_j=1^d-1|u_j|. Since w≺_altc_i, we have w[g+q] R c_i[g+q], where R = < if g+q is odd and R = > if g+q is even. This holds for any conjugate satisfying Definition <ref>(1) and thus this case is consistent with ℱ forming a Hybrid Lyndon word using 𝒯_lex-order (Definition <ref>) with ≺_modalt order (which is in fact regular ≺_alt order here).
Definition <ref>(2) (Notation as in part (1).) In this case, with w[d] ≠c_i[d], one factor is a proper prefix of the other factor. Since w∈𝒢^bbf and w≺_altc_i, if w[d] is a proper prefix of c_i[d], then |u_d| = h < |c_i[d]|, whereby d<t. Then applying ≺_modalt order,
w[d+1] has prefix ab while, by definition of an ab-Galois word,
c_i[d] has only one occurrence of ab, which occurs at its prefix. So the comparison which determines w≺_altc_i is between w[d+1] and c_i[d], in particular between an a and b or between ab and aa with the appropriate relations (< / >). Similarly, if c_i[d] is a proper prefix of w[d], then ≺_modalt order applies as appropriate to ensure w≺_altc_i.
This ordering holds for any conjugate satisfying Definition <ref>(2) and thus this case is consistent with ℱ forming a Hybrid Lyndon word using 𝒯_lex-order (Definition <ref>) with ≺_modalt order.
SUFFICIENCY. Suppose the Hybrid Lyndon condition in the statement holds for a primitive binary string w. Clearly, w is less than any conjugate c of w starting with aa or b with respect to ≺_alt order. So consider any conjugate c starting with an ab-Galois word. Accordingly, the Hybrid Lyndon constructed from 𝒯_lex-order (Definition <ref>) with ≺_modalt order guarantees that w≺_altc for all conjugates starting with an ab-Galois word. Hence all forms of conjugates are considered. Furthermore, from Lyndon properties, w is border-free. Thus, w∈𝒢^bbf.
Let w = abbbabbabbbbb be a binary primitive string, so that ℱ = (abbb) (abb) (abbbbb) with t = 3. Then, for instance, w = (abbb) (abb) (abbbbb) ≺_modalt (abb) (abbbbb) (abbb) since (abbb) ≺_modalt (abb) (abbbbb),
and since (abbb)(abb) ≺_modalt (abbbbb), w = (abbb) (abb) (abbbbb) ≺_modalt (abbbbb) (abbb) (abb).
In considering whether the set 𝒢^bbf forms an UMFF, as in Lemma <ref>, let u = abbbbb and v = ababbb, where u,v∈𝒢^bbf. But observe that uv is bordered with border abbb, indicating that 𝒢^bbf may not form an UMFF. On the other hand, applying Lemma <ref> with xy = v and yz = u, where y = abbb, yields xyz = ababbbbb, which does belong to 𝒢^bbf indicating 𝒢^bbf may form an UMFF.
§ UMFF TO CIRC-UMFF CONSTRUCTION
In this final section we propose a method for circ-UMFF construction that is based on
Theorem 4.1 in <cit.> — which states that any binary border-free UMFF can be enlarged to a binary circ-UMFF.
Our algorithm takes as input
a binary border-free UMFF 𝒲 of size n, where the words are given in increasing length, and outputs an enlargement of 𝒲 to a finite circ-UMFF which is necessarily border-free. We assume |𝒲| > 2; otherwise, the UMFF would contain only the alphabet. Thus we may choose a known circ-UMFF such as binary Lyndon words or binary V-words. The algorithm is in 3 stages:
* input the UMFF and record where word lengths change;
* generate all strings in Σ^+ up to the length |w_n| of the longest word in 𝒲 (for binary Σ, 2^(|w_n|+1) - 2 such strings);
* generate the primitive border-free circ-UMFF words using w_3=[w_1 … w_m]; that is, [w_m w_1] is added to the circ-UMFF (assumed necessarily).
The subset of Σ^* is computed in Algorithm <ref>, and all variables are assumed to be global.
Algorithm <ref> correctly computes a finite circ-UMFF from a finite binary border-free UMFF.
This follows from the method of the proof of Theorem 4.1 in <cit.> which states that any border-free binary UMFF can be enlarged to a binary circ-UMFF. All words in Kleene star up to a specified length are considered for the new circ-UMFF and each candidate word is checked to be primitive — all as required by Definition <ref>.
The rationale of Algorithm <ref> is as follows.
Given, for instance, the border-free UMFF 𝒢^bf over Σ = {a,b}, we consider extending a subset of 𝒢^bf to a new circ-UMFF which is necessarily border-free.
For example, suppose abababbb ∈𝒢^bf, and consider the primitive string w = ababba, where w∈{𝒢∖𝒢^bf}; that is, w is a bordered Galois word. According to Definition <ref>, we need to choose one border-free conjugate from the conjugacy class of w — Lyndon words show that a border-free conjugate of a primitive string always exists, so there will be at least one border-free conjugate to choose from. For example, applying Theorem <ref>(5), we select the border-free conjugate aababb of w, where a, ababb ∈𝒢^bf, and note that the bordered word ababba is the Galois conjugate of w while aababb happens to be the Lyndon conjugate of w. Note further that, with reference to Lemma <ref>, abababbb and aababb are mutually border-free and so we cannot generate another word for the new circ-UMFF from them by applying Lemma <ref>.
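Stages (2) and (3) admit a compact sketch. The Python below is our simplified reading rather than the paper's exact Algorithm <ref>: for every primitive conjugacy class up to the length bound that is not already represented in the input UMFF, it adds the lexicographically least border-free conjugate — a Lyndon conjugate, which always exists. On the input 𝒱 = {a, b, abb, ababb} of Example <ref> below, this simple choice reproduces the circ-UMFF 𝒲 listed there.

```python
from itertools import product

def rotations(w):
    return {w[d:] + w[:d] for d in range(len(w))}

def is_border_free(w):
    return all(w[:k] != w[-k:] for k in range(1, len(w)))

def enlarge_to_circ_umff(W, max_len, alphabet="ab"):
    """Add one border-free conjugate per unrepresented primitive
    conjugacy class, choosing the lex-least border-free (Lyndon)
    conjugate as in the rationale above."""
    chosen, seen = set(W), set()
    for n in range(1, max_len + 1):
        for tup in product(alphabet, repeat=n):
            w = "".join(tup)
            cls = rotations(w)
            key = min(cls)            # canonical class representative
            if len(cls) < n or key in seen:
                continue              # not primitive, or class done
            seen.add(key)
            if not cls & chosen:      # class not yet represented
                chosen.add(min(c for c in cls if is_border_free(c)))
    return chosen

V = {"a", "b", "abb", "ababb"}
print(sorted(enlarge_to_circ_umff(V, 5), key=lambda s: (len(s), s)))
```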
Algorithm <ref> generates a finite circ-UMFF but can be easily modified to construct an infinite circ-UMFF.
We illustrate the algorithm by extending the border-free UMFF in Lemma <ref> to a circ-UMFF:
Given an UMFF 𝒱 over Σ = {a,b} consisting of a subset of border-free Galois words, we enlarge 𝒱 to the circ-UMFF 𝒲 for words up to length 5; 𝒱 and 𝒲 are specified below where the Galois words in 𝒲 are underlined:
* 𝒱 = {a, b, abb, ababb}
* 𝒲 = {a, b, ab, aab, abb, aaab, aabb, abbb, aaaab, aaabb,aabab, aabbb, ababb,
abbbb}
For instance, string x = b b ababb aaabb a b aabbb a a factors uniquely and maximally:
* over 𝒱, x =(b) (b) (ababb) (a) (a) (abb) (a) (b) (a) (abb) (b) (a) (a)
* over 𝒲, x = (b) (b) (ababb) (aaabb) (ab) (aabbb) (a) (a)
* but not necessarily uniquely over 𝒢, x = (b) (b) (ababbaaabbabaabbbaa)
We believe this is the first example of a unique factorization family which is based on more than one type of ordering methodology.
Regarding factoring strings over circ-UMFFs which are derived from mixed methods, such as 𝒲 in Example <ref>, this can be achieved efficiently using the generic algorithmic framework given in <cit.>.
§ CONCLUDING COMMENTS
The concept of circ-UMFFs for uniquely factoring strings is a generalization of Lyndon words, which are known to be border-free string conjugates. The literature includes instances of circ-UMFFs for both regular and indeterminate (degenerate) strings. In this paper we have extended current knowledge on circ-UMFF theory including a further generalization to substring circ-UMFFs. Known instances of circ-UMFFs have been defined using a total order over Σ^*, such as V-order for arbitrary alphabets generating V-words, binary B-order generating B-words, and lex-extension order generating indeterminate Lyndon words.
We establish here that given any total ordering methodology 𝒯 over Σ^*, and a subset of Σ^* consisting of border-free conjugates minimal in 𝒯-order, the subset defines a circ-UMFF. An analogous result is established for substring circ-UMFFs. The border-free requirement is illustrated using Galois words by showing that they do not necessarily yield unique maximal string factorization — Galois words are thus worthy of deeper investigation in this context.
We have also delved further into the relationship between Lyndon and V-words, in particular showing that
there are infinitely more V-words than Lyndon words. Other novel concepts are illustrated throughout.
§.§ Open Problems
* Non-unique factorization of a string is not necessarily a bad thing, since having multiple factorizations of a given string x, such as with Example 44 in <cit.>, opens avenues for choice. Accordingly, we propose future UMFF-based research into optimization-type problems for string factoring over a fixed alphabet Σ, such as finding the factorization of x with the least/greatest number of factors. Related research on controlling Lyndon factors and influencing the run number of the Burrows-Wheeler transform by manipulating the order of Σ has been investigated in <cit.>.
* Generalize Algorithm <ref> for enlarging a border-free UMFF to a circ-UMFF from binary to general alphabet.
Observe that Example <ref> on a ternary alphabet shows that two conjugates from the same conjugacy class — namely abc and cab — can belong to an UMFF 𝒲. So this 𝒲 cannot be extended to a circ-UMFF, which by Definition <ref> must contain exactly one conjugate from each conjugacy class. However, as the example shows, 𝒲 is not border-free.
* Application of distinct string ordering methods over Σ^* and associated circ-UMFFs has been studied in the context of the Burrows-Wheeler transform and indexing techniques <cit.>. There is much scope for discovery of new UMFFs and circ-UMFFs, and subsequent exploration of new instances, both combinatorially and algorithmically, over a variety of alphabets.
* The novel entity of circ-UMFFs defined using mixed methods identified in this work is worthy of thorough investigation. For instance, with reference to Example <ref>, where 𝒲 is the union of Galois and Lyndon words, the following question arises:
given UMFFs 𝒳, 𝒴, both over Σ, and defined using ordering method Ω_𝒳, Ω_𝒴, respectively, where 𝒳 generates both bordered and border-free words, then for each primitive bordered word b in 𝒳, choose a border-free conjugate of b from 𝒴 for the new circ-UMFF. Since in Example <ref> Ω_𝒳 is alternating lexorder and Ω_𝒴 is lexorder, can Ω_𝒴 always be lexorder — since every conjugacy class of a primitive word contains a border-free Lyndon word?
§.§ Acknowledgements
Funding: The second and third authors were funded by the Natural Sciences & Engineering Research Council of Canada [Grant Numbers: RGPIN-2024-06915 and RGPIN-2024-05921, respectively].
|
http://arxiv.org/abs/2409.02850v2 | 20240904162057 | Oops, I Sampled it Again: Reinterpreting Confidence Intervals in Few-Shot Learning | [
"Raphael Lafargue",
"Luke Smith",
"Franck Vermet",
"Mathias Löwe",
"Ian Reid",
"Vincent Gripon",
"Jack Valmadre"
] | cs.LG | [
"cs.LG",
"cs.AI",
"stat.ML",
"68T06",
"I.2; I.4; I.5; G.3"
] |
§ ABSTRACT
The predominant method for computing confidence intervals (CI) in few-shot learning (FSL) is based on sampling the tasks with replacement, i.e. allowing the same samples to appear in multiple tasks. This makes the CI misleading in that it takes into account the randomness of the sampler but not the data itself. To quantify the extent of this problem, we conduct a comparative analysis between CIs computed with and without replacement. These reveal a notable underestimation by the predominant method. This observation calls for a reevaluation of how we interpret confidence intervals and the resulting conclusions in FSL comparative studies. Our research demonstrates that the use of paired tests can partially address this issue. Additionally, we explore methods to further reduce the (size of the) CI by strategically sampling tasks of a specific size. We also introduce a new optimized benchmark, which can be accessed at <https://github.com/RafLaf/FSL-benchmark-again>.
§ INTRODUCTION
The recent surge of interest in few-shot learning (FSL), driven by its potential applications in many real-world scenarios, has led to a proliferation of new methods and novel experimental protocols <cit.>. If in conventional machine learning it is common to benchmark methods using a fixed split into training and validation sets, FSL presents unique challenges due to its reliance on extremely small and, consequently, biased training datasets. In fact, the performance of FSL can dramatically depend on the choice of the given labeled training samples <cit.>.
One question that FSL shares with conventional machine learning is that of the best performing methods.
Especially relevant to FSL, the high variance of measured performance based on the choice of labeled data has led practitioners to quickly adopt the standard of aggregating statistics over a large number of artificially generated tasks, stemming from a single (or a few distinct) dataset(s). The predominant approach is to generate artificial few-shot tasks by randomly sampling the same dataset with replacement, i.e. permitting the same samples to appear across multiple tasks. The outcome of these numerous tasks is the calculation of an average accuracy and its associated confidence interval (CI) for each method, thereby providing researchers with a statistically relevant basis for comparing the efficacy of different methods.
By allowing the same samples to appear in multiple tasks, the computed CIs account for the randomness of the sampler but not the data itself. In fact,
the computed CIs use the usual
Lindeberg-Lévy Central Limit Theorem (CLT). Hence, for
these CIs to be statistically valid, the underlying random variables must be independent and identically distributed (IID). This means that the currently reported CIs should be understood as a likely range of outcomes if the experiment were reproduced using exactly the same data, which we will refer to in the remainder of this paper as Closed CIs (CCIs). This contrasts with what is often of interest in many areas of machine learning: the range of outcomes if the experiment were repeated with data from the same underlying distribution, termed Open CIs (OCIs). Note that the latter could be obtained simply by sampling tasks without replacement, but at the cost of considerably restricting the number of different tasks one can generate for a given dataset. This limitation is even more severe if the dataset is small, resulting in potentially larger CIs and inconclusive comparisons between methods.
[Table: Comparison between different methods for few-shot classification. Rows and columns are indexed by feature extractor (CLIP, DINO) and adaptation method (NCC, FT). Each entry consists of three conclusions — With Replacement (Closed), Without Replacement (Open), and Paired Tests (PT) — each indicating whether the row method is significantly better than, significantly worse than, or not conclusively different from the column method. Results derive from the DTD test split (bottom-left triangle) and the Traffic Signs test split (top-right) of MetaDataset, with task sampling at 5 shots, 5 ways, and 15 queries. Note the inversion discussed in the text. NCC (Nearest Class Centroid), FT (Fine-tune), with CLIP and DINO as feature extractors.]
The purpose of this paper is to highlight this crucial consideration when computing CIs. We propose strategies to address this issue and obtain meaningful comparisons while still accounting for the randomness of the data. These strategies rely on a) Paired Tests (PT), where methods are evaluated on the same set of generated tasks, and b) adequately sizing tasks. Throughout the paper, we focus on the specific case of few-shot classification in vision, the most popular area of research in the field of FSL.
Our investigation using Open Confidence Intervals (OCIs) can lead to conclusions that are inconsistent with those obtained using the classical approach in the field of few-shot learning. In particular, we find that some methods previously reported as statistically significantly outperforming others are actually indistinguishable when using OCIs, and vice versa. In addition, we show cases where the use of CCIs leads to (statistically significant) conclusions that are diametrically opposed to those obtained using PT. One such example is given in Table <ref>, where we compare different methods for few-shot classification depending on their feature extractor (here CLIP <cit.> or DINO <cit.>) and their adaptation methods (here Logistic Regression (LR), Nearest Class Centroid (NCC) or Fine-Tuning (FT)). In the table, we report three conclusions for each pair of model and method combinations: the first is obtained from the methodology described in <cit.>, where the authors used the predominant way to compute CIs, that is CCIs; the second uses OCIs; and the third is based on PT. Each entry indicates whether the row method is significantly better than the column method, whether no conclusive statement can be made, or whether the column method is the better performing of the two. The upper triangular values correspond to the Traffic Signs test split while the rest of the table refers to the DTD test split in the Metadataset benchmark by <cit.>. Particularly striking results are found on the Traffic Signs dataset. Indeed, when studied with replacement tests, CLIP with the NCC adapter underperforms DINO with Fine-tuning, yet this outcome is reversed in paired test assessments. This clearly demonstrates the importance of distinguishing between measurement methods, as failing to do so can lead to significant misinterpretations of results.
The main contributions of this work are:
* We highlight the importance of considering data replacement when computing CIs for FSL methods comparison. Our study illustrates the impact on CI ranges when transitioning from closed to open CIs on standard off-the-shelf vision datasets.
* By implementing paired evaluations, wherein multiple methods are compared on identically generated task sets, we demonstrate the ability to reach conclusive comparisons more frequently than when relying on simple performances with OCIs.
* We investigate how to optimize task generation from a given dataset, taking into account its size and number of classes, to reach small ranges of CIs and lead to more conclusive comparisons between methods. The result is a benchmark that can be used for few-shot classification of images.
§ CLOSED CIS VS. OPEN CIS
In this section, we are interested in better quantifying the difference between CCIs and OCIs. For this purpose, we define notations and outline algorithms for task sampling. We present a theoretical analysis of the differences between CCIs and OCIs. Finally, we empirically compare their ranges on real datasets.
§.§ A mathematical description of the problem
§.§.§ Standard Evaluation and Notations
The predominant method of evaluation in the field of few-shot classification is described in Algorithm <ref>. A few-shot classification task 𝒯 = (𝒦, 𝒮, 𝒬) comprises a set of classes 𝒦, a support set 𝒮 = {𝒮_c}_c ∈𝒦 and a query set 𝒬 = {𝒬_c}_c ∈𝒦 where 𝒮_c, 𝒬_c denote the sets of support and query examples for each class c ∈𝒦.
Let K = |𝒦| denote the number of ways (i.e. classes in a few-shot task), S = |𝒮_c| the number of shots per class, and Q = |𝒬_c| the number of queries per class (for simplicity, we assume the classes to be balanced).
Few-shot evaluation is typically performed by constructing many tasks from a larger evaluation dataset.
An evaluation dataset 𝒟 = (𝒞, 𝒳) comprises a set of classes 𝒞 and examples for all classes 𝒳 = {𝒳_c}_c ∈𝒞.
Let C = |𝒞| ≥ K denote the number of classes, and N = |𝒳_c| ≫ S + Q the number of examples per class.
As highlighted in the introduction, the rationale behind the predominant CI computation method is to consider 𝒟 as fixed and non-probabilistic.
The standard few-shot task sampler constructs T random tasks with K ways, S shots and Q queries from a dataset 𝒟 as outlined in Algorithm <ref>.
This procedure introduces additional random variables (besides the dataset itself) in the selection of classes and examples.
Let 𝒯_t = (𝒦_t, 𝒮_t, 𝒬_t) for t = 1, …, T denote the sampling of tasks.
The average accuracy on each task is obtained as:
A_t = 1/KQ∑_c ∈𝒦_t∑_x ∈𝒬_t,c1[f_𝒮_t(x) = c],
where f is the evaluated model, conditioned on the support set.
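As a concrete reference — our own sketch, with a nearest-class-centroid classifier standing in for the generic model f — the sampler of Algorithm <ref> and the per-task accuracy A_t can be written as:

```python
import numpy as np

def sample_task(X, K, S, Q, rng):
    """One K-way, S-shot task with Q queries per class. Classes and
    examples are drawn anew for each task, so the same example may
    reappear in later tasks (sampling with replacement across tasks).
    X maps a class label to an (N, d) array of features."""
    support, query = {}, {}
    for c in rng.choice(sorted(X), size=K, replace=False):
        idx = rng.choice(len(X[c]), size=S + Q, replace=False)
        support[c], query[c] = X[c][idx[:S]], X[c][idx[S:]]
    return support, query

def ncc_accuracy(support, query):
    """A_t: fraction of the K*Q queries that a nearest-class-centroid
    classifier built on the support set assigns to their true class."""
    labels = sorted(support)
    centroids = np.stack([support[c].mean(axis=0) for c in labels])
    hits = [labels[np.argmin(np.linalg.norm(centroids - x, axis=1))] == c
            for c in labels for x in query[c]]
    return float(np.mean(hits))
```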
Next, we turn to the actually reported metric which is the average accuracy over several tasks.
§.§.§ Computing Confidence Intervals
We compute the accuracy A_t of the method on each task, and then take the mean across tasks A̅ = 1/T (A_1 + ⋯ + A_T). As such, we obtain the following formula for the variance:
Var[A̅] = (Var[A_1] + … + Var[A_T])/T^2 = Var[A]/T .
Assuming a sufficient sample size and that the mean
A̅ is normally distributed according to the Central Limit Theorem, the 95% confidence interval is obtained as A̅± 1.96 σ_A̅ using the formula for standard error (standard deviation of the sample mean):
CI = 1.96 σ_A̅ = 1.96 σ_A/√(T) .
Note that in the case of a very small number of tasks, Student's distributions can be used instead. Also, we took the example of 95% CIs, which is arbitrary but very common in the literature. For more generality, we consider a probability p_limit in the following for all theoretical considerations, and stick with the 95% value for experiments.
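For reference, a small helper of ours implementing this estimate (the Student's t quantile is used in place of 1.96, to which it reduces for large T at the 95% level):

```python
import numpy as np
from scipy import stats

def mean_and_ci(accuracies, level=0.95):
    """Mean accuracy over T tasks and the half-width of its CI:
    q * sigma_A / sqrt(T), with q the t quantile (~1.96 for large T)."""
    acc = np.asarray(accuracies, dtype=float)
    T = len(acc)
    q = stats.t.ppf(0.5 + level / 2, df=T - 1)
    return acc.mean(), q * acc.std(ddof=1) / np.sqrt(T)
```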
Note that, as the number of tasks becomes larger, it is inevitable that many tasks will re-use examples, since tasks are constructed independently with replacement.
Therefore, as T becomes large, the variance of the sample mean will approach the conditional variance Var[A̅|𝒟], and the confidence interval will represent the likely range of outcomes if we were to repeat the experiment with a random set of tasks on the same dataset. In other words, it provides no insight into how well a method would generalize on a distribution.
On the other hand, if T is chosen small enough, there will be minimal re-use of examples and the assumption of independence may approximately hold, although the confidence interval may be significantly larger.
We will now conduct an empirical evaluation of the disparity between closed (tasks sampled with replacement) and open (tasks sampled without replacement) CIs on real datasets.
§.§ Are OCIs larger than CCIs? An empirical study
In contrast to Algorithm <ref>, task sampling without replacement is presented in Algorithm <ref> (see Appendix). Note that we make explicit use of a Student's t estimator in this algorithm, as some small datasets can generate only a few independent tasks.
In the latter, the total number of tasks T is determined directly by the sampling process, thanks to a specific stopping condition, based on the exhaustion of the dataset. This is done in an effort to minimize the obtained CIs' ranges. Since samples cannot be sampled twice, we can consider that the classes and examples are drawn IID from an underlying data distributions p(C) and p(X | C). This is why OCIs also account for the randomness of the data.
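Our reading of this sampler, as a sketch (classes may recur across tasks as long as enough unused examples remain, but no example is ever reused; the exact stopping rule is that of Algorithm <ref>):

```python
import numpy as np

def sample_tasks_without_replacement(X, K, S, Q, rng):
    """Draw disjoint tasks until the dataset is exhausted (OCI regime).
    'pools' holds the not-yet-used example indices of each class."""
    pools = {c: list(rng.permutation(len(X[c]))) for c in X}
    tasks = []
    while True:
        avail = [c for c in pools if len(pools[c]) >= S + Q]
        if len(avail) < K:                       # dataset exhausted
            return tasks
        task = {}
        for c in rng.choice(avail, size=K, replace=False):
            take, pools[c] = pools[c][:S + Q], pools[c][S + Q:]
            task[c] = (X[c][take[:S]], X[c][take[S:]])
        tasks.append(task)
```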
In our experiments, we utilize datasets from the Metadataset Benchmark as referenced in <cit.>. This benchmark comprises 10 datasets, out of which we employ 9, excluding Imagenet, to focus on cross-domain results in line with the recent trend in the literature <cit.>. These include Omniglot (handwritten characters), Aircraft, CUB (birds), DTD (textures), Fungi, VGG Flowers, Traffic Signs, Quickdraw (crowd-sourced drawings) and MSCOCO (common objects) <cit.>.
<cit.> details few-shot accuracies for 2000 tasks with 5-shots, 5 ways, and 15 queries in a comprehensive table covering various works on the Metadataset datasets. Our study's only difference lies in the adoption of the T=600 setting, a more prevalent choice in existing literature. If CCIs are found to be narrower than OCIs with this smaller T, it will be even starker with T=2000 tasks as shown in Equation <ref>. Our primary reference for methods and models is the comprehensive compilation provided by <cit.>, a foundational starting point for our experiments.
Our findings are detailed in Table <ref>, showcasing results across different few-shot methods and datasets. Firstly, there is a noticeable homogeneity in CCIs, arising from the fixed number of tasks set at T=600, which contrasts with the variability observed in OCIs. Interestingly, CCIs are substantially narrower than OCIs for small datasets such as Aircraft and DTD. Conversely, in the case of larger datasets like Quickdraw, CCIs become larger than OCIs due to T=600 being insufficient to deplete the dataset. Indeed, Aircraft and DTD's test splits contain 1,500 and 840 samples respectively, whereas the test splits for MSCOCO and Quickdraw have much larger sizes of 152,000 and 7.7 million samples respectively. Across various datasets, models and methods, CCIs are on average 3.8 times larger than OCIs. These results highlight the imperative need for accurate interpretation of confidence intervals, given the dramatic differences between OCI and CCI ranges, which would undoubtedly lead to disagreeing conclusions if misinterpreted.
We also notice that for cases where methods reach accuracies near 100%, like adaptation methods using CLIP (unlike those using DINO) on the CUB dataset, both types of CIs become narrower. This is due to accuracy saturation at 100%, which reduces the standard deviation of accuracies.
In the following, we delve into the conclusiveness of comparative studies using CCIs or OCIs.
§.§ Impact on Conclusiveness
First, let us recall how confidence intervals are used to draw conclusions when comparing methods. Suppose we have two variables of interest x_1 and x_2, with their corresponding p_limit-confidence intervals (a generalized version of 95%-CIs) [x̅_1 - δ_1, x̅_1 + δ_1] and [x̅_2 - δ_2, x̅_2 + δ_2]. To draw conclusions about the fact that x_1 is smaller than x_2, we proceed as follows: if the two intervals do not intersect, and x̅_1 + δ_1 < x̅_2 - δ_2, then:
P(x_1<x_2) > P(x_1 < x̅_1 + δ_1 ∧ x_2 > x̅_2 - δ_2 ) = (1-(1 - p_limit)/2)^2 > p_limit ,
where the (1-p_limit)/2 part comes from the symmetry of the Gaussian distribution.
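In code, the decision rule reduces to an interval-overlap test (a small helper of ours):

```python
def compare_methods(mean1, half1, mean2, half2):
    """Conclusive only when the two confidence intervals are disjoint."""
    if mean1 + half1 < mean2 - half2:
        return "method 2 significantly better"
    if mean2 + half2 < mean1 - half1:
        return "method 1 significantly better"
    return "non-conclusive"
```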
[Figure: Scatter plot of task accuracies using two different combinations of feature extractor/adaptation methods on the Traffic Signs benchmark.]
With this in mind, the results listed in Table <ref> can lead to different conclusions depending on whether CCIs or OCIs are used.
For example, using the DTD dataset, CCIs lead to conclusive comparisons between both backbones and methods, whereas OCIs are inconclusive as they all intersect. Again, this should not be seen as a contradiction, but rather as having different paradigms about the comparisons. CCIs compare methods if they were reused on the same data, whereas OCIs focus on the underlying distribution. Next, we propose two methods to improve conclusiveness when comparing methods, namely paired tests in Section <ref> and task sizing in Section <ref>.
§ PAIRED TESTS
§.§ Definitions
As an effort to reduce the ranges of Open Confidence Intervals (OCIs), we propose to make use of paired tests.
Indeed, as we pointed out in the introduction, FSL tasks have a vast diversity in difficulty, leading to a high variance in accuracy across tasks. It is noteworthy that a task deemed hard for method A often aligns in difficulty for method B. This parallel in task difficulty across different methods was previously identified by <cit.>, and our findings are in agreement, as illustrated in Figure <ref>, where we plot the accuracies on tasks generated from the Traffic Sign dataset and using two different combinations of feature extractors and adaptation method. In the provided figure, a strong correlation, quantified at 0.675, is evident between two distinct methods. This highlights the potential for reducing the variance in accuracies resulting from task sampling by employing paired testing.
Formally, let us denote Δ_t = A_t - B_t the accuracy of the method on task t relative to a method with accuracy B_t, with Δ̅ = 1/T∑_t = 1^TΔ_t the mean difference across tasks.
While the mean of differences is simply the difference of the means, 𝔼[Δ̅] = 𝔼[A̅] - 𝔼[B̅], the variance of the difference may be significantly reduced when the accuracies are positively correlated:
Var[Δ̅] = Var[ 1/T∑_t = 1^TΔ_t] = 1/TVar[ A_t - B_t ] ,
since Var[X - Y] = Var[X] + Var[Y] - 2 Cov(X, Y).
The lower variance of Δ_t compared to A_t results in a correspondingly smaller confidence interval, as detailed in Equation <ref>. Consequently, this leads to scenarios where two methods can exhibit significant differences when analyzed using paired testing despite there being no significant differences when directly comparing accuracies.
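A paired-test sketch of ours, operating on two per-task accuracy series evaluated on the same tasks; the comparison is conclusive when the CI on the mean difference excludes zero:

```python
import numpy as np
from scipy import stats

def paired_test(acc_a, acc_b, level=0.95):
    """Mean per-task difference and its CI half-width; positive
    correlation shrinks Var[A - B] = Var[A] + Var[B] - 2 Cov(A, B)."""
    delta = np.asarray(acc_a) - np.asarray(acc_b)
    T = len(delta)
    q = stats.t.ppf(0.5 + level / 2, df=T - 1)
    return delta.mean(), q * delta.std(ddof=1) / np.sqrt(T)
```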
We performed experiments comparing various methods to fine-tuning (FT). Results are shown in Table <ref>, where each line corresponds to a specific dataset and feature extractor and each column to a combination of an adaptation method and a feature extractor. Two conclusions are depicted, the first being based on directly comparing accuracies, and the second using paired tests instead. Note that in all cases, we rely on OCIs, that is to say sampling without replacements. Let us also notice that paired tests never lead to contradictory conclusions with direct comparisons of accuracies, a direct consequence of previous remarks about the mean of differences. Their difference lies in the ability to conclude or not.
The fine-tuning adaptation method is selected as the baseline in Table <ref>, primarily due to its substantial computational cost. The cost of fine-tuning stems from the necessity to update the weights of the entire feature extractor for each considered task. The objective is to determine if it surpasses more cost-effective methods in performance. In comparative analyses excluding the comparison between CLIP and DINO and focusing only on the same feature extractor, it is frequently noted that fine-tuning either underperforms relative to other methods or the results are inconclusive. Indeed, fine-tuning significantly outperforms other methods in only 4 out of 36 cases, while underperforming in 14 cases. The outcomes in the remaining instances remain inconclusive. Consequently, we conclude that fine-tuning, in light of its considerable computational overhead, is not particularly advantageous to consider. These results should be nuanced by the dependency of fine-tuning on hyperparameters which makes any assertions about this method contingent upon a specific set of hyperparameters <cit.>.
For each of the nine considered datasets, we have a total of 135 unique comparisons, taking into account only distinct pairs of (model, methods) across two models and three methods. We found that 57 comparisons were conclusive using direct comparison with OCI while 94 were conclusive using paired tests. Figure <ref> in the Appendix illustrates this.
Table <ref> illustrates, on the DTD and Traffic Signs datasets, that the three different approaches for computing CIs discussed so far result in varied assessments of the significance of differences between two methods. On all datasets, in 27 of the 114 instances where the comparison with replacement was conclusive (∼23% of such cases), a pattern emerges: a comparison initially classified as significantly different becomes non-conclusive under sampling without replacement, and then conclusive again when a paired test is applied. In 12 instances (∼11% of previously considered cases), conclusiveness is not confirmed by the paired test.
Particularly striking is one instance of inversion, where a method previously deemed significantly more accurate than another was found to be significantly less accurate using a paired test. This reversal was observed in the comparison of Fine-tuning on DINO versus NCC with CLIP features on the Traffic Signs dataset. This implies that a method can significantly outperform another on a specific dataset, yet significantly underperform when evaluated across the entire distribution. The dataset is thus a particular instance of data that favors one method. This example powerfully exemplifies the need for clarity when interpreting CIs.
A natural question that arises from previous considerations is that of how to size tasks when performing sampling without replacement, aiming to reduce the range of obtained CIs. In that matter, we consider the size of the support set to be fixed at K S, leaving as the only free variable the number of queries per task and per class Q. Indeed, increasing the number of queries will inevitably reduce the total number of tasks we can construct, as shown in the following equation. Assuming a balanced dataset, we can estimate the number of tasks T that can be sampled by exhausting the full dataset:
T ≈⌊|𝒟|/|𝒯|⌋≈⌊CN/(K(Q+S))⌋ ,
with |𝒯| is the number of samples, accounting for both the support and query sets, in each task.
§ SIZING TASKS TO NARROW OCIS
[Figure: Variance of the average accuracy vs. the number of queries with synthetic data. The two classes are represented as 1D Gaussians 𝒩(-1, 1) and 𝒩(1, 1). The dataset size is N=1000 (500 samples per class). Tasks are sampled according to Algorithm <ref>, and the number of shots is set to 5. The model described in Equation <ref> fits the experiment closely.]
While increasing Q reduces the number of tasks, it also changes √(Var(A_t)), which is proportional to the CI. As such, there exists a trade-off between the number of queries and the feasible number of tasks that can be generated to minimize OCIs for any given dataset.
Intuitively, measuring A̅ with a small T (and consequently a high Q) results in extensive CI ranges, a phenomenon depicted in Equation <ref>. Conversely, measuring with Q=1 may generate many tasks (large T) with an extremely high variance because the accuracy per class becomes either 0% or 100%. In the following, we aim to identify the optimal number of queries, denoted Q^*, that effectively minimizes the variance of the average accuracy A̅ and thus the obtained CIs. We first demonstrate mathematically the existence of such a minimum by deriving Var(A̅).
We show in the Appendix that Var(A̅) can be written as:
Var( A̅) = K/NC (α Q+ β/Q + γ),
with α, β and γ defined in the Appendix, and β >0.
These parameters are difficult to estimate, in particular when dealing with real datasets and methods. If α≤ 0, then Var( A̅) is decreasing as a function of Q since Q∈ℕ. In the following, we focus only on cases where α>0. This choice is supported by empirical evidence, which we will present later, indicating a U-shaped relationship between the variance of A̅ and Q for a certain range of S. Assuming this, Var( A̅) reaches its minimum at Q^*=√(β/α). Next, we study what this entails as S and N vary.
§.§ Effect of S and N on Q^*
Given the definition of α and β obtained in Equation <ref>, we find that Q^* is an increasing function of S and a constant function of N. In this section, we show that these results are confirmed empirically on real datasets.
We propose to study the aforementioned variance model of A̅ with respect to Q in a simplistic 1D representation of samples. In our model, the two class distributions are represented as two Gaussians (𝒩_i = 𝒩(μ_i, σ_i), i∈{1,2}). We then sample an artificial balanced dataset of fixed size N. Next, we sample tasks within this artificial dataset until exhaustion with the procedure described in Algorithm <ref>, setting K=C=2. Using the NCC classifier, we obtain a set of accuracies from which we compute the average accuracy A̅. This procedure, consisting of instantiating the synthetic dataset from the Gaussians and measuring A̅, is iterated. This yields a set {A̅_j }_j for a given set of parameters {S,Q,N,𝒩_1,𝒩_2}, from which we compute an empirical variance Var(A̅).
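A self-contained re-implementation of this simulation and of the model fit follows (ours; the repetition count and the range of Q are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_acc_ncc(N=1000, S=5, Q=15):
    """One run: instantiate two 1-D Gaussian classes N(-1,1), N(1,1),
    cut the dataset into disjoint 2-way tasks, classify queries by the
    nearest class centroid, and return the average accuracy A-bar."""
    data = {c: rng.normal(2 * c - 1, 1, N // 2) for c in (0, 1)}
    accs = []
    for t in range((N // 2) // (S + Q)):         # exhaust the dataset
        lo = t * (S + Q)
        mu = [data[c][lo:lo + S].mean() for c in (0, 1)]
        hits = [int(np.argmin([abs(x - m) for m in mu]) == c)
                for c in (0, 1) for x in data[c][lo + S:lo + S + Q]]
        accs.append(np.mean(hits))
    return np.mean(accs)

# Empirical Var(A-bar) vs Q, then a least-squares fit of
# alpha*Q + beta/Q + gamma (constants absorbed) to locate Q*.
Qs = np.arange(1, 40)
var = np.array([np.var([mean_acc_ncc(Q=int(q)) for _ in range(200)])
                for q in Qs])
A = np.column_stack([Qs, 1.0 / Qs, np.ones(len(Qs))])
alpha, beta, gamma = np.linalg.lstsq(A, var, rcond=None)[0]
print("estimated Q* =", np.sqrt(beta / alpha))
```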
In Figure <ref>, we show the measured Var(A̅) vs. Q. We use 1000 samples in the dataset, split between the two classes, and set the number of shots to S=5. The model in Equation <ref> fits very precisely. The discretisation effect seen at high Q is due to the low number of tasks. Next, we study the effect of S and N and compare it to our experiments with synthetic data.
Increasing S shifts the curve's minimum from Q^* =1 towards Q^* → +∞ as depicted in Figure <ref>. This aligns with our model's predictions. At S=1, opting for Q^*=1 effectively has two effects: (a) the high variance of A_t due to small support and query sets increases the variance of A̅ and (b) low Q allows a significantly larger T, thus reducing the overall variance of A̅ as shown in Equation <ref>. Conversely, for S≥ 20, the setting boils down to classical transfer learning. Indeed, the narrowest CI is attained with one task with a large support and query set task.
Finally, we find a third regime where, for a range of values of S, Q^* is nontrivial. It corresponds to what is shown in the S=5 regime in Figure <ref>.
When increasing N, Q^* should not be affected according to Equation <ref>. However, we observe a hardly perceptible shift of Q^* in Figure <ref> when N is increased. We consider this effect to be sufficiently small and therefore negligible. Next, we explore how these results extend to real-world datasets.
§.§.§ Real Dataset Experiments
We now shift our focus to the findings derived from real image datasets. These datasets are often unbalanced and exhibit a variety of class numbers. Our objective is to determine whether the earlier conclusions remain valid in this context.
First, we observe that Q^* does not depend on the size of the dataset. However, the size of the dataset scales T, which in turn scales CI_95%. Similar to the results with synthetic data, we observe a discretisation of the confidence interval in the high-Q (low-T) regime.
We also observe a similar phenomenon with different regimes in 1, 5 and 10 shots in Figure <ref>. Our analysis suggests that for tasks in the 1-shot regime, the number of queries Q should be set to 1. For tasks with S=5 and S=10, the best values for Q are approximately Q=5 and Q=7, respectively. Figure <ref> also clearly shows that the predominant choice of 15 queries is not optimal for narrowing the OCI. We also find similar values of Q^* using DINOv2 instead of CLIP in the Appendix (Figure <ref>).
§ BENCHMARK PROPOSAL
Building on previously obtained results, we propose a simple benchmark where Paired Tests are used and the value of Q is chosen as the minimum found in the previous paragraph. Implicitly, we are assuming that the minimum of Δ's OCI is also reached at Q^*. More precisely, we are assuming the independence of the covariance with respect to Q. These assumptions are backed by the improved number of conclusive comparisons in Figure <ref> when optimizing Q and using paired tests. Indeed, while PT yielded 94 conclusive comparisons, PT with optimized Q yields slightly more, with 97 conclusive comparisons.
We present our results with the baselines adaptation methods previously studied in Table <ref> using DINOv2 as our baseline model. Our experiments consistently show that, for a given model, fine-tuning tends to be less effective than both logistic regression and the nearest class centroid methods. We also observe the choice of model is primordial with a clear advantage of DINOv2 over CLIP and DINO on most datasets. Our benchmark, including the code, seed values, task descriptions, and accuracy results, is available for use.
§ RELATED WORK
Few-Shot Learning Since the seminal works of <cit.> and <cit.>, the field of few-shot learning has seen many contributions. Most solutions rely on the use of a pretrained feature extractor, trained on a generic abundant dataset <cit.>. The feature extractor can then be used as-is on the target problem <cit.>, or adapted before classifying <cit.>. Most proposed methods differ in the way they combine the pretrained feature extractor and its adaptation to the target problem <cit.>. Over the years, multiple benchmarks have been proposed, including MiniImageNet <cit.>, Omniglot <cit.> and TieredImageNet <cit.>. While these benchmarks initially focused on the in-domain case, where the feature extractor is trained on classes disjoint from the target problem yet drawn from the same initial dataset, the trend evolved with the introduction of Meta-Dataset <cit.> (MD) and later COOP <cit.>, where the feature extractor is trained on a large generic dataset and applied to various other domains, including fine-grained problems, embodying a cross-domain evaluation.
A few papers focus on the sampling of tasks of targeted difficulty for few-shot learning <cit.>. In <cit.>, the authors claim that model performance can be improved by sampling meta-learning tasks of increasing difficulty. In other works, failed meta-training tasks (deemed hard) are sampled again <cit.> or previously misclassified samples/classes are more likely to be sampled in following tasks <cit.> to focus the model on difficult tasks. Estimating task difficulty can itself be a difficult task, and several solutions have been proposed <cit.>.
The idea of difficulty-based sampling proposed in <cit.> is relevant to this paper since it enables the sampling of groups of tasks with homogeneous difficulty, effectively reducing the confidence interval ranges. In contrast, our research adopts paired tests, a method that obviates the need for such dependencies and provides a more universally applicable approach. Paired tests are not a novel contribution of our work. They were introduced over a century ago to study the evolution of small populations over time <cit.>. In these seminal studies, the authors showed that individual differences provide more statistical power and insights than changes in averages over the whole population.
Confidence Intervals Confidence intervals were established by the Polish mathematician Jerzy Neyman <cit.> in the early 1930s, coinciding with Fisher's ideas although supported by a different framework. They became more generally used, and then required, in medical research around the 1980s. They require assuming a specific model for the distribution of the considered data. In contrast, the bootstrap method, introduced in <cit.>, offers a distribution-agnostic approach for estimating ranges. We focused on traditional confidence intervals as they are better understood and more often implemented in the few-shot learning literature, but similar conclusions could be drawn using the bootstrap instead.
Challenges in Statistical Interpretation and Methodological Biases This issue of misleading CIs is not isolated to our domain. <cit.> have similarly criticized the common misinterpretations surrounding confidence intervals in broader scientific research. Moreover, the propensity to overlook or underreport negative or null (non-conclusive) results further exacerbates the problem of biased interpretations. <cit.> argues for the importance of acknowledging and analyzing negative results in computer vision. Lastly, the impact of dataset biases on the evaluation metrics has been well-documented by <cit.>. In their work, “Unbiased Look at Dataset Biases”, they identify and measure several biases such as selection, capture and negative set. It serves as a critical reminder of how dataset-based results can differ from those obtained in real-world distributions.
§ LIMITATIONS
A first limitation of our study we would like to point out is that for large datasets such as MSCOCO or QuickDraw, the predominant method of CI calculation leads to intervals that are actually larger than our proposed OCIs. As such, on such large datasets, using CCIs may not be an unfair approximation.
Furthermore, our mathematical modeling only explains the origin of the minimum of CI with respect to Q but does not provide a way to find it analytically since we cannot easily estimate α, β and γ in Equation <ref>.
As mentioned at the end of paragraph <ref>, saturation at 100% accuracy may negatively impact the computation of confidence intervals and particularly the value of paired tests CI in Equation <ref>.
Finally, we point out that paired tests introduce complexity as they require a fixed seed and necessitate saving and publishing individual task accuracies when using the benchmark and comparing methods.
§ CONCLUSION
In our study, we demonstrated the stark contrast between Open and Closed measurements of method accuracy in Few-Shot Learning. Notably, OCIs take into account data randomness but are far wider than CCIs. We identified two major approaches that contribute to narrowing the OCIs and subsequently introduced a benchmark which uses these approaches. Our findings underscore the importance of using confidence intervals that account for data randomness in evaluations, a practice we advocate extending beyond classification and vision to encompass all domains employing task-based few-shot learning assessments.
§ MATHEMATICAL DERIVATION OF Var(A̅)
Supposing tasks are drawn IID without replacement, we write the variance of A̅ as:
Var( A̅) = 1/TVar( A_t ),
with A_t the accuracy for an arbitrary task t. By definition of the variance,
Var( A_t ) =𝔼[(A_t)^2]- (𝔼[A_t])^2.
For a given support set, the expected accuracy for some class c is denoted μ_t,c.
μ_t,c≜𝔼_𝒬_t(1[f_𝒮_t(x) = c] |𝒮_t ).
Then the expectation of A_t becomes
𝔼[A_t]=𝔼_𝒮_t[μ_t,c],
Along the same lines, we can derive 𝔼[(A_t)^2],
𝔼[(A_t)^2]=𝔼_𝒮_t𝔼_𝒬_t[(A_t)^2|𝒮_t].
For a fixed 𝒮_t, 1[f_𝒮_t(x) = c] and 1[f_𝒮_t(x') = c'] are not independent, and their distribution will depend on the classes c and c'. Using Equation <ref>, we obtain:
𝔼[(A_t)^2]= 1/(KQ)^2∑_c ∑_c'∑_x ∑_x'𝔼_𝒮_t𝔼_𝒬_t[ 1[f_𝒮_t(x) = c] 1[f_𝒮_t(x') = c']].
We now separate the cases where x=x', where c=c' but x≠x', and where c≠c'.
𝔼[(A_t)^2]=1/KQ𝔼_𝒮_t[μ_t,c]+
Q-1/KQ𝔼_𝒮_t[(μ_t,c)^2]
+1/K^2∑_c ∑_c'≠ c𝔼_𝒮_t[μ_t,cμ_t,c'].
Using Equation <ref>, we find:
Var( A_t )=1/KQ𝔼_𝒮_t[μ_t,c]+Q-1/KQ𝔼_𝒮_t[(μ_t,c)^2]
+1/K^2∑_c ∑_c'≠ c𝔼_𝒮_t[μ_t,cμ_t,c']- (𝔼_𝒮_t[μ_t,c])^2.
Let us define some parameters,
m_1≜𝔼[A_t]=1/K∑_c𝔼_𝒮_t[μ_t,c],
m_2≜1/K∑_c𝔼_𝒮_t[(μ_t,c)^2],
and
m_3≜1/K^2∑_c ∑_c'≠ c𝔼_𝒮_t[μ_t,cμ_t,c'].
We get that
𝔼[(A_t)^2]=1/KQm_1+Q-1/KQ m_2
+m_3.
This gives
( A_t )=1/KQm_1+Q-1/KQ m_2
+m_3- (m_1)^2.
Then, using Equation <ref> (removing the rounding) and <ref>, we approximate:
Var( A̅) = K/NC (α Q+ β/Q + γ),
with α=m_2/K+m_3-(m_1)^2, β=(S/K)(m_1-m_2) and γ=m_1/K-m_2/K+(S/K) m_2+ S(m_3-(m_1)^2).
First, let us notice that β>0, since μ^2<μ for μ∈ (0,1).
§ ALGORITHMS
Algorithm <ref> samples tasks with replacement and computes CCIs with Equation <ref>, while Algorithm <ref> samples tasks without replacement and uses Student's t distribution to compute OCIs.
§ ADDITIONAL RESULTS ON THE BENCHMARK
These results show the performance differences when taking DINO and CLIP with fine-tuning as baselines. Again, fine-tuning mostly underperforms the other adaptation methods.
§ IS Q^* DEPENDENT ON THE MODEL USED?
We show in Figure <ref> that the same values of Q^* are found when using DINOv2 instead of CLIP.
§ STATISTICS ON CONCLUSIVENESS
This histogram illustrates that paired tests and the optimization of task size yield the maximum number of conclusive comparisons. Optimizing Q slightly improves the number of conclusive comparisons compared to simple paired tests.
§ ACKNOWLEDGMENT
This work was performed using HPC resources from GENCI–IDRIS (Grant 202123656B).
The PhD study of Luke Smith is supported by GSK-plc and the Australian Government Research Training Program (RTP) scholarship.
Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2044 –390685587, Mathematics Münster: Dynamics–Geometry–Structure.
F. Vermet conducted this work within the France 2030 framework programme, the Centre Henri Lebesgue ANR-11-LABX-0020-01
Raphael Lafargue was supported by Brittany Region, France.
|
http://arxiv.org/abs/2409.02575v1 | 20240904095214 | Practical techniques for high precision measurements on near-term quantum hardware: a Case Study in Molecular Energy Estimation | [
"Keijo Korhonen",
"Hetta Vappula",
"Adam Glos",
"Marco Cattaneo",
"Zoltán Zimborás",
"Elsi-Mari Borrelli",
"Matteo A. C. Rossi",
"Guillermo García-Pérez",
"Daniel Cavalcanti"
] | quant-ph | [
"quant-ph",
"physics.chem-ph"
] |
|
http://arxiv.org/abs/2409.02778v1 | 20240904145628 | Regularized Multi-output Gaussian Convolution Process with Domain Adaptation | [
"Wang Xinming",
"Wang Chao",
"Song Xuan",
"Kirby Levi",
"Wu Jianguo"
] | stat.ML | [
"stat.ML",
"cs.LG",
"stat.AP"
] |
§ ABSTRACT
Multi-output Gaussian process (MGP) has been attracting increasing attention as a transfer learning method to model
multiple outputs. Despite its high flexibility and generality, MGP still faces two critical challenges when applied to transfer
learning. The first one is negative transfer, which occurs when there exists no shared information among the outputs.
The second challenge is the input domain inconsistency, which is commonly studied in transfer learning yet not explored in MGP. In this paper, we propose a regularized MGP modeling framework with domain adaptation to overcome these challenges. More
specifically, a sparse covariance matrix of MGP is proposed by using convolution process, where penalization terms are added to
adaptively select the most informative outputs for knowledge transfer. To deal with the domain inconsistency, a domain adaptation
method is proposed by marginalizing inconsistent features and expanding missing features to align the input domains among different
outputs. Statistical properties of the proposed method are provided to guarantee the performance practically and asymptotically. The
proposed framework outperforms state-of-the-art benchmarks in comprehensive simulation studies and one real case study of a
ceramic manufacturing process. The results demonstrate the effectiveness of our method in dealing with both the negative transfer and the domain inconsistency.
Gaussian process, transfer learning, convolution process, domain adaptation.
Regularized Multi-output Gaussian Convolution Process with Domain Adaptation
Xinming Wang, Chao Wang, Xuan Song, Levi Kirby, Jianguo Wu
X. Wang and J. Wu are with the Department of Industrial Engineering and Management, Peking University, Beijing 100089, China.
E-mail: [email protected], [email protected].
C. Wang, X. Song and L. Kirby are with the Department of Industrial and Systems Engineering, the University of Iowa, Iowa City 52242, America.
E-mail: {chao-wang-2, xuan-song, levi-kirby}@uiowa.edu.
(Corresponding authors: Chao Wang and Jianguo Wu)
September 9, 2024
§ INTRODUCTION
Gaussian process regression (GPR) model has been gaining widespread applications in many fields, e.g., computer experiments, geostatistics, and robot inverse dynamics <cit.>.
As a powerful nonparametric method, it possesses many desirable and important properties, including excellent fitting capability for various functional relationships under some regularity conditions, providing not only predictions but also uncertainty quantification, and more importantly having closed-form expressions for both tasks.
The conventional GPR models are designed for single-output cases, i.e. the output is a scalar, which have been extensively studied in various applications <cit.>.
Recently, there has been a growing interest in extending GPR models to multiple outputs, which are ubiquitous nowadays.
A straightforward way to deal with multiple outputs is known as multi-kriging, which constructs models for each output independently <cit.>.
It is clear that multi-kriging ignores the covariance among outputs, especially when there is strong evidence for the existence of such a relationship resulting from physics or constraints.
Hence, the multi-output Gaussian process (MGP), which can model correlation among outputs, has been attracting more attention as a joint prediction model.
The study of MGP began in the geostatistics community, where it has been known as co-kriging for the past few decades <cit.>.
In today's machine learning community, it is usually known as a multi-task learning method <cit.>, which aims to learn all tasks/outputs simultaneously to achieve better model generalization.
For example, in the anomaly detection for a manufacturing process, the joint modeling of multiple closely related sensor signals using MGP can help detect the anomaly in each signal more efficiently <cit.>.
However, despite its wide applications, MGP-based multi-task learning requires that the data among these outputs are balanced, which might not be the case in practice.
For example, in <cit.>, jointly modeling two correlated pressure signals may not improve the anomaly detection efficiency for both signals if the samples from one signal are much fewer than those from the other.
Combining MGP and transfer learning is an effective way of handling such problems.
When the observed data for one output are rare or expensive to collect, it is necessary to exploit useful information from other outputs whose data are abundant.
In this work, we focus on making predictions for one output which is denoted as target, by leveraging data in some related outputs which are denoted as sources.
For instance, in the robot inverse dynamics problem, the target is the torque at a joint when the robot is working with a new load, and the sources are the torques at the same joint when the robot works with other loads <cit.>.
The key to the MGP-based transfer learning is to extract and represent the underlying similarity among outputs and leverage information from source outputs to target output so as to improve the prediction accuracy <cit.>.
Specifically, this information transfer in MGP is achieved by constructing a positive semi-definite covariance matrix describing the correlation of data within and across the outputs <cit.>.
There are two categories of models for the covariance structure: separable models and non-separable models.
Separable models are most widely used approaches, including intrinsic coregionalization model (ICM) <cit.>, linear model of coregionalization (LMC) <cit.>, and their extensions.
These models use the Kronecker products of a coregionalization matrix and a covariance matrix of single GP to represent the covariance matrix of MGP.
It is clear that the separable models are not suitable for transfer learning since they restrict the sources and the target to the same covariance structure (from a single GP).
On the other hand, the non-separable models overcome this limitation by using convolution process (CP) to construct the MGP and its covariance structure.
They build non-separable covariance function through a convolution operation and allow modeling each output with individual hyperparameters <cit.>.
This property makes them more flexible and superior to the separable models <cit.>.
Nevertheless, there are two critical issues to be considered when applying non-separable MGP to transfer learning.
The first issue is negative transfer, which occurs when the assumption of 'existence of shared information' breaks, i.e., learning source outputs will have negative impacts on the learning of the target output <cit.>.
The root cause of this issue is the excessive inclusion of data into the learning process, which is an increasingly severe issue in the big data environment. In such conditions, only a portion of the source data is correlated with the target data and it is desired to select the sources that yield the best transfer <cit.>.
A recent work <cit.> proposed a two-stage strategy to alleviate the negative transfer in MGP.
In this method, the first step is to train a two-output Gaussian process model between each source and the target.
In the second step, the inverse of predictive standard deviation of each two-output model is adopted as the index to evaluate the negative transfer of each source and integrate the results of transfer learning.
However, the two-stage approach raises significant concerns of losing global information as it only measures the pairwise transferability.
<cit.> establishes a mixed-effect MGP model which has the ability to infer the behavior of the target output when it is highly similar with some sources.
However, this method cannot guarantee the optimal selection of related sources, which will be demonstrated in our case study.
The second critical issue is that the input domains of the source processes might be inconsistent with that of the target process.
For example, in multilingual text categorization, data in different languages have different features and we cannot directly combine them to train a classifier for the target data <cit.>.
Another example is shown in our case study, where the goal is to conduct transfer learning for predicting product density between dry pressing process and additive manufacturing process.
It is clear that two different manufacturing processes will have different process parameters (inputs) that contribute to the product density, e.g., the dry pressing process is dominated by temperature and pressure while the additive manufacturing process is influenced by solids loading percent and temperature <cit.>.
The two processes share one common process parameter (temperature), yet they also have distinct process parameters, which makes the transfer learning of product density (output) a non-trivial task.
Indeed, input domain inconsistency is a common issue in transfer learning, and domain adaptation is usually used to overcome it. The basic idea of domain adaptation is to align the domains between source and target by transforming the data into a certain feature domain, and it mainly applies to classification methods such as logistic regression and support vector machine (SVM) <cit.>.
These domain adaptation methods aim to find feature mapping by minimizing sum of the training loss of learner and the difference among inconsistent domains, through solving a convex optimization problem.
However, the training loss of MGP is the negative log-likelihood function, which is a strongly non-convex function.
Therefore, a unique estimate of the feature-mapping parameters cannot be guaranteed.
More importantly, applying the existing domain adaptation methods directly to MGP might fail the transfer learning due to the existence of negative transfer, i.e., minimizing the difference of features between a negative source and the target will aggravate the severity of negative transfer. To the best of our knowledge, there is no research simultaneously handling issues of domain inconsistency and negative transfer in the context of MGP.
To overcome the above challenges, we propose a comprehensive regularized multi-output Gaussian convolution process (MGCP) modeling framework. In some literature <cit.>, MGP with a CP-based covariance is also referred to as the multi-output convolution Gaussian process (MCGP).
Our method focuses on mitigating negative transfer of knowledge while at the same time adapting inconsistent input domains.
In this work, we assume that there is at least one shared input feature between the sources and the target.
This assumption is also necessary to facilitate the transferability, i.e., there is nothing to transfer if all the inputs in the sources and the target are different.
Instead of learning all outputs equivalently, the proposed framework is based on a special CP structure that emphasizes the knowledge transfer from all source outputs to the target output, which features the unique characteristic of transfer learning and differentiates it from many existing multi-task learning methods.
Owing to this special CP structure, the computational complexity is also significantly reduced from O((qn+n_t)^3) to O(qn^3+n_t^3) when modeling q sources with n data points each and one target with n_t data points.
The major contributions of this work include:
* Building upon this special CP structure, a global regularization framework is proposed, which can penalize un-correlated source outputs so that the selection of informative source outputs and transfer learning can be conducted simultaneously.
* We provide some theoretical guarantees for our method, including the connection between penalizing parameters and selecting source outputs, and the asymptotic properties of the proposed framework.
* We propose to marginalize extra input features and expand missing input features in the source to align with the input domain of the target, so that the domain inconsistency can be solved.
Both the simulation studies and real case study demonstrate the effectiveness of our framework in selecting informative sources and transferring positive information even when the target is not quite similar to all the sources.
The remainder of this article is organized as follows.
The general multi-output Gaussian process and convolution process modeling framework are stated in Section <ref>.
In Section <ref>, a detailed description of our regularized MGCP modeling framework for transfer learning is presented, including some statistical properties and domain adaptation technique.
Section <ref> presents numerical studies to show the superiority of the proposed method using both simulated data and real manufacturing data.
The conclusion is given in Section <ref>. Technical proofs are relegated to the appendix.
§ PRELIMINARIES
§.§ Related works on MGCP
As mentioned above, several multi-output Gaussian convolution process models have been investigated for multi-task learning recently. To handle different kinds of outputs, e.g., continuous and categorical outputs, <cit.> proposes a heterogeneous multi-output Gaussian process and conducts variational inference in training and forecasting. Considering that each output may have unique features not shared with other outputs, <cit.> constructs an MGCP model, where each output consists of two parts: one part is correlated with the other outputs, while the remaining part is independent of them. Compared with these works, our method tackles the problem of an inconsistent input domain rather than heterogeneous outputs, and focuses on selecting informative sources for one-output (target) prediction.
Besides multi-task learning, there are two works using MGCP for information transfer to one output <cit.>. Neither pays attention to the problem of an inconsistent input domain, which limits the source data available to them. In addition, negative transfer is not explored in <cit.>, and the two-stage method in <cit.> achieves only sub-optimal performance in reducing negative transfer. A more detailed comparison can be found in Section 4.5.
Computational load is a severe limitation for multi-output Gaussian processes when dealing with large amounts of data. In addition to the popular sparse approximation method using inducing variables <cit.>, <cit.> assumes that all q outputs lie in a low-dimensional linear subspace, which can be represented by q̃≪ q orthogonal basis processes, so that the computational complexity can be reduced to O(q̃ n^3). <cit.> proposes an approach based on local GP experts, which partitions the input and output space into segments to train local experts, and then combines them to form a model on the full space. These techniques can be applied to our method when extending it to a big-data environment. In this paper, we focus more on the effectiveness of our method in reducing negative transfer and handling inconsistent inputs.
Furthermore, convolutional-kernel-based Gaussian process has been applied to high-dimensional and structural data, e.g., image, graph and point cloud data <cit.>.
In these works, discrete convolution operation is applied on patch of pixels to construct covariance between two data samples. This type of Gaussian process can be applied to image or 3D mesh classification. However, in most of MGCP methods, including our proposed, the convolution operation is continuous and applied on latent processes.
§.§ Multi-output Gaussian Process
In this subsection, we will review some basic theories of Multi-output Gaussian process.
Consider a set of q source outputs f_i : 𝒳↦ℝ, i=1,...,q and one target output f_t : 𝒳↦ℝ, where 𝒳 is the input domain shared by all outputs.
The q+1 outputs jointly follow some multi-output Gaussian process as
(f_1,f_2,...,f_q,f_t)^T ∼𝒢𝒫(0, 𝒦(x,x^')),
where the covariance matrix 𝒦(x,x^') is defined as
{𝒦(x,x^') }_ij= cov_ij^f (x,x^')= cov( f_i(x), f_j(x^') ),
i,j ∈ℐ= {1,2,...,q,t} and x,x^'∈𝒳. Let ℐ^S={1,2,...,q} denote the index set of source outputs. The element {𝒦(x,x^') }_ij corresponds to the dependency between f_i(x) and f_j(x^').
Assume that the observation at point x is
y_i(x)=f_i(x)+ϵ_i, i ∈ℐ,
where ϵ_i ∼𝒩(0,σ_i^2) is independent and identically distributed (i.i.d) Gaussian noise assigned to the ith output.
Denote the observed data for the ith output as 𝒟_i={X_i, y_i}, where X_i=(x_i,1,...,x_i,n_i), y_i=(y_i,1,...,y_i,n_i)^T are the collections of input points and associated observations, and n_i is the number of observations for the ith output.
Suppose that N=∑_i ∈ℐn_i.
Let 𝒟^S={𝒟_i | i∈ℐ^S} denote the observed data of q source outputs and 𝒟={𝒟^S,𝒟_t} denote all data.
Define the matrix X and vector y for all input points and observations as
X=(X_1,X_2,...,X_q,X_t ),
y=(y_1^T,y_2^T,...,y_q^T,y_t^T )^T.
Since GP is a stochastic process wherein any finite number of random variables have a joint Gaussian distribution, for any new input point x_* associated with the target output f_t, the joint distribution of all observations y and the target function value f_t^*=f_t(x_*) is
[ y; f_t^* ]∼𝒩[ [ 0; 0 ],
[ K(X,X)+Σ K (X, x_*); K (X, x_*)^T cov_tt^f(x_*, x_*) ] ],
where K(X,X) ∈ℝ^N × N is a block partitioned covariance matrix whose i,jth block, K_i,j∈ℝ^n_i × n_j, represents the covariance matrix between the output i and output j; Σ is a block diagonal noise covariance matrix with Σ_i,j=σ_i^2I_n_i if i=j and 0 otherwise; K(X, x_*)=( K_1,*^T, K_2,*^T,..., K_q,*^T,K_t,*^T )^T and K_i,*=( cov_i,t^f (x_i,1,x_*), cov_i,t^f (x_i,2,x_*),..., cov_i,t^f (x_i,n_i,x_*) )^T.
To simplify the notations, we introduce a compact form that K=K(X,X), K_*=K (X, x_*) and C=K+Σ.
Based on the multivariate normal theory, the posterior distribution of f_t(x_*) given data {X,y} can be derived as
f_t(x_*)| X,y∼𝒩( μ(x_*), V_f(x_*) ),
where the predictive mean μ(x_*) and variance V_f(x_*) can be expressed as
μ(x_*) =K_*^T C^-1y,
V_f(x_*) = cov_tt^f(x_*, x_*)-K _*^T C^-1K_*.
It can be seen that the mean prediction eq:mean prediction is a linear combination of the observations y, while the variance prediction eq:variance prediction does not depend on y.
The first term in variance, cov_tt^f(x_*, x_*), is the prior covariance while the second term is the variance reduction due to the mean prediction.
For the predictive variance of target observation at x_*, we can simply add the noise variance σ_t^2 to that of f_t(x_*).
Equation (<ref>) implies that the key feature of multi-output Gaussian process is borrowing strength from a sample of q source outputs {f_1,f_2,...f_q} to predict the target output f_t more precisely.
This effect is achieved by combining the observed source outputs and target output in a linear form wherein the weight is characterized by covariance matrix C and K_*.
We would like to mention again that the key assumption for the desired function of multi-output Gaussian process is that the source outputs and the target output are correlated, and this correlation can be represented by C and K_*.
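To make these formulas concrete, below is a minimal NumPy sketch (our illustration, not code from any referenced implementation) of eq:mean prediction and eq:variance prediction; the assembled covariance matrix C, cross-covariance vector K_*, prior variance, and observations y are assumed given, and a Cholesky factorization replaces the explicit inverse for numerical stability.

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gp_predict(C, K_star, k_star_star, y):
    """Predictive mean and variance of f_t(x_*) given all observations y.

    C:           (N, N) covariance of observations, K(X, X) + Sigma
    K_star:      (N,)   cross-covariance between observations and f_t(x_*)
    k_star_star: scalar prior variance cov_tt^f(x_*, x_*)
    """
    L = cho_factor(C, lower=True)       # factorize once instead of forming C^{-1}
    alpha = cho_solve(L, y)             # C^{-1} y
    mu = K_star @ alpha                 # predictive mean
    v = cho_solve(L, K_star)            # C^{-1} K_*
    var = k_star_star - K_star @ v      # predictive variance
    return mu, var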
§.§ Convolution Process
From previous studies <cit.>, it is known that the convolution of a Gaussian process and a smoothing kernel is also a Gaussian process.
Based on this property, we can construct a non-separable generative model which builds valid covariance function for MGP by convolving a base process Z(x) with a kernel g(x).
More precisely, as shown in fig:convolution process, for output i ∈ℐ, f_i(x) can be expressed as
f_i(x)=g_i(x)∗ Z (x)=∫_-∞^∞g_i (x-u) Z (u) du,
where ∗ denotes a convolution operation, g_i(x) is the output-dependent kernel function and Z (x) is the shared process across all outputs f_i(x), i ∈ℐ.
We assume that Z(x) is a commonly used white Gaussian noise process, i.e., cov(Z(x), Z(x^'))=δ(x-x^') and 𝔼(Z(x))=0, where δ(·) is the Dirac delta function. Note that f_i(x) is also zero-mean GP, thus the cross covariance can be derived as
cov_ij^f (x, x^') = cov{ g_i(x)∗ Z (x), g_j(x^')∗ Z (x^')}
=∫_-∞^∞ g_i(u)g_j(u-v)d u,
where v=x-x^'. The detailed calculation is given in Appendix <ref>.
Equation (<ref>) implies that the correlation between f_i(x) and f_j(x^') is dependent on the difference x-x^' and the hyperparameters in kernels g_i and g_j when they are constructed by a common process.
Specially, if we use the Dirac delta function δ(x) as the smoothing kernel, i.e., g_i(x)=a_i δ(x), the convolution process will degenerate to the LMC model with single shared latent process, i.e. f_i(x)=a_i Z(x) where a_i ∈ℝ is specific to each output i <cit.>.
So the convolution process can be considered as a dynamic version of LMC because of the smoothing kernel, which also illustrates the superiority of the non-separable MGP model.
More generally, we can combine the influence of multiple latent processes and extend eq:single convolution process to a more flexible version as
f_i(x)=∑_e=1^lg_ie(x)∗ Z_e (x),
where l is the number of different latent processes.
This expression can capture the shared and output-specific information by using a mixture of common and specific latent processes <cit.>.
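As a sanity check on eq:cov in convolution process, the convolution integral can be evaluated numerically for any pair of smoothing kernels; the sketch below uses one-dimensional Gaussian kernels with illustrative (hypothetical) parameter values, and its output can be compared against the closed form derived later in Section 3.4.

import numpy as np
from scipy.integrate import quad

def cp_cross_cov(g_i, g_j, v, lim=10.0):
    """cov_ij^f(x, x') = integral of g_i(u) g_j(u - v) du, with v = x - x'."""
    integrand = lambda u: g_i(u) * g_j(u - v)
    val, _ = quad(integrand, -lim, lim)
    return val

# Hypothetical 1D Gaussian smoothing kernels (alpha = 1.0 and 0.8, Lambda = 1.0 and 0.5).
g1 = lambda u: 1.0 * np.pi**-0.25 * np.exp(-0.5 * u**2)
g2 = lambda u: 0.8 * np.pi**-0.25 * 0.5**-0.25 * np.exp(-0.5 * u**2 / 0.5)

print(cp_cross_cov(g1, g2, v=0.3))   # depends only on v = x - x'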
§ MODEL DEVELOPMENT
The proposed framework presents a flexible alternative which can simultaneously reduce negative source information transfer and handle inconsistent input domain.
In Section <ref>, our regularized multi-output Gaussian process model is established using a convolution process under the assumption of consistent input domain.
The structure of our model enables the separate information sharing between the target and each source. More importantly, our regularized model can realize the selection of informative sources globally.
Section <ref> provides some statistical properties for the proposed model, including the consistency and sparsity of estimators.
Section <ref> presents the domain adaptation method to deal with the inconsistent input domain of sources.
In Section <ref>, we discuss the implementation of our model using Gaussian kernel and L_1 norm regularization.
§.§ Regularized MGCP modeling framework
In this and the following subsection, we focus on the circumstance that the source input domain is consistent with the target input domain. Note that we will relax this assumption in Section <ref>.
As described in Section <ref>, we are provided with q source outputs {f_i | i∈ℐ^S}, one target output f_t, and the observed data 𝒟={𝒟^S,𝒟_t}.
Under the framework of MGP, we use CP to construct the covariance functions as shown in eq:cov in convolution process. The structure of our model is illustrated in fig:structure.
With the aim of borrowing information from the source outputs to predict the target output more accurately, the latent process Z_i, i ∈ℐ^S and kernels g_ii,g_it serve as the information-sharing channel between the outputs f_t and f_i.
On the other hand, Z_i's are set independent of each other so that no information is shared among source outputs, which significantly reduces the computation complexity that will be analyzed later.
Considering the existence of target-specific behavior, for simplicity yet without loss of generality, a single latent process Z_t(x) is added to the construction of f_t.
Based on the structure illustrated in fig:structure, the observation of outputs can be expressed as
y_i(x) =f_i(x)+ϵ_i(x)=g_ii(x)∗ Z_i(x)+ϵ_i(x), i ∈ℐ^S
y_t(x) =f_t(x)+ϵ_t(x)=∑_j ∈ℐg_jt(x) ∗ Z_j(x)+ϵ_t(x),
where g_ii is the kernel connecting latent process Z_i and the output f_i, and g_it is the kernel connecting the latent process Z_i and the target output f_t. For the q source outputs, individual kernel for each source enables an accurate approximation for their feature.
For the target output f_t, its shared features with source outputs are encoded in Z_i and g_it, i∈ℐ^S, while its specific feature is encoded in Z_t and g_tt.
Based on the assumption that f(x) is independent with ϵ(x), the covariance between any two observations of the outputs i,j ∈ℐ can be decomposed as:
cov(y_i(x),y_j(x^'))
= cov(f_i(x),f_j(x^')) + cov(ϵ_i(x),ϵ_j(x^') ).
To keep the notational consistency, denote cov(y_i(x),y_j(x^')) as cov_ij^y(x,x^'). As ϵ_i(x), i∈ℐ are i.i.d Gaussian noises, the covariance of two observations y_i(x),y_j(x^') can be expressed as
cov_ij^y(x,x^')= cov_ij^f(x,x^')+σ_i^2 τ_ij(x-x^'), ∀ i,j ∈ℐ
where τ_ij(x-x^') is equal to 1 if i=j and x=x^', and 0 otherwise.
Note that every output is a zero-mean GP and {Z_i(x) | i∈ℐ} are independent white Gaussian noise processes, so cov_ij^f (x,x^')=0 for i,j ∈ℐ^S and i ≠ j, i.e., the covariance across sources is set to zero. The source-target covariance cov_it^f(x,x^') can be calculated as
cov_it^f(x,x^')
=∫_-∞^∞ g_ii(u)g_it(u-v)du, i ∈ℐ^S
where the equality is based on eq:cov in convolution process, and v=x-x^'. The detailed calculation can be found in Appendix <ref>. In the same way, we can derive the auto-covariance as
cov_ii^f(x,x^') =∫_-∞^∞ g_ii(u)g_ii(u-v)du, i∈ℐ^S
cov_tt^f(x,x^') =∑_j ∈ℐ∫_-∞^∞ g_jj(u)g_jt(u-v)du.
Finally based on the above results, we can obtain the explicit expression of covariance matrix C=K(X,X)+Σ in eq:joint distribution as
C=
[ [ C_1,1 0 ⋯ 0 C_1,t; 0 C_2,2 ⋯ 0 C_2,t; ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 ⋯ C_q,q C_q,t; C_1,t^T C_2,t^T ⋯ C_q,t^T C_t,t ] ]:=
[ [ Ω_s,s Ω_s,t; Ω_s,t^T Ω_t,t ] ],
where C_i,j=K_i,j+Σ_i,j∈ℝ^n_i × n_j consists of elements {C_i,j}_a,b= cov_ij^y(x_i,a, x_j,b), 1≤ a ≤ n_i, 1 ≤ b ≤ n_j.
Re-partition the covariance matrix into four blocks: Ω_s,s, a block diagonal matrix, representing the covariance of source outputs' data; Ω_s,t representing the cross covariance between the source outputs and the target output, which realizes the information transfer from sources to the target; Ω_t,t representing the covariance within the target output.
Regarding the structure shown in eq:covariance matrix, there are two interesting points worth of further discussion.
The first point is setting the covariance across the source outputs to zero, which is the result of the independence among {Z_i}_i=1^q. Ignoring the interactions among sources may cause some loss of prediction accuracy for the sources, especially when the amount of observed data n_i is small.
However, we aim to improve the prediction accuracy only for the target and assume the sample data for each source are sufficient, which guarantees the prediction performance of our method.
Another one is about the covariance between the target and each source, which reveals the advantage of our proposed framework in dealing with negative transfer.
The source-target covariance function in eq:cov_it illustrates that f_t can share information with each source through the kernels g_it and g_ii with different hyperparameters.
It can be intuitively understood that if g_it(x) is equal to 0, the covariance between f_i and f_t is also zero.
As a result, the prediction of f_t will not be influenced by f_i, i.e., no information transfer between them.
Further, we derive the following theorem which presents the ability of our model in reducing negative transfer.
Suppose that g_it(x)=0, ∀ i ∈𝒰⊆ℐ^S for all x∈𝒳.
For notational convenience, suppose 𝒰={1,2,...,h|h≤ q}, then the predictive distribution of the model at any new input x_* is unrelated with {f_1,f_2,...,f_h} and is reduced to:
p(y_t(x_*) | y)=𝒩( k_+^T C_+^-1y_+,
cov_tt^f(x_*,x_*)+σ_t^2-k_+^T C_+^-1k_+),
where k_+=(K_h+1,*^T,...,K_q,*^T,K_t,*^T)^T, y_+=(y_h+1^T,...,y_q^T,y_t^T)^T, and
C_+=
[ C_h+1,h+1 ⋯ 0 C_h+1,t; ⋮ ⋱ ⋮ ⋮; 0 ⋯ C_q,q C_q,t; C_h+1,t^T ⋯ C_q,t^T C_t,t ].
The proof is detailed in Appendix <ref>.
This theorem demonstrates one key property of our framework. If we penalize the smoothing kernels {g_it(x)}_i ∈𝒰 to zero, the MGCP is actually reduced to a marginalized version, which only contains the source outputs {f_i}_i ∈ℐ^S∖𝒰 and the target output.
The possible negative transfer between {f_i}_i ∈𝒰 and f_t can thus be avoided completely. This result is based on the fact that if g_it(x)=0,
cov_it^f (x)=∫_-∞^∞ g_ii(u)g_it(u-v)du=0,
and C_it=0.
To apply the idea in <ref> to model regularization, we denote that g_it(x)=θ_i0g̃_it(x), where θ_i0 satisfies the condition that g_it(x)=0, ∀x if and only if θ_i0=0.
Let θ be the collection of all parameters in the model and θ_0={θ_i0| i ∈ℐ^S}⊂θ .
Then, based on the results of <ref>, our regularized model can be derived as:
max_θ L_ℙ(θ| y)
= L(θ| y)-ℙ_γ(θ_0)
= -1/2y^TC^-1y-1/2 log|C|
-N/2log(2π)-ℙ_γ(θ_0),
where L_ℙ(θ| y) denotes the regularized log-likelihood, L(θ| y) denotes the normal log-likelihood for Gaussian distribution, and ℙ_γ(θ_0) is a non-negative penalty function.
To shrink the smoothing kernels connecting the target and uncorrelated sources to 0, common choices of the regularization function include the L_1 norm ℙ_γ(θ_0)=γ∑_i=1^q|θ_i0| and the smoothly clipped absolute deviation (SCAD) function <cit.>.
The validity of our method is ensured by two claims.
Firstly, based on the theory of multivariate Gaussian distribution, if the source f_i is uncorrelated with the target f_t, then the corresponding covariance matrix block C_it should be zero.
Secondly, <ref> guarantees that by shrinking some elements of θ_0, {θ_i0}_i∈𝒰, to zero, {C_it}_i∈𝒰=0 and the target output can be predicted without the influence of the source outputs {f_i}_i ∈𝒰.
Another unique advantage of the proposed method is that it is a global regularized model, since the shrinkage is applied to the parameters of all the sources simultaneously, which is different from the local regularization over a subset of data in <cit.>.
Besides the property of global regularization over all the sources, the computational complexity of our method in parameter optimization is greatly reduced because of the sparse covariance matrix.
Based on the partitioned covariance matrix
C=
[ Ω_s,s Ω_s,t; Ω_s,t^T Ω_t,t ] and using the inversion lemma of a partitioned matrix, the log-likelihood function can be decomposed as:
L(θ| y)= -1/2[ỹ^T Ω_s,s^-1ỹ+(Aỹ-y_t)^T B^-1(Aỹ-y_t) ]
-1/2[ log |Ω_s,s|+ log |B| ]-N/2log(2π),
where ỹ={y_1^T,...,y_q^T}^T, A=Ω_s,t^T Ω_s,s^-1, B= Ω_t,t-AΩ_s,t is the Schur complement.
The computational load of MLE is mainly on calculating the inverses of the covariance matrices Ω_s,s and B. As Ω_s,s is a block diagonal matrix with q square matrices C_i,i∈ℝ^n × n, the complexity for Ω_s,s^-1 is O(qn^3).
As B∈ℝ^n_t× n_t, the complexity for B^-1 is O(n_t^3).
As a result, the complexity of our method is O(qn^3+n_t^3).
However, in ordinary MGP methods <cit.>, C is a full matrix without zero blocks, so the complexity of inverting Ω_s,s increases to O((qn)^3) and the whole computational complexity becomes O((qn)^3+n_t^3). The above complexity calculation still holds when some sources have input domains different from the target's, which will be analyzed in Section <ref>.
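The following sketch illustrates how the decomposition above yields the O(qn^3+n_t^3) cost: only the q source blocks C_i,i and the n_t × n_t Schur complement B are factorized. The block matrices are assumed to be precomputed, and the function names are ours.

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def log_likelihood(Css, Cst, Ctt, ys, yt):
    """L(theta | y) via the Schur complement.

    Css: list of q blocks C_{i,i} (n_i, n_i);  Cst: list of C_{i,t} (n_i, n_t)
    Ctt: (n_t, n_t);  ys: list of source observation vectors;  yt: target observations
    """
    N = sum(len(y) for y in ys) + len(yt)
    quad_s, logdet_s = 0.0, 0.0
    A_ytilde = np.zeros_like(yt)            # A*ytilde = sum_i C_it^T C_ii^{-1} y_i
    B = Ctt.copy()                          # Schur complement B = Ctt - A*Omega_st
    for Cii, Cit, yi in zip(Css, Cst, ys):
        L = cho_factor(Cii, lower=True)     # O(n^3) per source block
        quad_s += yi @ cho_solve(L, yi)
        logdet_s += 2.0 * np.sum(np.log(np.diag(L[0])))
        A_ytilde += Cit.T @ cho_solve(L, yi)
        B -= Cit.T @ cho_solve(L, Cit)
    Lb = cho_factor(B, lower=True)          # O(n_t^3)
    r = A_ytilde - yt
    return (-0.5 * (quad_s + r @ cho_solve(Lb, r))
            - 0.5 * (logdet_s + 2.0 * np.sum(np.log(np.diag(Lb[0]))))
            - 0.5 * N * np.log(2.0 * np.pi))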
§.§ Statistical properties for regularized MGCP
In Section <ref>, we have discussed that if the source output f_i is uncorrelated with the target output f_t, the cross-covariance between them should be zero, i.e. C_i,t=0.
On the other hand, if the kernel g_it(x)=0, then C_i,t=0 and thus the predictive distribution of f_t(x_*) is uncorrelated with the observations from the source output f_i according to <ref>.
Therefore, to avoid negative transfer, the estimated parameter θ̂_i0 should be zero, which can be realized through the regularized estimation.
In this subsection, we provide some asymptotic properties of the regularized maximum likelihood estimator θ̂.
Same as the last subsection, suppose there are q elements in θ_0, denoted by {θ_10,θ_20,...,θ_q0}, which correspond to the q smoothing kernels {g_1t,g_2t,...,g_qt} respectively.
Denote the true parameter values of θ_0 and θ in eq:regularized log-likelihood as θ_0^* and θ^*. Suppose there are h zero elements in θ_0^*.
Regarding the penalty function, we assume that ℙ_γ(θ_0) ≥ 0, ∀θ_0; ℙ_γ(0) = 0; and ℙ_γ(θ_0^') ≥ℙ_γ(θ_0) if |θ_0^'| ≥ |θ_0|.
These typical assumptions are easily satisfied by the previously mentioned penalty functions.
Before discussing the statistical properties of the regularized model eq:regularized log-likelihood, we first need to introduce the consistency of the maximum likelihood estimator (MLE), θ̂_#, for the unpenalized L(θ|y).
Note that the observations of a Gaussian process are dependent.
Based on some regularity conditions for stochastic processes, it has been proved that θ̂_# asymptotically converges to θ^* at a rate r_N such that r_N →∞ as N→∞, i.e.,
θ̂_#-θ^* = O_P(r_N^-1).
For more details of the regular conditions and consistency proof, please refer to Appendix <ref> and the chapter 7 in <cit.>.
We first discuss the consistency of the MLE for the regularized log-likelihood L_ℙ(θ| y).
Suppose that the MLE for L(θ|y), θ̂_#, is r_N-consistent, i.e., it satisfies the non-penalized MLE consistency above. If max{ |ℙ^''_γ(θ_i0^*)|: θ_i0^* ≠ 0}→ 0, then there exists a local maximizer θ̂ of L_ℙ(θ|y) such that θ̂-θ^*=O_P(r_N^-1+r_0), where r_0=max{ |ℙ^'_γ(θ_i0^*)|: θ_i0^* ≠ 0}.
The proof is detailed in Appendix <ref>.
This theorem states that if the derivative of penalty function satisfies some conditions, the estimator of the regularized log-likelihood is also consistent.
If we take a proper sequence of γ for the penalty, for example choosing γ such that r_0=o_P(r_N^-1), then θ̂ is also r_N-consistent, the same as θ̂_#.
The condition in this theorem, max{ |ℙ^''_γ(θ_i0)|: θ_i0≠ 0}→ 0, is easily satisfied for common regularization functions.
For example, if ℙ_γ(θ_i0)=γ|θ_i0|, then |ℙ^''_γ(θ_i0)|=0 satisfies.
Besides consistency, another key property of θ̂ is sparsity, which is provided in <ref> as follows.
Let θ_10^* and θ_20^* contain the zero and non-zero components in θ_0^* respectively.
Assume the conditions in <ref> also hold, and θ̂ is r_N consistent by choosing proper γ in ℙ_γ(θ_0). If
liminf_N →∞ liminf_θ→ 0^+ γ^-1ℙ^'_γ(θ) >0 and (r_N γ)^-1→ 0,
then
lim_N →∞ P ( θ̂_10=0)=1.
The proof is detailed in Appendix <ref>.
This theorem implies that by choosing proper penalty functions and tuning parameters, the regularized MGCP model can realize variable selection, i.e., the estimator θ̂ can perform as well as if θ_10=0 were known in advance. More importantly, in our model, the variable selection on θ means the selection of informative sources based on <ref>.
The conditions in this theorem can also be satisfied easily. Again taking the example ℙ_γ(θ_i0)=γ|θ_i0|, if we let γ=r_N^-1/2, then liminf_N →∞ liminf_θ→ 0^+ γ^-1ℙ^'_γ(θ)=1 and (r_Nγ)^-1=r_N^-1/2→ 0, which satisfies the conditions.
§.§ Domain adaptation through marginalization and expansion (DAME)
The discussions in Section <ref> and <ref> are based on the assumption that the target and source data share the same input domain.
However, as we emphasized in the introduction, domain inconsistency is a common issue in transfer learning.
In this subsection, we propose an effective domain adaptation method for dealing with the domain inconsistency in our MGCP model.
The general assumption for the proposed domain adaptation method is that there is at least one commonly shared input feature between each source and the target.
But we do not require that all sources share the same input, i.e., different sources can share different dimensions with the target.
The basic idea of our domain adaptation method is to first marginalize extra features in the sources, then expand missing features to align with the target input domain.
More specifically, our method aims to find the marginal distribution of the source data in the shared input domain with the target, then create a pseudo dataset in the target input domain. Thus, we name the method as DAME.
This newly created pseudo dataset will be in the same input domain as the target data and have the same marginal distribution as the original source data, which can be used as the new source data to plug in the proposed MGCP model.
Figure <ref> shows the adaptation procedure using the normalized density data of ceramic product, where the source input domain contains two features x^(c) and x^(s), and the target input domain contains features x^(c) and x^(t).
In fig:domain adaptation 1, the source data are marginalized to the domain which only has feature x^(c), and a marginal distribution is obtained based on the marginalized data.
In fig:domain adaptation 2, several data are induced according to the marginal distribution, then we expand them to get the pseudo data which have the same features with the target data.
To generalize the example in fig:domain adaptation, we slightly abuse the notation and focus on one source 𝒟_i={X_i,y_i}, i∈ℐ^S.
Note that the proposed method will be applied to every source that does not have consistent domain with the target. Let x^(c)∈ℝ^d_c denote the shared features in both the target and source input domain, x^(s)∈ℝ^d_i denote the unique features in the input domain of the ith source, and x^(t)∈ℝ^d_t denote the unique features in the target input domain.
Then, any source and target data can be expressed as
x_i,·=[ x_i,·^(c); x_i,·^(s) ],
x_t,·=[ x_t,·^(c); x_t,·^(t) ].
The first step is to marginalize the extra features. Define the shared input domain as 𝒳^P which is represented by x^(c), and a projection matrix
P=[ I_d_c 0_d_c × d_i ].
Then, we can get marginalized source data 𝒟_i^P={X_i^P, y_i }, where X_i^P=PX_i =(x^(c)_i,1,...,x^(c)_i,n_i).
The projected data are usually overly dispersed in the shared domain, e.g., the blue triangles in fig:domain adaptation, and this dispersion would be recognized as large measurement noise of the data.
As a result, a smoothing method is needed to extract the overall trend of the marginalized data and generate induced data with smaller dispersion.
Many non-parametric methods are available for this purpose, such as kernel regression, B-spline, and GP model, etc.
In our work, kernel regression is chosen to model the marginalized data, and n_i^' samples {x_i^', a^(c), y_i^', a}_a=1^n_i^' are induced based on the trained model
y_i^', a=∑_b=1^n_iK_λ(x^(c)_i,b,x_i^', a^(c))y_i,b/∑_b=1^n_iK_λ(x^(c)_i,b,x_i^', a^(c)),
where K_λ is the kernel function and λ is the estimated hyperparameter through cross-validation.
Note we use i^' to denote a new (marginalized) source resulting from the original source i.
For example, the mean of marginal distribution is represented by the orange curve in fig:domain adaptation, and the induced data are represented by the orange triangle in fig:domain adaptation 2.
The second step of DAME is to expand the {x_i^', a^(c), y_i^', a}_a=1^n_i^' to include the unique features in the target domain, i.e. x^(t). To realize this idea, we expand the marginalized data along the dimension of x^(t) by adding i.i.d noise which simulates the measurement error.
For example, if x^(t) is one-dimensional and the observed target data have lower bound x_ low^(t) (e.g., x_ low^(t)=2 in fig:domain adaptation 2) and upper bound x_ up^(t) (e.g., x_ up^(t)=8 in fig:domain adaptation 2), choose n_i^'' values, {x_ i^'',b^(t)}_b=1^n_i^'', spaced in [x_ low^(t), x_ up^(t)].
The pseudo data of the source i can be expressed as:
𝒟_i^ new={[ x_i^', a^(c); x_i^'', b^(t) ], y_i^'', a, b| a∈ [1, n_i^'], b ∈ [1, n_i^''] },
where y_i^'',a, b=y_i^',a+ϵ_a,b and ϵ_a,b is an i.i.d Gaussian noise. For the standard deviation of ϵ_a,b, we can choose the estimated noise deviation of target data using a single-output GP.
In fig:domain adaptation 2, we take n_i^''=4 and the pseudo data is represented by the orange dots.
This expanding approach is intuitively practicable as no prior information about the missing features for source data is given.
If domain knowledge is available for the expansion step, e.g., there is a monotonically increasing trend along x^(t) in fig:domain adaptation 1, it is also straightforward to incorporate such information.
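A minimal sketch of the two DAME steps follows, assuming a one-dimensional shared feature x^(c) and a one-dimensional target-specific feature x^(t); the Gaussian kernel bandwidth lam and the grid sizes are illustrative choices rather than prescriptions from the paper.

import numpy as np

def dame(Xc_src, y_src, n_ind, t_low, t_up, n_exp, noise_sd, lam=0.5, rng=None):
    """Marginalize a source onto the shared feature x^(c), then expand along x^(t).

    Xc_src: (n_i,) shared-feature values of the source data (after projection P)
    y_src:  (n_i,) source observations
    Returns pseudo inputs of shape (n_ind*n_exp, 2) and pseudo observations.
    """
    rng = rng or np.random.default_rng(0)
    # Step 1: Nadaraya-Watson smoothing of the marginalized data (induced data y_{i',a}).
    xc_ind = np.linspace(Xc_src.min(), Xc_src.max(), n_ind)
    W = np.exp(-0.5 * ((xc_ind[:, None] - Xc_src[None, :]) / lam) ** 2)
    y_ind = (W @ y_src) / W.sum(axis=1)
    # Step 2: expand along the target-specific feature, adding i.i.d. Gaussian noise.
    xt = np.linspace(t_low, t_up, n_exp)
    Xc, Xt = np.meshgrid(xc_ind, xt, indexing="ij")
    Y = np.repeat(y_ind, n_exp) + rng.normal(0.0, noise_sd, n_ind * n_exp)
    return np.column_stack([Xc.ravel(), Xt.ravel()]), Y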
For the pseudo dataset, our DAME method preserves the marginal information of sources in the shared domain and does not introduce other information in the target-specific domain. This is the unique property of our DAME method. This method will benefit the target if the pseudo dataset is informative. On the other hand, even if the pseudo dataset provides negative information on the target prediction after the marginalization and expansion, our regularized model can identify it and exclude in the learning process. It is worth noting that the existing domain adaptation methods by minimizing the difference between the transformed source and target data might not be able to mitigate the potential negative transfer. Besides, finding an optimal feature mapping (like the existing methods) for the source and target within MGCP training would be extremely difficult, if not impossible.
In addition, it is worth analyzing the complexity of our method under the circumstance of inconsistent input domains. For the domain adaptation process, the main computational load is on applying the smooth method to obtain the marginal distribution. It will not exceed O(n^3) for each source whether we use kernel regression or GP model. Thus, the complexity of domain adaptation process is not larger than that of constructing the MGCP model in our method. Therefore, the computational complexity of our method is still O(qn^3+n_t^3) when domain inconsistency occurs.
§.§ Implementation using Gaussian kernel and L_1 norm
In this subsection, we use Gaussian kernel to implement the modeling framework introduced in Section <ref>-<ref>.
Gaussian kernel is a very popular choice which is flexible for various spatial characteristics with a small number of hyperparameters. In order to obtain a neat closed form of the convolved covariance function, we take the smoothing kernel as:
g_ij(x)=α_ijπ^-d/4|Λ_ij|^-1/4exp(-1/2x^T Λ_ij^-1x), i,j ∈ℐ,
where α_ij is the scaling parameter and Λ_ij is a diagonal matrix containing the length-scale of each input feature.
Using the domain adaptation introduced in Section <ref>, we can assume that every source i ∈ℐ^S is transformed to have the same input domain as the target.
By plugging the kernel eq:smooth kernel in eq:cov_it-eq:cov_tt we obtain
cov_it^f(x,x^')
= 2^d/2α_iiα_it|Λ_ii|^1/4 |Λ_it|^1/4/|Λ_ii+Λ_it|^1/2×
exp[-1/2(x-x^')^T (Λ_ii+Λ_it)^-1(x-x^') ],
cov_ii^f(x,x^')
= α_ii^2 exp[-1/4(x-x^')^T Λ_ii^-1(x-x^') ],
cov_tt^f(x,x^')
=∑_j ∈ℐα_jt^2 exp[-1/4(x-x^')^T Λ_jt^-1(x-x^')],
where i ∈ℐ^S. eq:cov collection shows that by using the kernel in eq:smooth kernel, the covariance functions are similar to the traditional Gaussian kernel, especially for the auto-covariance within each source.
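For reference, a direct implementation of eq:cov collection with diagonal Λ matrices (stored as vectors of diagonal entries) might look as follows; the function and argument names are ours.

import numpy as np

def cov_it(x, xp, a_ii, a_it, L_ii, L_it):
    """Source-target covariance; L_ii, L_it are the diagonals of Lambda_ii, Lambda_it."""
    d = len(x)
    Ls = L_ii + L_it
    scale = (2**(d / 2) * a_ii * a_it
             * (np.prod(L_ii) * np.prod(L_it))**0.25 / np.prod(Ls)**0.5)
    v = x - xp
    return scale * np.exp(-0.5 * np.sum(v**2 / Ls))

def cov_ii(x, xp, a_ii, L_ii):
    """Auto-covariance within source i."""
    v = x - xp
    return a_ii**2 * np.exp(-0.25 * np.sum(v**2 / L_ii))

def cov_tt(x, xp, a_t, L_t):
    """Target auto-covariance; a_t, L_t list alpha_jt and diag(Lambda_jt) over j in I."""
    v = x - xp
    return sum(a**2 * np.exp(-0.25 * np.sum(v**2 / L)) for a, L in zip(a_t, L_t))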
Then, based on these results, the collection of all parameters in the regularized log-likelihood eq:regularized log-likelihood is θ={α_ii, α_it, Λ_ii, Λ_it, σ_i | i∈ℐ}, and the sparsity parameters are θ_0={α_it| i∈ℐ^S}.
Moreover, if we take L_1 norm as the regularization function and consider the transformed source data, eq:regularized log-likelihood will become
max_θ L_ℙ(θ| y^ new)= L(θ| y^ new)-γ∑_i=1^q|α_it|,
where y^ new={y^ nda,y^ da}, y^ nda includes the data from sources which do not conduct domain adaptation, y^ da includes the data from sources conducting domain adaptation; γ is the tuning parameter and has critical effect on the optima.
Typically, the tuning parameter is chosen through a grid search with cross-validation (CV) or generalized CV, such as leave-one-out (generalized) CV and 5-fold (generalized) CV.
The estimated parameter θ̂ are obtained through solving the problem in eq:regularized log-likelihood with L1.
However, two details deserve attention in practice.
Firstly, as this optimization problem is not convex and multiple local optima exist with high probability, we usually need to restart from several random initial values.
Secondly, commonly used gradient methods, such as the L-BFGS method and the conjugate gradient method, require the objective function to be smooth, so they cannot be applied directly to solving eq:regularized log-likelihood with L1 because the L_1 norm is not smooth at zero.
To solve this issue, we take a Huber smooth approximation as
γ∑_i=1^q|α_it| ≈ γ∑_i=1^q{ 1/(2η) α_it^2, |α_it| ≤ η; |α_it| - η/2, |α_it| > η },
where η is a small constant, e.g., 10^-4.
As the maximum bias between the approximation and the original function is η/2, it has little influence on the optima and makes common gradient methods applicable.
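A sketch of the Huber-smoothed penalty and its gradient, which restores the smoothness that L-BFGS requires; the vectorized form is our illustration.

import numpy as np

def huber_l1(alpha, gamma, eta=1e-5):
    """Smoothed gamma * sum_i |alpha_i| and its gradient with respect to alpha."""
    a = np.abs(alpha)
    pen = np.where(a <= eta, 0.5 * alpha**2 / eta, a - 0.5 * eta)
    grad = np.where(a <= eta, alpha / eta, np.sign(alpha))
    return gamma * pen.sum(), gamma * grad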
Finally for prediction, calculate C, K_* and cov_tt^f(x_*,x_*) in eq:mean prediction-eq:variance prediction with θ̂ at point x_*.
Then, the predictive distribution of f_t(x_*) is in the form of eq:predictive distribution.
The implementation of the regularized MGCP modeling in this work is summarized in Algorithm <ref>.
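Putting the pieces together, the training step can be sketched as a multi-start penalized maximum-likelihood problem; here neg_log_lik stands for the negative of the decomposed likelihood sketched earlier, pack_alpha_it is a hypothetical helper extracting the sparsity parameters {α_it} from the parameter vector, and huber_l1 is the smoothed penalty above.

import numpy as np
from scipy.optimize import minimize

def fit_mgcp(theta0s, neg_log_lik, pack_alpha_it, gamma, eta=1e-5):
    """Multi-start L-BFGS on the Huber-smoothed, penalized negative log-likelihood."""
    def objective(theta):
        pen, _ = huber_l1(pack_alpha_it(theta), gamma, eta)
        return neg_log_lik(theta) + pen

    best = None
    for theta0 in theta0s:                      # random restarts, as described above
        res = minimize(objective, theta0, method="L-BFGS-B")
        if best is None or res.fun < best.fun:
            best = res
    return best.x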
§.§ Unique Methodology Contribution
As mentioned in the introduction, the works of <cit.>,<cit.> also focus on predicting one output through a multi-output Gaussian process. Although the covariance structure in <cit.> is similar to ours, that work mainly focuses on realizing transfer from multiple sources to one target. Moreover, the works in <cit.> do not consider negative transfer or the corresponding theoretical guarantees on regularization, which are the major focus of our work.
For the two-stage strategy in <cit.>, regularization is conducted pairwise between the target and each source, and the sub-models' predictions are combined linearly with different weights. This approach has two main drawbacks.
First, the correlation in one pair might be influenced by other sources; in other words, a strong correlation in one pair might be the result of other sources, and such strong correlation might disappear when all sources are considered together.
Second, the integration of all pairs is conducted through the predictive variance, which is sub-optimal in both performance and interpretability.
Finally, compared with existing MGP models, this work is the first to consider the inconsistent-input-domain problem. This technique can increase the available source data for transfer learning and multi-task learning with MGP.
§ NUMERICAL STUDIES
We apply the proposed regularized multi-output Gaussian convolution process model, referred to as MGCP-R, to three simulation cases and one real case. In Section <ref>, we introduce the general settings and benchmark methods for our numerical studies. Section <ref> demonstrates the advantages of our method in reducing negative transfer when the sources have input domains consistent with the target. Section <ref> presents the effectiveness of our framework in dealing with inconsistent source input domains. In Section <ref>, we test and verify the performance with a moderate number of sources and input dimensions. Finally, in Section <ref>, we apply the proposed modeling framework to the density prediction of a ceramic product.
§.§ General settings
In this section, we discuss the general settings for assessing the benefits of MGCP-R using simulated data. To evaluate the performance in selecting informative sources and mitigating negative transfer of knowledge, we randomly generate observations from q source outputs and 1 target output, in which only q_1 source outputs share information with the target output. For simplicity, the q source outputs have equal numbers of observations n_1=...=n_q=n, and the target output has fewer observations, i.e., n_t < n. These observations form the training set and n_test samples from the target output form the test set.
For comparison, we take four other reference methods as benchmarks:
* The non-regularized MGCP model, whose covariance structure is the same as the proposed model but without regularization, denoted as MGCP;
* A regularized MGCP model with a full covariance structure, i.e., constructing the covariance among sources, denoted as MGCP-RF;
* The two-stage method <cit.> denoted as BGCP-R, which first trains two-output GP models with regularization for each source and the target, then integrates the results of each sub-model in an empirical way;
* The single GP model constructed by a convolution process, in which only observations from the target output are used for training, denoted as GCP.
In MGCP-RF, the sources and target are modeled as follows:
y_i(x) =g_ii(x)∗ Z_i(x)+g_0i(x)∗ Z_0(x)+ϵ_i(x), i ∈ℐ^S
y_t(x) =∑_j ∈ℐg_jt(x) ∗ Z_j(x)+g_0t(x) ∗ Z_0(x)+ϵ_t(x),
where Z_0(x) captures the information shared among sources, and Z_i(x) captures the unique information in each source/target. This structure follows <cit.>, but is tailored for transfer learning. To achieve a similar source-selection effect as MGCP-R, we penalize the scale parameters in both g_0i(x) and g_it(x) as a group, i.e., ℙ_γ(θ_0)=γ∑_i=1^q √(α_0i^2+α_it^2). More details can be found in Appendix <ref>.
In BGCP-R, q regularized two-output GP models are trained using the data from each source and the target. The predictive distribution of each sub-model can be expressed as
f_t(x_*)| X_it, y_it∼𝒩(μ_i (x_*), V_if(x_*) ),
where X_it=(X_i,X_t), y_it=(y_i^T,y_t^T)^T, μ_i (x_*)=K_*^T(X_it,x_*)C(X_it,X_it)^-1y_it, V_if(x_*)= cov_tt^f(x_*,x_*)-K_*^T(X_it,x_*)C(X_it,X_it)^-1K_*(X_it,x_*). Then, the integrated results for BGCP-R is derived as
f_t(x_*)| X, y∼𝒩( ∑_i=1^qμ_i (x_*) V_if^-1(x_*)/∑_i=1^q V_if^-1(x_*) , q/∑_i=1^q V_if^-1(x_*) ),
which is an empirical combination of the predictions of each sub-model.
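The integration rule above amounts to inverse-variance weighting of the q sub-model predictions, e.g.:

import numpy as np

def bgcp_combine(mus, variances):
    """mus, variances: length-q arrays of sub-model predictive means/variances at x_*."""
    w = 1.0 / np.asarray(variances)
    mu = np.sum(np.asarray(mus) * w) / np.sum(w)
    var = len(w) / np.sum(w)
    return mu, var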
The Gaussian kernel in eq:smooth kernel is used for all methods, and the L_1 norm is used as the regularization function in MGCP-R and BGCP-R.
Regarding the model parameter settings, the scaling parameters {α_ii,α_it | i=1,...,q,t} and the noise parameters σ for all outputs are initialized with random values in [0,1], as are the length-scale diagonal matrices {Λ_ii,Λ_it | i=1,...,q,t}. For hyperparameter learning, we use the L-BFGS method in GPflow <cit.>, a Python library based on TensorFlow, to maximize the log-likelihood. For the smoothing of the L_1 regularization function, the parameter η in eq:smooth approximation is set to 10^-5.
Finally, to assess the prediction accuracy, we adopt the mean absolute error (MAE) criterion,
MAE=1/n_test∑_i=1^n_test| f_t(x_*,i)-f̂_t(x_*,i)|,
where f̂_t(x_*,i) is the predicted mean at x_*,i. We repeat each case G=100 times and present the distribution of the compared methods' MAE in a group of boxplots.
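In code, the criterion is simply:

import numpy as np

def mae(f_true, f_pred):
    """Mean absolute error over the test points."""
    return np.mean(np.abs(np.asarray(f_true) - np.asarray(f_pred)))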
§.§ Simulation case 1
In order to assess the performance of the different methods when negative transfer of knowledge exists, i.e., when learning some sources brings negative influence on the learning of the target, we adopt an example with one-dimensional input.
The 1D example has q=4 source outputs defined in 𝒳_1=[0,5]:
f_1(x) =0.3(x-3)^3, f_2(x) =0.3x^2+2sin(2x),
f_3(x) =(x-2)^2, f_4(x) =(x-1)(x-2)(x-4) ,
and one target output:
f_t(x)=0.2(x-3)^3+0.15x^2+sin(2x).
The standard deviation of the measurement noise is set as σ=0.2. It can be found that the target output is a linear combination of the outputs f_1 and f_2. The other source outputs, which have different order (f_3) or zero points (f_4), are set as less-correlated sources. The n=30 observations for each source are evenly spaced in 𝒳_1, and n_t=10 observations for the target are evenly spaced in the left domain, x ∈ [0,3]. The n_test=60 test points are sampled uniformly in 𝒳_1. Note that under such settings, the MAE at these test points contains both the interpolation error and the extrapolation error.
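A sketch of the data generation for this example (the random seed is arbitrary):

import numpy as np

rng = np.random.default_rng(1)
sigma = 0.2
f = [lambda x: 0.3 * (x - 3)**3,
     lambda x: 0.3 * x**2 + 2 * np.sin(2 * x),
     lambda x: (x - 2)**2,
     lambda x: (x - 1) * (x - 2) * (x - 4)]
f_t = lambda x: 0.2 * (x - 3)**3 + 0.15 * x**2 + np.sin(2 * x)

X_src = np.linspace(0, 5, 30)                        # n = 30 points per source
Y_src = [fi(X_src) + rng.normal(0, sigma, 30) for fi in f]
X_t = np.linspace(0, 3, 10)                          # n_t = 10, left half of the domain
y_t = f_t(X_t) + rng.normal(0, sigma, 10)
X_test = rng.uniform(0, 5, 60)                       # n_test = 60 test points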
Considering that the target is a combination of two sources, we benchmark with another method denoted as MGCP-T, which only uses the source outputs f_1 and f_2 to construct a non-regularized MGCP model. MGCP-T is set as the underlying true model and possesses the true covariance structure, wherein negative transfer will not happen. It presents the optimal predictive performance among all introduced methods.
Figure <ref> shows the boxplots of MAE in this example, and fig:case1 1D results shows the data of each source and the predicted trends of the target in one repetition of the 1D example.
Firstly, we focus on three methods, MGCP-R, MGCP-T and MGCP.
The results shown in fig:case1 boxplot illustrate the superior performance of our method.
MGCP-R performs similarly with MGCP-T and provides much more accurate and stable prediction than MGCP. Note that MGCP-T is the true model with the smallest median and variance value of MAE. This result exactly verifies the conclusion claimed in Section <ref> that our regularized model possesses the ability of selecting informative sources. The negative transfer of information caused by f_3 and f_4 is greatly reduced in the proposed method.
To state the above conclusion more clearly, we compare part of the estimated parameters of MGCP-T, MGCP-R and MGCP in one repetition of the 1D example. <ref> shows the parameters belonging to the smooth kernels g_it,i∈ℐ^S, which connect the target and each source. As shown in the table, three methods provide similar estimators except the scaling parameters in g_3t and g_4t, which are shrunk to nearly zero in MGCP-R but not in MGCP. We can also directly observe from fig:case1 1D results, a visualization of <ref>, that the predictive mean of the target by MGCP presents an obvious linear shift-up in the right domain, and the sources f_3 and f_4 also shift up linearly at the same area. Moreover, we would like to mention that our method is robust towards both the linear correlation and the non-linear correlation. As the correlation analysis shown in <ref>, in the 1D example, the Pearson correlation between f_4 and f_t is very high while the Kendall correlation between them is low. MGCP-R is not misled by the high linear correlation between f_4 and f_t since it can comprehensively consider both the linear and non-linear relationships and their combinations.
In addition, all source outputs are correlated with each other, which is not listed in <ref>.
An interesting observation is that MGCP-RF performs only comparably with MGCP-R in fig:case1 boxplot, although it predicts a little more accurately and also selects the two informative sources in the one repetition presented in <ref> and fig:case1 1D results.
We believe this is due to its larger parameter space: under the same circumstances, MGCP-RF needs 50% more parameters, which poses a great challenge for parameter estimation. The following analysis on higher-dimensional inputs and/or more outputs in Sections <ref> and <ref> also confirms our findings.
The median prediction error of BGCP-R is the largest. This is because BGCP-R focuses on pairwise information transfer from each individual source and cannot incorporate the information of all sources globally. As a result, negative transfer happens to BGCP-R when the target is a combination of sources, which leads to larger prediction error than using only the target data (GCP). This is one of the major shortcomings of BGCP-R, since it is very rare in practice that the target and a source share the same functional form. From the results in fig:case1 1D results, we can observe this influence more clearly: the predictive mean of BGCP-R has a valley shape similar to f_1 in the right domain.
Finally, the influence of the tuning parameter γ is worth attention; it plays a similar role to the tuning parameter in LASSO, i.e., there is a continuous selection path as we increase the value of γ. The larger γ is, the fewer sources are selected, which means a too-large γ may bring negative influence due to the exclusion of some relatively weakly informative sources. To demonstrate this, we apply MGCP-R to model f_1, f_2 and f_t with varying values of γ. More details and experimental results can be found in Appendix <ref>.
§.§ Simulation case 2
In this subsection, we apply the proposed framework to transfer information from a source with an inconsistent input domain to the target. We adopt q=3 source outputs:
f_1(x_1) =3sin(x_1),
f_2(x_1,x_2) =4cos(2x_1)+x_2^2+x_2,
f_3(x_1,x_2) =2sin(2x_1)+x_2^2,
and one target output:
f_t(x_1,x_2)=2sin (x_1)+x_2^2+x_2.
where x_1 ∈𝒳_1 = [-2,2] and x_2 ∈𝒳_2 = [-2,2]. The standard deviation of the measurement noise is also set as σ=0.2. In this case, the source f_1 has an input domain inconsistent with the target's. Besides, f_1 is set as the mean of the marginal distribution obtained by our domain adaptation method.
According to the notation in Section <ref>, for the source f_1, the common input feature is x^(c)=x_1 and the unique input feature is x^(t)=x_2. Thus, following the DAME procedure, we first generate n_1^'=8 induced data points for f_1, {x^(c)_1^',a, y_1^',a}_a=1^8, evenly spaced in 𝒳_1. Then, we choose another eight points {x^(t)_1^'',b}_b=1^8 evenly spaced in 𝒳_2.
The 64 pseudo data of the source f_1 can be obtained through eq:pseudo data, where ϵ_a,b∼𝒩(0, 0.2^2) is the same as the measurement noise of target.
For the other two sources, n=64 sample points are generated at the same locations in 𝒳_1 ×𝒳_2.
For the target output, n_t=24 sample points are located at the nodes of a 3 × 8 grid in [0,2]×[-2,2], and n_test=100 test points are uniformly spaced in 𝒳_1 ×𝒳_2.
In order to identify the effect of our domain adaptation method, the first source's data are not used in MGCP and BGCP-R.
The results shown in fig:case2 boxplot and fig:case2 results demonstrate the effectiveness of our modeling framework, especially the DAME approach. As the observations of the target are located in the half domain [0,2]×[-2,2], the information of the target's behavior along x_1 can only be borrowed from the source f_1, which leads to the superior performance of the proposed method in the boxplot of MAE.
Predictive results of one repetition in fig:case2 results also verify the above conclusion, where we can clearly see that MGCP-R is the only method providing accurate fitting in both x_1 and x_2 directions.
Besides, as shown in the boxplot, the predictive accuracy of MGCP and BGCP-R is better than that of GCP, because f_2 and f_3 can also provide some beneficial information for the target prediction. However, as more information along x_1 is contained in the first source, our method can effectively leverage this knowledge and predict the target more accurately.
§.§ Simulation case 3
The above simulation cases have demonstrated the effectiveness of our method with a small number of sources and input dimensions. In this section, we aim to verify the performance of MGCP-R with more sources and higher input dimensions.
Setting 1: To test the performance with more sources, we adopt a setting similar to the 1D example of Section <ref> and define the following four kinds of source functions:
f_k(x) =0.3(x-2.5-e_1^k)^3,
f_n_e+k(x) =0.3x^2+2sin(2x+e_2^k),
f_2n_e+k(x) =(x-1.5-e_3^k)^2,
f_3n_e+k(x) =(x-1)(x-2)(x-3.5-e_4^k) ,
where n_e is the number of sources of each kind, e_i^k are uniformly sampled from [0,1], and k∈{1,...,n_e}.
We define the target output as:
f_t(x)=0.2(x-2.5-e_1^1)^3+0.15x^2+sin(2x+e_2^1).
In this setting, we take n_e = 2,4,10, so the maximum number of outputs is 41 (including the target). We keep the other settings the same as in simulation case I and repeat the experiments 50 times.
Setting 2: To test the ability of our method with higher input dimensions, we define the following three kinds of sources:
f_k(x) =3∑_j=1^2sin(x_j+e_1j^k) ,
f_n_e+k(x) =4∑_j=1^2cos(2x_j+e_2j^k)+x_3^2+x_3+2x_4-x_5,
f_2n_e+k(x) =2∑_j=1^2sin[2(x_j+e_3j^k)]+x_3^2-x_4+2x_5,
where e_ij^k are uniformly sampled from [-0.25,0.25].
The target output is:
f_t(x)=2∑_j=1^2sin(x_j+e_1j^1)+x_3^2+x_3+2x_4-x_5.
In this setting, we take n_e = 1,2, so the maximum number of outputs is 7. The input dimension is 5, of which 3 dimensions are inconsistent.
Similar to simulation case II, {f_i(x)}_i=1^n_e are set as the means of the two-dimensional marginal distributions. To apply the domain adaptation method to these sources, we first generate n_1^'=10 induced data in 𝒳_1={x_1,x_2} from x∼𝒩(0,I_2). Then, we randomly choose 10 points in 𝒳_2={x_3,x_4,x_5} (x∼𝒩(0,I_3)) for each induced datum to generate 100 pseudo data. For the other sources, n=100 observations per source are sampled randomly from x∼𝒩(0,I_5).
For the target, 150 samples are generated in the same way, and the 50 of them satisfying x_1>0 are picked as training data. Then, another 150 samples are randomly generated as test data.
The average prediction error shown in <ref> reveals that MGCP-R still performs best with more sources and higher input dimensions. For a moderate number of sources (40 when n_e=10), MGCP suffers a severe negative transfer effect compared with GCP. As n_e increases, the difference between MGCP-RF and MGCP-R grows, which is expected due to the larger parameter space of MGCP-RF and thus the large number of hyper-parameters to be optimized.
We also provide the average optimization and prediction time in <ref> for one random start (five random starts in one repetition). The computational load of MGCP-RF is much heavier than that of the other methods, which is a severe drawback. Comparing MGCP-R and MGCP, their prediction times are close, but the former's optimization time is shorter, which is another advantage of regularization. BGCP-R needs less optimization time than MGCP-R in setting 1 owing to its smaller parameter space, which makes local optima easier to reach. However, BGCP-R's optimization time in setting 2 exceeds that of MGCP-R. This is because the difference between their parameter dimensions is smaller than in setting 1, and inverting the covariance matrix (O(qn^3+n_t^3) for MGCP-R, O(q(n+n_t)^3) for BGCP) dominates the computational complexity of the optimization. A back-of-the-envelope illustration is given below.
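To make the stated complexity comparison concrete, the following sketch counts the dominating inversion costs; the problem sizes are illustrative assumptions in the spirit of setting 2:

```python
# Dominating covariance-inversion costs, as stated above:
# MGCP-R: O(q * n^3 + n_t^3); BGCP-R: O(q * (n + n_t)^3).
def mgcp_r_cost(q, n, n_t):
    return q * n**3 + n_t**3

def bgcp_r_cost(q, n, n_t):
    return q * (n + n_t)**3

# Illustrative setting-2-like sizes: q = 3 sources, n = 100, n_t = 50.
print(mgcp_r_cost(3, 100, 50))   # 3,125,000
print(bgcp_r_cost(3, 100, 50))   # 10,125,000 -> BGCP-R's inversion dominates
```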
§.§ Real case of ceramic manufacturing
In this real case study, the goal is to predict the response surface of ceramic product's density.
§.§.§ Data description
The data are collected through two groups of experiments differing in manufacturing methods and process parameters.
The first group contains 28 (4 × 7) experiments using the dry pressing manufacturing technique under 4 pressures and 7 temperatures.
The second group contains 16 (4 × 4) experiments using stereolithography-based additive manufacturing under 4 solids loadings and 4 temperatures.
Table <ref> summarizes the values of controlled parameters and all other process parameters are kept fixed in each group.
As the temperature is the only shared input parameter, domain adaptation is needed to leverage information from the data of one manufacturing method for the other.
Two methods, the mass-volume method and the Archimedes method, are used to measure the density, so there are two sets of measurements for each group.
The overall 4 datasets are shown in fig:real case data, where the first index of dataset represents the manufacturing method and the second index represents the measurement method.
Density data of each dataset are standardized to have zero mean and unit variance.
Note that for the same group of experiments, the two measurement methods give different response surfaces because the size of the ceramic products is small.
In this case, the measurement error of the Archimedes method might be higher, resulting in negative transfer if we incorporate it in the transfer learning.
We treat the density data of 1-1 as the target output and the remaining 3 datasets as source outputs.
For the target, only 8 data points are randomly chosen as observations and the rest 20 points are used for testing.
For MGCP and BGCP-R, only 1-2, which has the same input domain as the target, is used as the source data in the model. Under this condition, MGCP degenerates to a two-output Gaussian process model, so the main difference between it and BGCP-R is that the regularization in the latter provides the ability to reduce negative transfer of knowledge.
For our method, we apply DAME to the sources 2-1 and 2-2 as follows. First, we marginalize the original data to the input domain containing only the `temperature' feature. Then we conduct kernel regression to obtain the mean of the marginal distribution. In this case, we use 7 induced points to match the number of target data. We then expand them to the target input domain and obtain 28 pseudo data. Note that the adaptation process of 2-1 has been shown in fig:domain adaptation. Thus, we have an equal number of data for the original dataset 1-2 and the pseudo datasets of 2-1 and 2-2, i.e., n_1-2=n_2-1=n_2-2=28. Finally, they are taken as 3 source outputs to establish a regularized MGCP model, where the L_1-norm regularization is implemented. A sketch of this pipeline is given below.
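The following sketch illustrates the marginalize-regress-expand pipeline; the Nadaraya-Watson estimator, the bandwidth, and all temperature/pressure values are illustrative assumptions, since Table <ref> is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

def nw_mean(t_query, t_obs, y_obs, h=30.0):
    # Nadaraya-Watson kernel regression for the marginal mean of
    # density versus temperature (bandwidth h is an assumption).
    w = np.exp(-0.5 * ((t_query[:, None] - t_obs[None, :]) / h) ** 2)
    return (w @ y_obs) / w.sum(axis=1)

# Hypothetical source 2-1 data: 16 runs over 4 solids loadings x 4 temperatures.
temps_src = np.tile(np.linspace(1100.0, 1400.0, 4), 4)
dens_src = rng.normal(0.0, 1.0, 16)            # standardized density stand-in

# Steps 1-2: marginalize to temperature and induce 7 points.
t_induced = np.linspace(1100.0, 1400.0, 7)
y_induced = nw_mean(t_induced, temps_src, dens_src)

# Step 3: expand to the target domain by pairing each induced point with
# the 4 pressure levels of the dry-pressing group -> 7 x 4 = 28 pseudo data.
pressures_tgt = np.array([1.0, 2.0, 3.0, 4.0])  # placeholder pressure levels
X_pseudo = np.array([[p, t] for t in t_induced for p in pressures_tgt])
y_pseudo = np.repeat(y_induced, pressures_tgt.size)
print(X_pseudo.shape, y_pseudo.shape)           # (28, 2) (28,)
```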
§.§.§ Performance Evaluation
First, we provide some intuitive understanding of the advantages of our method.
From the experimental data, it can be found that temperature is the key factor affecting the density of ceramic products.
The trend of density in 1-1 is similar to that in 2-1, which makes it feasible to transfer information from 2-1 to 1-1, i.e., from one manufacturing method to another.
This application is highly desirable in the real world, as lab experiments are very expensive.
For example, each data point in fig:real case data takes 20 hours to produce.
Borrowing knowledge from previous research or experiments can greatly reduce the number of new samples that must be generated, yielding a more accurate response surface efficiently and cheaply.
We repeat the case 50 times and the results are shown in Table <ref>.
The mean and variance of the MAE illustrate that MGCP-R outperforms the other benchmarks, with the help of regularization and the data from the other manufacturing method.
The results of the full-covariance method MGCP-RF are close to those of MGCP-R, as the optimization problem in the large parameter space is not severe for MGCP-RF with a small number of data. Nevertheless, the larger variance of MGCP-RF compared to MGCP-R, together with the lower computational complexity of MGCP-R, still demonstrates the superiority of our proposed method.
The performance of BGCP-R is almost the same as GCP, while MGCP performs worst in all methods.
This suggests that the information transferred from the source 1-2 misleads the prediction of the target in MGCP, whereas BGCP-R reduces its influence to nearly zero through regularization.
From the predictive results in fig:caseReal results, we can clearly see that MGCP-R is capable of recovering the response surface more accurately with only a few experimental samples, provided some historical sources offer useful information.
§ CONCLUSION
We propose a regularized MGCP modeling framework that can select informative source outputs globally and transfer information from sources with both consistent and inconsistent input domains.
Our work first uses a convolution process to establish a special covariance structure that models the similarity within and across outputs. Then, regularized maximum log-likelihood estimation is performed based on this structure.
Some statistical properties are also derived to guarantee the effectiveness of our method.
A domain adaptation approach based on marginalization and expansion successfully deals with the inconsistent input domains of sources.
Both simulation cases and the real case of ceramic manufacturing demonstrate the superiority of our method.
There are several open topics worthy of investigation in the future based on our work.
The first one is to apply our method to classification problems, where the posterior distribution needs to be approximated as it does not have an explicit form.
One important issue in classification problems is that the data usually contain a considerable number of features, e.g., gene expression data, which increases the complexity and computational burden of the GP model.
Therefore, the selection of informative sources and critical features should be combined, and computationally efficient algorithms are needed to train the model with high-dimensional data.
The second one is to consider correlated noise.
For example, in time series analysis, auto-correlated noise should be considered, which may greatly increase the prediction accuracy and improve the flexibility of the MGCP model.
Thirdly, in the proposed approach, MGCP modeling and domain adaptation are treated as two separate tasks. Jointly optimizing these two tasks in a unified framework will be studied in the future.
§ ACKNOWLEDGMENT
Williams2006
C. K. Williams and C. E. Rasmussen, Gaussian processes for machine learning. MIT Press, Cambridge, MA, 2006, vol. 2, no. 3.
Shi2011
J. Q. Shi and T. Choi, Gaussian process regression analysis for functional data. CRC Press, 2011.
Williams1996
C. K. Williams and C. E. Rasmussen, “Gaussian processes for regression,” 1996.
Stegle2008
O. Stegle, S. V. Fallert, D. J. MacKay, and S. Brage, “Gaussian process robust regression for noisy heart rate data,” IEEE Transactions on Biomedical Engineering, vol. 55, no. 9, pp. 2143–2151, 2008.
Boyle2005
P. Boyle and M. Frean, “Dependent gaussian processes,” Advances in Neural Information Processing Systems, vol. 17, pp. 217–224, 2005.
Haas1996
T. C. Haas, “Multivariate geostatistics: an introduction with applications,” Journal of the American Statistical Association, vol. 91, no. 435, pp. 1375–1377, 1996.
Pan2009
S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345–1359, 2009.
Cho2019
W. Cho, Y. Kim, and J. Park, “Hierarchical anomaly detection using a multioutput gaussian process,” IEEE Transactions on Automation Science and Engineering, vol. 17, no. 1, pp. 261–272, 2019.
Chai2008
K. M. Chai, S. Klanke, C. Williams, and S. Vijayakumar, “Multi-task gaussian process learning of robot inverse dynamics,” 2008.
Liu2018
H. Liu, J. Cai, and Y.-S. Ong, “Remarks on multi-output gaussian process regression,” Knowledge-Based Systems, vol. 144, pp. 102–121, 2018.
Conti2010
S. Conti and A. O’Hagan, “Bayesian emulation of complex multi-output and dynamic computer models,” Journal of Statistical Planning and Inference, vol. 140, no. 3, pp. 640–651, 2010.
Goovaerts1997
P. Goovaerts et al., Geostatistics for natural resources evaluation. Oxford University Press on Demand, 1997.
Goulard1992
M. Goulard and M. Voltz, “Linear coregionalization model: tools for estimation and choice of cross-variogram matrix,” Mathematical Geology, vol. 24, no. 3, pp. 269–286, 1992.
Majumdar2007
A. Majumdar and A. E. Gelfand, “Multivariate spatial modeling for geostatistical data using convolved covariance functions,” Mathematical Geology, vol. 39, no. 2, pp. 225–245, 2007.
Alvarez2011
M. A. Alvarez and N. D. Lawrence, “Computationally efficient convolved multiple output gaussian processes,” The Journal of Machine Learning Research, vol. 12, pp. 1459–1500, 2011.
Rosenstein2005
M. T. Rosenstein, Z. Marx, L. P. Kaelbling, and T. G. Dietterich, “To transfer or not to transfer,” in NIPS 2005 workshop on transfer learning, vol. 898, 2005, pp. 1–4.
Weiss2016
K. Weiss, T. M. Khoshgoftaar, and D. Wang, “A survey of transfer learning,” Journal of Big Data, vol. 3, no. 1, pp. 1–40, 2016.
Kontar2020
R. Kontar, G. Raskutti, and S. Zhou, “Minimizing negative transfer of knowledge in multivariate gaussian processes: A scalable and regularized approach,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
Kontar2018
R. Kontar, S. Zhou, C. Sankavaram, X. Du, and Y. Zhang, “Nonparametric modeling and prognosis of condition monitoring signals using multivariate gaussian convolution processes,” Technometrics, vol. 60, no. 4, pp. 484–496, 2018.
Li2013
W. Li, L. Duan, D. Xu, and I. W. Tsang, “Learning with augmented features for supervised and semi-supervised heterogeneous domain adaptation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 6, pp. 1134–1148, 2013.
He2019
L. He, F. Fei, W. Wang, and X. Song, “Support-free ceramic stereolithography of complex overhanging structures based on an elasto-viscoplastic suspension feedstock,” ACS Applied Materials & Interfaces, vol. 11, no. 20, pp. 18849–18857, 2019.
Daume2009
H. Daumé III, “Frustratingly easy domain adaptation,” arXiv preprint arXiv:0907.1815, 2009.
Shi2010
X. Shi, Q. Liu, W. Fan, S. Y. Philip, and R. Zhu, “Transfer learning on heterogenous feature spaces via spectral transformation,” in 2010 IEEE International Conference on Data Mining. IEEE, 2010, pp. 1049–1054.
Xiao2015
M. Xiao and Y. Guo, “Feature space independent semi-supervised domain adaptation via kernel matching,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 1, pp. 54–66, 2015.
Kasarla2021
P. Kasarla, C. Wang, T. L. Brown, and D. McGehee, “Modeling and prediction of driving performance measures based on multi-output convolutional gaussian process,” Accident Analysis & Prevention, vol. 161, p. 106360, 2021.
Moreno2018
P. Moreno-Muñoz, A. Artés, and M. Alvarez, “Heterogeneous multi-output gaussian process prediction,” Advances in neural information processing systems, vol. 31, 2018.
Bruinsma2020
W. Bruinsma, E. Perim, W. Tebbutt, S. Hosking, A. Solin, and R. Turner, “Scalable exact inference in multi-output gaussian processes,” in International Conference on Machine Learning. PMLR, 2020, pp. 1190–1201.
Yu2021
Z. Yu, M. Zhu, M. Trapp, A. Skryagin, and K. Kersting, “Leveraging probabilistic circuits for nonparametric multi-output regression,” in Uncertainty in Artificial Intelligence. PMLR, 2021, pp. 2008–2018.
van2017
M. Van der Wilk, C. E. Rasmussen, and J. Hensman, “Convolutional gaussian processes,” Advances in Neural Information Processing Systems, vol. 30, 2017.
Walker2019
I. Walker and B. Glocker, “Graph convolutional gaussian processes,” in International Conference on Machine Learning. PMLR, 2019, pp. 6495–6504.
Barry1996
R. P. Barry, M. Jay, and V. Hoef, “Blackbox kriging: spatial prediction without specifying variogram models,” Journal of Agricultural, Biological, and Environmental Statistics, pp. 297–322, 1996.
Fricker2013
T. E. Fricker, J. E. Oakley, and N. M. Urban, “Multivariate gaussian process emulators with nonseparable covariance structures,” Technometrics, vol. 55, no. 1, pp. 47–56, 2013.
Fan2001
J. Fan and R. Li, “Variable selection via nonconcave penalized likelihood and its oracle properties,” Journal of the American Statistical Association, vol. 96, no. 456, pp. 1348–1360, 2001.
Basawa2014
I. V. Basawa, Statistical Inference for Stochastic Processes: Theory and Methods. Elsevier, 2014.
GPflow2017
A. G. d. G. Matthews, M. van der Wilk, T. Nickson, K. Fujii, A. Boukouvalas, P. León-Villagrá, Z. Ghahramani, and J. Hensman, “GPflow: A Gaussian process library using TensorFlow,” Journal of Machine Learning Research, vol. 18, no. 40, pp. 1–6, Apr. 2017.
Xinming Wang
received the B.S. degree in Mechanical Engineering from Tsinghua University, Beijing, China, in 2020. He is currently working towards the Ph.D. degree in Industrial and System Engineering with Peking University, Beijing, China. His current research interests include data science, transfer learning, and intelligent manufacturing.
Chao Wang
is an Assistant Professor in the Department of Industrial and Systems Engineering at the University of Iowa. He received his B.S. from the Hefei University of Technology in 2012, and M.S. from the University of Science and Technology of China in 2015, both in Mechanical Engineering, and his M.S. in Statistics and Ph.D. in Industrial and Systems Engineering from the University of Wisconsin-Madison in 2018 and 2019, respectively. His research interests include statistical modeling, analysis, monitoring and control for complex systems. He is a member of INFORMS, IISE, and SME.
Xuan Song
is an assistant professor at the department of industrial and systems engineering at the University of Iowa. His research interest is additive manufacturing process development and optimization as well as novel applications of AM technologies in various areas, such as biomedical imaging, tissue engineering, energy harvesting, and robotics. At UIowa, Dr. Song's research focuses on the development of next-generation additive manufacturing processes with multi-material, multi-scale or multi-directional capabilities. He obtained his Ph.D. degree in industrial and systems engineering from the University of Southern California in 2016.
Levi Kirby
is a PhD student at the University of Iowa. He obtained his Bachelor's and Master's degrees from Western Illinois University in engineering technology. At Iowa, his research focuses on various forms of additive manufacturing, including printing of energetic composites and highly dense ceramics. Throughout his collegiate career, he has received the E. Wayne Kay Scholarship, was named the Departmental Scholar, graduated Magna Cum Laude, and was a 3MT finalist.
Jianguo Wu
received the B.S. degree in Mechanical Engineering from Tsinghua University, China in 2009, the M.S. degree in Mechanical Engineering from Purdue University in 2011, and M.S. degree in Statistics in 2014 and Ph.D. degree in Industrial and Systems Engineering in 2015, both from University of Wisconsin-Madison.
Currently, he is an Assistant Professor in the Dept. of Industrial Engineering and Management at Peking University, Beijing, China. He was an Assistant Professor at the Dept. of IMSE at UTEP, TX, USA from 2015 to 2017.
His research interests are mainly in quality control and reliability engineering of intelligent manufacturing and complex systems through engineering-informed machine learning and advanced data analytics. He is a recipient of the STARS
Award from the University of Texas Systems, Overseas Distinguished Young Scholars from China, P&G Faculty Fellowship, BOSS Award from MSEC, and several Best Paper Award/Finalists from INFORMS/IISE Annual Meeting. He is an Associate Editor of the Journal of Intelligent Manufacturing, and a member of IEEE, INFORMS, IISE, and SME.
§ DERIVATION OF COVARIANCE FUNCTION IN CONVOLUTION PROCESS
For the convolution process:
f_i(x)=g_i(x)∗ Z (x)=∫_-∞^∞g_i (x-u) Z (u) du,
If Z(x) is a commonly used white Gaussian noise process, i.e., cov(Z(x), Z(x^'))=δ(x-x^') and 𝔼(Z(x))=0, then the cross covariance is derived as:
cov_ij^f (x, x^') = cov{ g_i(x)∗ Z (x), g_j(x^')∗ Z (x^')}
=𝔼{∫_-∞^∞g_i (x-u) Z (u) du∫_-∞^∞g_j (x^'-u^') Z (u^') du^'}
=∫_-∞^∞∫_-∞^∞g_i (u)g_j (u^') 𝔼{ Z (x-u) Z (x^'-u^') } dudu^'
=∫_-∞^∞∫_-∞^∞ g_i(u)g_j(u^') δ(x-u-x^'+u^') du du^'
=∫_-∞^∞ g_i(u)g_j(u-v)d u,
where v=x-x^' and the last equality is based on the property of the Dirac function that ∫ g(u^')δ(u^'-x)du^'=g(x). A numerical sanity check of this formula is sketched below.
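As a quick check of eq:cov in convolution process, the sketch below simulates discretized white noise, convolves it with two Gaussian kernels, and compares the Monte Carlo covariance against quadrature of ∫ g_i(u)g_j(u-v)du; the Gaussian kernel form and all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def g(u, a, ell):                        # assumed Gaussian smoothing kernel
    return a * np.exp(-u ** 2 / (2.0 * ell ** 2))

du = 0.02
u = np.arange(-5.0, 5.0, du)             # grid approximating the real line

a_i, l_i = 1.0, 0.5
a_j, l_j = 0.8, 0.7
x, xp = 0.3, -0.2                        # evaluation points, v = x - x'

# Monte Carlo: f(x) = sum_k g(x - u_k) Z_k du, with Var(Z_k) = 1/du so that
# the discretized Z mimics white noise with cov(Z(x), Z(x')) = delta(x - x').
n_rep = 5000
Z = rng.normal(0.0, np.sqrt(1.0 / du), size=(n_rep, u.size))
f_i = (g(x - u, a_i, l_i) * Z).sum(axis=1) * du
f_j = (g(xp - u, a_j, l_j) * Z).sum(axis=1) * du
mc_cov = np.mean(f_i * f_j)

# Quadrature of the claimed formula with v = x - x'.
v = x - xp
quad_cov = np.sum(g(u, a_i, l_i) * g(u - v, a_j, l_j)) * du
print(mc_cov, quad_cov)                  # the two estimates should agree closely
```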
For our MGCP structure:
y_i(x) =f_i(x)+ϵ_i(x)=g_ii(x)∗ Z_i(x)+ϵ_i(x), i ∈ℐ^S
y_t(x) =f_t(x)+ϵ_t(x)=∑_j ∈ℐg_jt(x) ∗ Z_j(x)+ϵ_t(x),
the source-target covariance function can be calculated as:
cov_it^f(x,x^')
= cov (f_i(x),f_t(x^'))
= cov{g_ii(x)∗ Z_i(x), ∑_j ∈ℐg_jt(x^') ∗ Z_j(x^') }
= ∑_j ∈ℐ cov{ g_ii(x)∗ Z_i(x), g_jt(x^') ∗ Z_j(x^') }
=∫_-∞^∞ g_ii(u)g_it(u-v)du, i ∈ℐ^S
where the last equality is based on eq:cov in convolution process, and v=x-x^'. In the same way, we can derive the auto-covariances as
cov_ii^f(x,x^') =∫_-∞^∞ g_ii(u)g_ii(u-v)du, i∈ℐ^S
cov_tt^f(x,x^') =∑_j ∈ℐ∫_-∞^∞ g_jt(u)g_jt(u-v)du.
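For Gaussian smoothing kernels (an assumption; the derivation above leaves the g_ij generic), these integrals have a closed form, sketched below with placeholder scale and length parameters:

```python
import numpy as np

def cross_cov(v, a1, l1, a2, l2):
    # Closed form of int g1(u) g2(u - v) du for Gaussian kernels
    # g_k(u) = a_k * exp(-u^2 / (2 * l_k^2)).
    s2 = l1 ** 2 + l2 ** 2
    return (a1 * a2 * np.sqrt(2.0 * np.pi * l1 ** 2 * l2 ** 2 / s2)
            * np.exp(-v ** 2 / (2.0 * s2)))

v = 0.5
# Source-target block cov_it^f: kernels g_ii and g_it.
print(cross_cov(v, 1.0, 0.5, 0.6, 0.7))
# Target auto-covariance cov_tt^f: sum over latent processes of g_jt with itself.
params_jt = [(0.6, 0.7), (0.4, 0.3), (0.9, 0.5)]   # (alpha_jt, ell_jt), assumed
print(sum(cross_cov(v, a, l, a, l) for a, l in params_jt))
```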
§ PROOF OF THEOREM 1
Suppose that g_it(x)=0, ∀ i ∈𝒰⊆ℐ^S for all x∈𝒳.
For notational convenience, suppose 𝒰={1,2,...,h} with h≤ q; then the predictive distribution of the model at any new input x_* is unrelated to {f_1,f_2,...,f_h} and reduces to:
p(y_t(x_*) | y)=𝒩( k_+^T C_+^-1y_+,
cov_tt^f(x_*,x_*)+σ_t^2-k_+^T C_+^-1k_+),
where k_+=(K_h+1,*^T,...,K_q,*^T,K_t,*^T)^T, y_+=(y_h+1^T,...,y_q^T,y_t^T)^T, and
C_+=
[ C_h+1,h+1 ⋯ 0 C_h+1,t; ⋮ ⋱ ⋮ ⋮; 0 ⋯ C_q,q C_q,t; C_h+1,t^T ⋯ C_q,t^T C_t,t ].
Proof. Recall that
cov_jt^y(x,x^') = cov_jt^f(x,x^')
=∫_-∞^∞ g_jj(u)g_jt(u-v)du,
cov_tt^y(x,x^') = cov_tt^f(x,x^')+σ_t^2 δ(x-x^')
=∑_j ∈ℐ∫_-∞^∞ g_jt(u)g_jt(u-v)du+σ_t^2 δ(x-x^'),
for all j ∈{1,2,...,q}, so g_it(x)=0 for i∈{1,2,...,h}, h ≤ q, implies that cov_it^y (x,x^')=0 for all i ∈{1,2,...,h} and
cov_tt^y(x,x^')=∑_i=h+1^t ∫_-∞^∞ g_it(u)g_it(u-v)du+σ_t^2 δ(x-x^').
Therefore, we have C_i,t=0 for i ∈{1,2,...,h}, and we partition the covariance matrix as
C=
[ C_- 0; 0 C_+ ],
where
C_-=
[ C_1,1 0 ⋯ 0; 0 C_2,2 ⋯ 0; ⋮ ⋮ ⋱ ⋮; 0 0 ⋯ C_h,h; ].
The predictive distribution at point x_* is
y_t(x_*)∼ N(K_*^T C^-1y, cov_tt^f (x_*,x_*)+σ_t^2-K_*^T C^-1K_*).
Also, based on the fact that cov_it^y(x,x^')=0 for all i ∈{1,2,...,h}, we have K_*=(0, k_+^T)^T. Let y_-=(y_1^T,...,y_h^T)^T, so that y=(y_-^T,y_+^T)^T. Therefore,
K_*^T C^-1y =(0, k_+^T)
[ C_- 0; 0 C_+ ]^-1
(y_-^T,y_+^T)^T
=(0, k_+^T)
[ C_-^-1 0; 0 C_+^-1 ]
(y_-^T,y_+^T)^T
=k_+^T C_+^-1y_+,
K_*^T C^-1K_* =(0, k_+^T)
[ C_- 0; 0 C_+ ]^-1
(0, k_+^T)^T
=k_+^T C_+^-1k_+.
Note that the auto-covariance matrix of the target output f_t, C_tt, is also unrelated to the observed data {X_i | i=1,2,...,h} from the source outputs {f_i| i=1,2,...,h}. As a result, the predictive distribution is completely independent of these outputs. This completes the proof. The block-matrix identity above can also be verified numerically, as sketched below.
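The sketch builds a random covariance matrix with the stated zero blocks and checks that the full and reduced predictive means coincide; the block sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)

def rand_spd(n):                     # random symmetric positive-definite block
    a = rng.normal(size=(n, n))
    return a @ a.T + n * np.eye(n)

n_minus, n_plus = 5, 7               # sizes of the pruned / retained blocks
C_minus, C_plus = rand_spd(n_minus), rand_spd(n_plus)
C = np.block([[C_minus, np.zeros((n_minus, n_plus))],
              [np.zeros((n_plus, n_minus)), C_plus]])

k_plus = rng.normal(size=n_plus)
K_star = np.concatenate([np.zeros(n_minus), k_plus])   # zero cross-covariances
y = rng.normal(size=n_minus + n_plus)
y_plus = y[n_minus:]

# Full-model predictive mean equals the reduced-model predictive mean.
full = K_star @ np.linalg.solve(C, y)
reduced = k_plus @ np.linalg.solve(C_plus, y_plus)
print(np.allclose(full, reduced))    # True
```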
§ REGULARITY CONDITIONS
In this part, we state the regularity conditions for the consistency theorem of the MLE θ̂_#, which are formulated in <cit.>.
Denote y with total N observations as y^N, and let
p_k(θ) = p(y^k| θ) /p(y^k-1| θ)
for each k. Assume p_k(θ) is twice differentiable with respect to θ in a neighborhood of θ^*, and that the support of p(y^N| θ) is independent of θ in that neighborhood. Define ϕ_k(θ) = log p_k(θ), with first derivative ϕ_k^'(θ) and second derivative ϕ_k^''(θ).
For simplicity and without loss of generality, we only consider the conditions for one-dimensional case. Define ϕ_k^*'=ϕ_k^'(θ^*) and ϕ_k^*''=ϕ_k^''(θ^*). Let ℱ_N be the σ-field generated by y_j, 1 ≤ j ≤ N, and ℱ_0 be the trivial σ-field. Define the random variable i_k^* = var(ϕ_k^*' | ℱ_k-1)=𝔼[(ϕ_k^*')^2 | ℱ_k-1] and I_N^* = ∑_k=1^N i_k^*. Define S_N = ∑_k=1^N ϕ_k^*' and S_N^* = ∑_k=1^N ϕ_k^*''+I_N^*. If the following conditions hold:
(c1) ϕ_k(θ) is thrice differentiable in the neighborhood of θ^*. Let ϕ_k^*'''=ϕ_k^'''(θ^*) be the third derivative,
(c2) ∫ p(y^N | θ) d μ^N(y^N) is twice differentiable with respect to θ in the neighborhood of θ^*,
(c3) 𝔼|ϕ_k^*''| < ∞ and 𝔼|ϕ_k^*'' + (ϕ_k^*')^2| < ∞.
(c4) There exists a sequence of constants K(N) →∞ as N →∞ such that:
(i) K(N)^-1 S_N →_p 0,
(ii) K(N)^-1 S_N^* →_p 0,
(iii) there exists a(θ^*) > 0 such that ∀ϵ > 0, P[K(N)^-1I_N^* ≥ 2a(θ^*)] ≥ 1-ϵ for all N ≥ N(ϵ),
(iv) K(N)^-1∑_k=1^N 𝔼|ϕ_k^*'''| < M < ∞ for all N,
then the MLE θ̂_# is consistent for θ^*, and there exists a sequence r_N with r_N →∞ as N→∞ such that
θ̂_#-θ^* = O_P(r_N^-1).
§ PROOF OF THEOREM 2
Suppose that the MLE for L(θ|y), θ̂_#, is r_N-consistent, i.e., the non-penalized maximum log-likelihood estimator is consistent. If max{ |ℙ^''_γ(θ_i0^*)|: θ_i0^* ≠ 0}→ 0, then there exists a local maximizer θ̂ of L_ℙ(θ|y) such that θ̂-θ^*=O_P(r_N^-1+r_0), where r_0=max{ |ℙ^'_γ(θ_i0^*)|: θ_i0^* ≠ 0}.
Proof. Recall the assumptions in Section <ref>. For the unpenalized log-likelihood L(θ), the MLE θ̂_# is r_N-consistent, where r_N is a sequence such that r_N →∞ as N →∞. We then have L^'(θ^*)=O_P(r_N) and I_N(θ^*)=O_P(r_N^2), which follow from the standard argument based on the consistency of the estimator. Building on this, we study the asymptotic properties of the penalized likelihood L_ℙ(θ)=L(θ)-r_N^2ℙ_γ(θ_0). Here we multiply the penalty function by r_N^2 to prevent the penalty term from degenerating as N →∞. The following proof is similar to that of Fan and Li <cit.>, but is based on dependent observations.
To prove theorem 2, we need to show that for any given ϵ>0, there exists a large constant U such that:
P{sup_‖u‖=U L_ℙ(θ^*+r_N^+u)<L_ℙ(θ^*) }≥ 1-ϵ,
where r_N^+=r_N^-1+r_0. This implies that, with probability at least 1-ϵ, there exists a local maximum in the ball {θ^*+r_N^+u: ‖u‖≤ U}, so the local maximizer θ̂ satisfies θ̂-θ^*=O_P(r_N^+).
By ℙ_γ(0)=0, we have
L_ℙ(θ^*+r_N^+u)-L_ℙ(θ^*)
≤ L(θ^*+r_N^+u)-L(θ^*)
-r_N^2∑_i=h+1^q [ ℙ_γ(|θ_i0^*+r_N^+u_i0|)-ℙ_γ(|θ_i0^*|) ],
where h and q are the numbers of zero components and of all components in θ_0^*, respectively, and u_i0 is the element of u corresponding to θ_i0. Let I_N(θ^*) be the finite and positive definite information matrix at θ^* with N observations. Applying a Taylor expansion to the likelihood function, we have that
L_ℙ (θ^*+r_N^+u)-L_ℙ(θ^*)
≤ r_N^+ L^'(θ^*)^Tu-1/2(r_N^+)^2u^T I_N(θ^*)u[1+o_P(1)]
-r_N^2∑_i=h+1^q { r_N^+ ℙ_γ^'(|θ_i0^*|) sign (θ_i0^*) u_i0
+1/2(r_N^+)^2ℙ_γ^''(|θ_i0^*|)u_i0^2[1+o_P(1)] },
Note that L^'(θ^*)=O_P(r_N) and I_N(θ^*)=O_P(r_N^2), so the first term on the right-hand side of t2-proof-Taylor is of order O_P(r_N^+r_N), while the second term is O_P( (r_N^+r_N)^2 ). By choosing a sufficiently large U, the first term can be dominated by the second term uniformly in ‖u‖=U. Besides, the absolute value of the third term is bounded by
√(q-h) r_N^2 r_N^+ r_0 ‖u‖+(r_Nr_N^+)^2 max{ |ℙ^''_γ(θ_i0^*)|: θ_i0^* ≠ 0}‖u‖^2,
which is also dominated by the second term, as it is of order o_P((r_Nr_N^+)^2 ). Thus, t2-proof-target holds and the proof is complete.
§ PROOF OF THEOREM 3
Let θ_10^* and θ_20^* contain the zero and non-zero components in θ_0^* respectively.
Assume the conditions in <ref> also hold, and that θ̂ is r_N-consistent under a proper choice of γ in ℙ_γ(θ_0). If
liminf_N →∞ liminf_θ→ 0^+ γ^-1ℙ^'_γ(θ) >0 and (r_N γ)^-1→ 0,
then
N →∞limP ( θ̂_10=0)=1.
Proof. To prove this theorem, we only need to show that for a small ϵ_N=Ur_N^-1, where U is a given constant, and for i=1,...,h,
∂ L_ℙ (θ)/∂θ_i0·θ_i0<0, for 0<|θ_i0|<ϵ_N.
By Taylor's expansion,
∂ L_ℙ (θ)/∂θ_i0
=∂ L (θ)/∂θ_i0-r_N^2 ℙ_γ^'(|θ_i0|) sign (θ_i0)
=∂ L (θ^*)/∂θ_i0+[ ∂( ∂ L (θ^*)/∂θ_i0)/ ∂θ]^T(θ-θ^*)[1+o_P(1)]
-r_N^2ℙ_γ^'(|θ_i0|) sign (θ_i0).
Since ∂ L (θ^*)/∂θ_i0=O_P(r_N) and ∂( ∂ L (θ^*)/∂θ_i0)/ ∂θ_j =O_P(r_N^2) by the standard argument for an r_N-consistent estimator, we have
∂ L_ℙ (θ)/∂θ_i0 =O_P(r_N)-r_N^2ℙ_γ^'(|θ_i0|) sign (θ_i0)
=r_N^2γ( O_P(1/r_Nγ)-γ^-1ℙ_γ^'(|θ_i0|) sign (θ_i0) ).
Because liminf_N →∞ liminf_θ→ 0^+ γ^-1ℙ^'_γ(θ) >0 and (r_N γ)^-1→ 0, ∂ L_ℙ (θ)/∂θ_i0 is positive when θ_i0 is negative, and vice versa. As a result, t3-proof-target follows. This completes the proof.
§ INTERPRETATION OF THE BENCHMARK: MGCP-RF
The illustration of MGCP-RF is shown in fig:structure-MGCP-RF.
In this structure, the target f_t is generated by three kinds of latent processes: Z_0(x), {Z_i (x)}_i=1^q and Z_t(x). As Z_0(x) is the common process shared by the sources, the covariance matrix blocks between source f_i and the other outputs are zero only when the scale parameters in g_0i(x) and g_it(x) are zero simultaneously. Thus, the marginalized covariance matrix C_+ in Theorem 1 becomes:
C_+=
[ C_h+1,h+1 ⋯ C_h+1,q C_h+1,t; ⋮ ⋱ ⋮ ⋮; C_h+1,q^T ⋯ C_q,q C_q,t; C_h+1,t^T ⋯ C_q,t^T C_t,t ].
The difference from MGCP-R is that the covariance among the remaining sources {f_i}_i=h+1^q can be modeled. This structure is indeed more comprehensive, but at the cost of half again as many parameters as MGCP-R. The cost increases further if more latent processes are used to model the correlation among sources.
To shrink the kernels g_0i(x) and g_it(x) simultaneously, a group-L1 penalty is used, and the penalized log-likelihood function is:
max_θ L_ℙ(θ| y)= L(θ| y)-γ∑_i=1^q √(α_0i^2+α_it^2).
§ INFLUENCE OF THE TUNING PARAMETER
To test the influence of the tuning parameter γ in our model, we conduct the following experiment. Based on the same dataset as in the 1D example of simulation case I, we construct the MGCP-R model with only the sources f_1 and f_2, and let γ vary from 0 to 10 in steps of 1. Note that MGCP is equal to the model with γ=0. The boxplot of the MAE with respect to different values of γ is shown in Fig. <ref>, and the estimated values of α_1t, α_2t in one repetition are presented in Fig. <ref>. It can be seen that as γ increases, the source f_2 is excluded from the prediction of the target, leading to an increased prediction error. In practice, cross-validation can be used to select an optimal tuning parameter.
|
http://arxiv.org/abs/2409.03329v1 | 20240905080612 | Stellar Atmospheres | [
"Joachim Puls",
"Artemio Herrero",
"Carlos Allende Prieto"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.IM"
] | |
http://arxiv.org/abs/2409.02686v1 | 20240904131709 | Deconfounded Causality-aware Parameter-Efficient Fine-Tuning for Problem-Solving Improvement of LLMs | [
"Ruoyu Wang",
"Xiaoxuan Li",
"Lina Yao"
] | cs.CL | [
"cs.CL",
"cs.AI",
"cs.LG"
] |
University of New South Wales Commonwealth Scientific and Industrial Research Organisation, Australia
Deconfounded Causality-aware Parameter-Efficient Fine-Tuning for Problem-Solving Improvement of LLMs
Ruoyu Wang 1 Xiaoxuan Li 1 Lina Yao 1,2
September 9, 2024
====================================================================================================
§ ABSTRACT
Large Language Models (LLMs) have demonstrated remarkable efficiency in tackling various tasks based on human instructions, but recent studies reveal that these models often fail to achieve satisfactory results on questions involving reasoning, such as mathematics or physics questions. This phenomenon is usually attributed to the uncertainty regarding whether these models could genuinely comprehend the knowledge embedded in the text or merely learn to replicate the token distribution without a true understanding of the content. In this paper, we delve into this problem and aim to enhance the reasoning capabilities of LLMs. First, we investigate if the model has genuine reasoning capabilities by visualizing the text generation process at the attention and representation level. Then, we formulate the reasoning process of LLMs into a causal framework, which provides a formal explanation of the problems we observe in the visualization. Finally, building upon this causal framework, we propose Deconfounded Causal Adaptation (DCA), a novel parameter-efficient fine-tuning (PEFT) method to enhance the model's reasoning capabilities by encouraging the model to extract the general problem-solving skills and apply these skills to different questions. Experiments show that our method outperforms the baseline consistently across multiple benchmarks, and with only 1.2M tunable parameters, we achieve better or comparable results to other fine-tuning methods. This demonstrates the effectiveness and efficiency of our method in improving the overall accuracy and reliability of LLMs.
§ INTRODUCTION
Recent years have witnessed remarkable progress in Large Language Models (LLMs) <cit.>, especially instruction-following models such as ChatGPT and GPT-4 <cit.>. Numerous studies have demonstrated that these models exhibit strong capabilities across a wide range of tasks. However, despite their effectiveness, existing work <cit.> shows that they perform poorly on out-of-distribution tasks, so fine-tuning on specific tasks and datasets is required to achieve satisfactory results.
Nevertheless, fine-tuning large-scale LLMs in full is often prohibitively costly, thus many Parameter-Efficient Fine-Tuning (PEFT) methods have been proposed in recent years, which transform a non-prompt-following model into a prompt-following model by injecting a small number of extra model parameters (Figure <ref>), thereby greatly decreasing the computational and storage costs. Recent State-of-the-Art PEFT techniques achieve performance comparable to that of full fine-tuning <cit.>.
While these prompt-following models and fine-tuning methods have been proven effective in generating responses based on human instructions, there remains uncertainty regarding whether these models have genuinely acquired knowledge from the text or merely learned the distribution of word tokens without true comprehension. <cit.> claimed that scaling up language models could significantly enhance their performance, which is usually seen as evidence that LLMs can acquire knowledge when they are sufficiently large. However, <cit.> claims that emergent abilities only appear for specific metrics, and <cit.> suggests that these models do not possess any causal reasoning abilities.
Many discussions have been raised regarding this issue, yet the answer remains inconclusive. Besides, most of these discussions focus on GPT models, and the issue is rarely addressed in the context of LLM fine-tuning. Therefore, we investigate this issue in the context of LLM fine-tuning and propose a novel Parameter-Efficient Fine-Tuning (PEFT) method based on causal inference techniques to improve the reasoning capabilities of the models. In particular, we first investigate whether the model has genuine reasoning capabilities by visualizing the reasoning process at the attention and representation level. Then, we formulate the reasoning process of LLMs into a causal framework, which provides a formal explanation of the problems we observe in the visualization. Finally, we propose Deconfounded Causal Adaptation (DCA), a novel fine-tuning method to improve the model's reasoning capability, and experimentally show the effectiveness and efficiency of our method. The contribution of our paper is three-fold:
* We investigate the text generation process of an instruction-following model by visualization in the level of attention and representation, and present empirical evidence that the model lacks genuine causal reasoning capabilities;
* We formulate the reasoning process of LLMs in a causal framework, formally explaining the reasons for the observed failure cases in the visualization;
* We propose Deconfounded Causal Adaptation (DCA), a novel fine-tuning method to improve the reasoning capability of LLMs, and experimentally demonstrate the effectiveness of our method, which achieves strong performance with only 1.2 Million tunable parameters.
§ PRELIMINARY
§.§ LLAMA-Adapter
LLaMA-Adapter <cit.> is a lightweight adaption method to fine-tune LLaMA into an instruction-following model, which has demonstrated the capability to generate high-quality responses. We conducted our study and built our method based on LLaMA-Adapter due to its effectiveness and efficiency.
The architecture of LLaMA-Adapter is illustrated in Figure <ref>. For each of the topmost L Transformer layers of LLaMA, an adaption prompt T_l∈ℝ^M × C is concatenated to the original prompt P_l∈ℝ^K × C along the token dimension:
[ P_l; T_l] ∈ℝ^(K+M) × C
where M denotes the length of the adapter to be concatenated, K denotes the original prompt length for each transformer layer, and C denotes the feature dimension of LLaMA’s transformer. This concatenation operation is applied to the corresponding dimension in Key and Value in the self-attention mechanism.
Further, a zero-init attention mechanism with zero gating is proposed to improve training by injecting the new instructional cues into LLaMA. While calculating the attention score, the softmax function is applied independently to the two components in Equation <ref>, and the concatenated term is multiplied by a gating factor g_l, as illustrated in Equation <ref> and Figure <ref>.
S_l^g = [ Softmax(S_l^K); Softmax(S_l^M) · g_l]^T
We highlight here the parts of the LLaMA-Adapter architecture that are closely related to our method, and direct interested readers to <cit.> for comprehensive details. A minimal sketch of the gated attention in Equation <ref> is given below.
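In this sketch all dimensions are placeholders, and the 1/√C score scaling is a standard assumption not spelled out above:

```python
import torch
import torch.nn.functional as F

C, K, M = 64, 16, 10                 # feature dim, prompt length, adapter length
q = torch.randn(1, C)                # query of the current token (one head)
keys = torch.randn(K + M, C)         # [prompt keys; adaption-prompt keys]
g = torch.zeros(1)                   # zero-initialized, learnable gating factor

scores = (q @ keys.T) / C ** 0.5     # raw attention scores
s_K, s_M = scores[:, :K], scores[:, K:]
# Independent softmax over the two segments; only the adapter part is gated,
# so at initialization (g = 0) the adapter does not disturb the base model.
attn = torch.cat([F.softmax(s_K, dim=-1), F.softmax(s_M, dim=-1) * g], dim=-1)
print(attn.shape)                    # torch.Size([1, 26])
```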
§.§ Causal Inference
In the domain of causality <cit.>, causal relationships are usually denoted by a Directed Acyclic Graph (DAG). For example, in Figure <ref>, X → Z denotes that X is a direct cause of Z. There are three basic building blocks in a causal graph: the chain, the fork, and the collider. A chain is the case where one element causally influences another, which in turn causally influences a third element, such as X → Z → Y in Figure <ref>. A fork is the case where one element causally influences two other elements, such as X ← C → Y in Figure <ref>. A collider is the case where two elements causally influence a third element, such as C → Y ← Z in Figure <ref>.
Confounder If a variable is the common cause of two other variables, it is called a confounder. A confounder induces spurious correlations between the two variables, thus disturbing the recognition of the causal effect between them. For example, in Figure <ref>, C is a confounder between X and Y. The association between X and Y includes the spurious correlation created by the confounder C (X ← C → Y), which is non-causal, and the goal of causal inference is to deconfound the spurious correlations so that the true causal relationship between X and Y (X → Z → Y) can be measured.
Intervention In order to measure the causal effect between X and Y, we need to prevent the association from flowing through the fork X ← C → Y by blocking the path C → X. To this end, we force the variable X = x regardless of the value of C. In that case, C no longer affects the value of X, and thus the path C → X is blocked. This process is called intervention in causal inference and is denoted as do(X=x) (Figure <ref>). In contrast to P(Y|X), which comprises both the causal association and the spurious correlations caused by the confounder, P(Y|do(X)) allows us to measure the genuine causal effect between X and Y.
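The following toy simulation (our own illustration, not from the cited literature) contrasts P(Y|X) with P(Y|do(X)) on the graph of Figure <ref>: the observational regression slope mixes the causal path with the backdoor path through C, while intervening on X recovers the causal effect:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Chain + fork: X -> Z -> Y and X <- C -> Y (C confounds X and Y).
C = rng.normal(size=n)
X_obs = 0.8 * C + rng.normal(size=n)          # observational regime
Z = 1.5 * X_obs + rng.normal(size=n)
Y_obs = Z + 2.0 * C + rng.normal(size=n)

# Observational slope mixes the causal path with the backdoor via C.
print(np.polyfit(X_obs, Y_obs, 1)[0])         # ~2.48, not the causal 1.5

# Intervention do(X = x): X is set regardless of C, blocking C -> X.
X_do = rng.normal(size=n)
Z_do = 1.5 * X_do + rng.normal(size=n)
Y_do = Z_do + 2.0 * C + rng.normal(size=n)
print(np.polyfit(X_do, Y_do, 1)[0])           # ~1.5, the causal effect
```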
§ OUR METHOD
§.§ Investigation and Motivation
As discussed in Section <ref>, we aim to investigate whether prompt-following models have genuine causal reasoning capabilities. To this end, we conduct the following experiments. Since models such as ChatGPT and GPT-4 are not available in open-source form, we conduct our study using LLaMA-Adapter <cit.> to gain access to the attention values and representations at each layer.
First, we fine-tune the LLaMA 7B model with LLaMA-Adapter using the Letter Concatenation dataset, which will be introduced in Section <ref>. Then, we test the model with the two prompts below. The only difference between these two prompts lies in the string within the quotation marks, yet the model answered Prompt A correctly but failed on Prompt B.
Prompt A: Take the second last letters of the words in “GALLEGOS MORAN” and concatenate them;
Prompt B: Take the second last letters of the words in “DAVENPORT MAGANA” and concatenate them.
To explore the cause of the model's failure on Prompt B, we visualize the attention values in the text generation process by adapting BertViz <cit.>, and conduct a thorough comparison between the two test cases on the attention heat map of each attention head across all transformer layers. Consequently, we found that the model's failure on Prompt B can be attributed to the malfunctioning of some particular adapter structures.
Figures <ref>-<ref> provide an example of such malfunctioning structures, where we present the attention values of the sixth element in the adapter (adap_6), located in the 32nd attention head of the last transformer layer of LLaMA-Adapter. We observed that when the model correctly predicts the answer (Figure <ref>), adap_6 tends to focus on the question rather than the value of the string. However, in Figure <ref>, where the model failed to provide the correct answer, it exhibits a focus on a portion of the string, such as the highlighted tokens “AG” and “AN”. Similar patterns can also be observed in many other cases. Therefore, we empirically conclude that such malfunctioning units are the root cause of the mistake the model made on Prompt B.
In other words, simply replacing the string within the quotation marks significantly affects the thinking process of the model. This behaviour starkly contrasts with how humans solve such questions. From a human perspective, Prompt A and Prompt B are nearly identical: if we understand how to solve one of these problems, we inherently possess the method to solve all similar questions. This is because humans understand the world through causal relationships, enabling us to recognize and comprehend the underlying rationales. In contrast, LLMs are built on statistical associations, leading to a deficiency in their capacity to comprehend the question and perform causal reasoning.
Hence, our empirical findings suggest a deficiency in the model's comprehension of the task, as mere changes of the string value influence the attention mechanism's behaviour. These observations motivate us to enhance the reasoning abilities of these models. Therefore, we introduce our method to improve response quality by fostering the model's capability for causal reasoning. Following this idea, we first formulate the reasoning process of LLMs into a causal framework in Section <ref>, and then propose our causal fine-tuning method in Section <ref>.
§.§ Method Specification
We formulate the reasoning process of LLMs into a causal framework, as illustrated in Figure <ref>. In this framework, X denotes the encoded feature of the prompt, K denotes the relevant knowledge to solve the problem provided by the LLM, and Y denotes the LLM's response to the query.
LLM → X When a prompt is presented to the LLM, it encodes the prompt into feature X. Therefore, LLM is the direct cause of X.
LLM → K ← X Once the prompt is encoded, the LLM offers the relevant knowledge K required to solve the problem in X. Therefore, both the LLM and X are direct causes of K.
K → Y ← X The knowledge K encompasses the method on how to solve the problem described in X, while X contains the question-specific information, such as the values involved in the problem. So both X and K are a cause of Y.
As demonstrated in Section <ref>, the prompt feature X comprises two independent semantics, one encompasses general problem-solving information, and the other one contains problem-specific information.
Taking this into consideration, we introduce two additional elements into the graph, namely the general problem-solving information X_G and the problem-specific information X_S. Both elements are derived from X: X_G serves as a cause of the problem-solving knowledge K, and X_S acts as a mediator between X and Y.
In this framework, X_G and X_S should be strictly independent because it's common sense that the problem does not affect the problem-solving skill set. For instance, in the letter concatenation problems, the value of the string within the quotation marks should be independent of the method we use to locate, fetch and concatenate the desired characters.
However, based on the causal inference theory introduced in Section <ref>, the independence between X_G and X_S is not guaranteed. Although there are no direct causal relationships between the two elements, X acts as a confounder between X_G and X_S and thus creates spurious associations between them. This explains the phenomenon we observed in Figure <ref>-<ref>, where altering the value of X_S (the string within the quotation marks) affects the reasoning process X_G (the functionality of adap_6).
Therefore, to deconfound the spurious association between X_G and X_S, we perform an intervention on X_G to block the association from flowing through the path X_G← X → X_S, as demonstrated in Figure <ref>. In that case, changing X_S will no longer affect the reasoning process of X_G.
§.§ Implementation of Causal Intervention
In this section, we introduce our method to implement the intervention on X_G, as illustrated in Figure <ref>. First, we assume that the general problem-solving information X_G and the problem-specific information X_S can be identified by comparison across samples in a dataset, i.e., the differences between data samples are problem-specific, and thus belong to X_S, and the general problem-solving knowledge, denoted as X_G, is common across all samples. For instance, in the example given in Section <ref>, X_G contains the method of fetching the desired characters and performing concatenation, and X_S contains the order of the characters to be fetched and from which string are these characters to be selected.
With this assumption, performing the intervention do(X_G) is equivalent to holding X_G invariant across all data samples, so that it maintains the general problem-solving information consistently while X_S changes. For example, we aim to hold adap_6 invariant across Figure <ref> and Figure <ref>, to prevent it from absorbing information from X_S, such as the tokens “AG” and “AN” in Figure <ref>.
Thus, we introduce a causal constraint into the training process to encourage X_G to remain invariant across all data samples. Mathematically, we penalize large values of the variance of X_G by introducing the regularization term in Equation <ref>:
min_θℒ_CE + αℒ_causal
ℒ_causal = 𝔼_l ∈ L'[ Var(X_G) ]
where ℒ_CE is the cross-entropy loss used to train the token prediction accuracy, and α is the weight of our causal regularization term. We apply this causal regularizer to the topmost L' transformer layers, so we take the expectation over these layers, where L' ≤ L is a tunable hyper-parameter. A sketch of this objective is given below.
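The sketch assumes X_G has already been extracted for each sample in a batch; tensor shapes and the cross-entropy placeholder are illustrative:

```python
import torch

def causal_loss(xg_per_layer):
    # xg_per_layer: list of L' tensors of shape [batch, H, C], holding the
    # X_G estimate of each of the topmost L' layers for every sample.
    # Penalizing the variance of X_G across the batch encourages the
    # general problem-solving component to stay invariant over samples.
    return torch.stack([xg.var(dim=0).mean() for xg in xg_per_layer]).mean()

L_prime, batch, H, C = 3, 4, 2, 64            # illustrative sizes
xg = [torch.randn(batch, H, C) for _ in range(L_prime)]
ce_loss = torch.tensor(2.3)                   # stand-in for token cross-entropy
alpha = 1.0
total_loss = ce_loss + alpha * causal_loss(xg)
print(total_loss)
```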
In order to estimate X_G in each of the topmost L' layers in Equation <ref>, we divide the concatenated adapter T_l into two separate pieces: T_l,1 with length H, and T_l,2 with length M-H. Therefore, we rewrite Equation <ref> as:
[ P_l;T_l,1; T_l,2] ∈ℝ^(K+H+(M-H)) × C
Similar to the vanilla LLaMA-Adapter, this affects the dimension setting of the Key and Value in the self-attention module. Therefore, we rewrite these two modules as Equation <ref> and Equation <ref>.
K_l = [ K_vanilla; K_adap1; K_adap2]
V_l = [ V_vanilla; V_adap1; V_adap2]
Then, instead of applying the softmax function to the three components independently, we first apply the softmax function to the two original components and multiply by the gating module introduced in the vanilla LLaMA-Adapter, and then separate the score matrix into three pieces. Therefore, we have Equation <ref>.
S_l^g = [ S_vanilla; S_adap1; S_adap2]
These operations divide the adapter architecture into two segments, which we treat as X_G and X_S respectively, enabling us to impose distinct constraints on each of them. In particular, we treat T_l,1 with length H as the section controlling the general problem-solving information X_G. Therefore, X_G can be estimated by Equation <ref>.
X_G≈ S_adap1· V_adap1
Finally, we aggregate this quantity in each of the topmost L' layers and take the expectation to form the causal regularizer introduced in Equation <ref>. The architecture of our method is illustrated in Figure <ref>, where the modules involved in the calculation of ℒ_causal are coloured in dark red. A minimal sketch of this estimation is given below.
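The sketch shows, for a single query token and a single head, how the gated attention scores are split into the three segments and how X_G is estimated; dimensions and the 1/√C scaling are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

K, H, M, C = 16, 2, 10, 64           # prompt len, X_G segment, adapter len, dim
q = torch.randn(1, C)
keys = torch.randn(K + M, C)
vals = torch.randn(K + M, C)
g = torch.zeros(1)                   # zero-init gating, as in LLaMA-Adapter

scores = (q @ keys.T) / C ** 0.5
gated = torch.cat([F.softmax(scores[:, :K], dim=-1),
                   F.softmax(scores[:, K:], dim=-1) * g], dim=-1)
# Split the gated scores into the three segments S_vanilla, S_adap1, S_adap2 ...
s_vanilla, s_adap1, s_adap2 = gated[:, :K], gated[:, K:K + H], gated[:, K + H:]
# ... and estimate the general component as X_G ~ S_adap1 . V_adap1.
x_g = s_adap1 @ vals[K:K + H]        # shape [1, C]
print(x_g.shape)
```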
§ EXPERIMENT
§.§ Experimental Settings
We build our method by fine-tuning the LLaMA 7B model <cit.>, thus all parameters related to dimensions and layers remain unchanged: there are 32 transformer layers, each with 32 attention heads; the feature dimension is 128 per attention head, giving a total feature dimension of 4096. We train the model with a maximum sequence length of 256 and use AdamW for optimization with a learning rate of 1e-3. All models are fine-tuned for 5 epochs with a batch size of 4 for a fair comparison.
In terms of the parameters introduced by the vanilla LLaMA-Adapter, we set L=20 and M=10, which means we fine-tune the top 20 transformer layers by appending an adaption prompt of length 10 to each of them. For the parameters H and α introduced by our method, we set H=2 and α=1 in all experiments. The parameter L' is data-dependent: we use 20 for Letter Concatenation, 10 for Date Understanding, 3 for AddSub and Math10k, and 1 for Math401. All other settings, if not specified here, remain the same as in <cit.>. These settings are summarized below.
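For reference, the settings stated above collected in one place (a convenience summary, not code from our implementation; the dataset keys are our own shorthand):

```python
config = {
    "n_layers": 32, "n_heads": 32, "head_dim": 128, "feat_dim": 4096,
    "max_seq_len": 256, "optimizer": "AdamW", "lr": 1e-3,
    "epochs": 5, "batch_size": 4,
    "L": 20, "M": 10,        # adapted layers / adaption-prompt length
    "H": 2, "alpha": 1.0,    # X_G segment length / causal-loss weight (ours)
    "L_prime": {"LConcat": 20, "DateUnderstanding": 10,
                "AddSub": 3, "Math10k": 3, "Math401": 1},
}
```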
§.§ Tasks for Evaluation
We evaluate the performance of our method by three types of reasoning tasks:
Symbolic Reasoning We construct a more challenging version of the last-letter concatenation problem in <cit.>, because models fine-tuned on the original version can solve it almost perfectly. Therefore, we ask the model to perform second-last-letter concatenation, such as Take the second last letters of the words in “Lady Gaga” and concatenate them.
Commonsense Reasoning We test the models with Date Understanding data <cit.>, where each data sample asks a multiple-choice question such as If today is Jan 1, 2023, what day is tomorrow in MM/DD/YYYY?
Arithmetic Reasoning We test the models on three datasets: Math401 <cit.>, which comprises basic arithmetic questions such as 1+2=?, and AddSub <cit.> and Math10k <cit.>, both of which comprise math word questions such as Tom found 7 seashells but 4 were broken. How many unbroken seashells did Tom find?
§.§ Baselines and Comparison Methods
We compare our method with other methods from three perspectives to conduct a comprehensive comparison:
1) We compare our method with the vanilla LLaMA-Adapter <cit.>. Since we build our method on LLaMA-Adapter, this comparison allows us to understand the direct impact of implementing our method. All common settings between the two methods, such as parameters, are kept the same to ensure a fair comparison. The results of this comparison are presented in the bottom block of Table <ref>, and we highlight the margin achieved by our method in green.
2) We compare our method with the other parameter-efficient fine-tuning (PEFT) methods, as listed in the middle block of Table <ref>. We apply these methods on LLaMA 7B, and the results are obtained with the library and hyper-parameters provided by <cit.>. We present the results and the number of learnable parameters allowing us to compare our method with the baseline methods in terms of both effectiveness and efficiency.
3) We compare our method with several pre-trained prompt-following models of size 7B, as listed in the top block of Table <ref>. These models do not lie in the domain of PEFT and thus are not directly comparable to our method; they are either obtained by full fine-tuning or pre-trained with massive conversational data. We compare our method with these models to investigate their performance on the reasoning tasks and to evaluate whether task-specific fine-tuning is necessary to achieve satisfactory results.
§.§ Overall Results
The results are presented in Table <ref>, where the numbers denote the accuracies the methods achieve on each dataset. While comparing our method with the three types of baselines outlined above, our findings also fall into three aspects:
1) Compared with LLaMA-Adapter: Our method consistently outperforms LLaMA-Adapter by a considerable margin on all datasets, as highlighted in green in Table <ref>. Since all the common settings of the two methods remain the same, the results directly demonstrate the impact of our causal method.
2) Compared with the other PEFT methods: We find that while the vanilla LLaMA-Adapter does not always outperform the baseline methods, our method achieves either the highest or the second-highest score across all datasets. Even though a few methods may perform better than ours on particular datasets, it is worth noting that our method has only 1.2M learnable parameters, the fewest among all methods. In summary, our method achieves better or comparable results to the other PEFT methods with far fewer learnable parameters.
3) Compared with pre-trained models: We find that the performance of the pre-trained models is generally not satisfactory compared with the PEFT methods. While these models achieve fair performance on some datasets, they face significant challenges in the LConcat task. Notably, none of the pre-trained models under consideration could accurately respond to the Letter Concatenation questions. To ensure this phenomenon is not due to bias in our prompt, we endeavoured to rephrase the questions in LConcat; however, the models consistently exhibited an inability to comprehend the prompts and frequently provided irrelevant or meaningless responses. We speculate that this is due to the insufficient inclusion of training data of this specific nature during the models' fine-tuning phases.
Summary Our experiments suggest that fine-tuning on specific tasks is necessary to achieve satisfactory results. And, among the Parameter-Efficient Fine-Tuning methods, our method achieves better or comparable results with much less learnable parameters and computational resources.
§.§ Effects of New Parameters
To further investigate the mechanism of our method, we study the impact of parameters introduced by our method, namely, the length H of adaption prompts to be treated as X_G, the weight α of the regularization term ℒ_causal, and the number of layers L' to be used to calculate ℒ_causal.
Choice of H and α We visualize the effects of H and α on the Letter Concatenation dataset in Figures <ref>-<ref>, where the x-axis denotes the value of the parameter and the y-axis denotes the accuracy obtained by the model. A similar trend can be observed in both charts: increasing the value of H or α can improve the performance of the model, but excessive values are detrimental. This aligns with our intuition. For H, if a substantial fraction of the adapter remains fixed as X_G, only a limited part of the adapter is left to address X_S, which compromises its efficacy in managing problem-specific information. For α, if a large weight is employed for ℒ_causal, the module handling X_G may remain constant and fail to encode any information.
Choice of L' We find the optimal choice of L' to be data-dependent. On datasets like Letter Concatenation, where all prompts follow the same format, a larger L' is beneficial. In contrast, on datasets like AddSub, where the questions are not necessarily in the same template, a smaller L' is preferable. This is intuitively reasonable: for datasets where the prompts are close enough in the first place, encouraging the model to extract X_G from the bottom layers grants more control over the reasoning process; for datasets where the prompts are not sufficiently close, X_G can only be extracted and controlled once the representations have been aggregated to a certain level, and a large L' would limit the model's potential for aggregating high-level information.
§.§ Further Discussions
Applicable scenarios
We illustrated the motivation and idea of our method in Section <ref>. However, it is worth noting that our method is not limited to questions sharing the same pattern; prompts in different formats also benefit from it. As demonstrated in Section <ref>, our method benefits a wide range of reasoning tasks with various datasets. This is because we encourage the model to extract the “approach” to solving problems. In other words, as long as a prompt involves reasoning, there will be some problem-solving skills (X_G), and our method is applicable. For example, in date understanding and math word questions, where the prompts vary significantly, our method still benefits performance, as illustrated in Table <ref>, because we encourage the model to extract high-level knowledge, such as the meaning of “tomorrow” or “end of the month”, or math operations such as “Add” and “Subtract”, and to keep these problem-solving skills invariant across all data samples. In contrast, our method does not apply to general Q&A questions, such as Tell me about Alpaca, because these questions do not require reasoning capabilities and there is no “approach” to answering them.
Few-shot experiments Few-shot prompt methods such as Chain-of-Thought (COT) <cit.> are known to be useful on large models like ChatGPT/GPT4, but they do not apply to PEFT methods, so we did not include these experiments in our paper. To elaborate, COT works well on ChatGPT/GPT4 because those models are fine-tuned on a massive amount of prompt-answer pairs with one-shot examples, enabling the model to utilize one-shot information effectively. In contrast, our method fine-tunes a non-prompt-following LLM (LLaMA) with task-specific data, aiming for improved performance on the task. Since the data does not contain any one-shot prompts, the model will not be able to utilize the one-shot information. In fact, our experiments reveal that COT is even harmful to the result in such cases.
Finetuning a prompt-following model We also conduct experiments applying our method to prompt-following models such as Alpaca. It achieves an accuracy of 75.3 on LConcat and 79.8 on the Date Understanding dataset, falling short of the results we achieved using the original non-prompt-following LLaMA. We speculate this is because such instruction-tuned LLMs (e.g., Alpaca/Vicuna) are themselves based on a foundation model such as LLaMA but have been fine-tuned with data that is not closely related to our downstream tasks, dropping information relevant to our tasks and thus harming performance. Therefore, we empirically conclude that it is better practice to fine-tune the foundation model rather than an existing instruction-following model.
§ RELATED WORKS
Reasoning in LLMs. Instruction-following LLMs have recently been employed on many tasks involving reasoning, including but not limited to mathematics, logical reasoning, and symbolic reasoning <cit.>. Many of these methods investigate LLMs' reasoning capabilities through their outputs using the Chain-of-Thought prompting strategy <cit.>. Apart from these, some works build thinking pipelines <cit.> to achieve the final goal step-by-step.
Causal Inference in Machine Learning.
Causal inference has been applied to many vision tasks in recent years, such as image recognition <cit.> and image generation <cit.>. These works first construct causal graphs to explain the task, then use causal inference methods to eliminate spurious associations and improve the performance of the models. Besides, causal inference techniques are also used in representation learning <cit.>.
§.§ Relationships with our method
Existing works typically discuss LLMs' reasoning abilities based on their inputs and outputs <cit.>. However, we argue that solving causality-related tasks or verbalizing a thinking process does not necessarily indicate genuine reasoning capability, because simply mimicking the token distribution could achieve equivalent outcomes. Our work, in contrast, discusses the reasoning capabilities of LLMs at the level of attention and representation, thus offering a novel perspective on this matter. Besides, the novelty of our method also lies in applying causality to LLM fine-tuning, which has rarely been discussed in earlier literature.
§ CONCLUSION
In this paper, we first investigated the reasoning capabilities of prompt-following LLMs by visualizing the attention values in the thinking process, and empirically suggested that these models lack genuine causal reasoning capabilities. Then, we formulated the reasoning process of LLMs in a causal inference framework to explain the issues observed in the visualization. Finally, we proposed Deconfounded Causal Adaptation (DCA), a causal fine-tuning method to improve the model's reasoning capability. Experiments show that our method effectively enhances the reasoning capabilities of the models and consistently outperforms baseline methods. Besides, we discussed the applicable scenarios of our method and thoroughly analyzed its effect under different settings.
|
http://arxiv.org/abs/2409.03729v1 | 20240905173510 | SR-CLD: spatially-resolved chord length distributions for statistical description, visualization, and alignment of non-uniform microstructures | [
"Sheila E. Whitman",
"Marat I. Latypov"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci"
] |
Sheila E. Whitman (Graduate Interdisciplinary Program in Applied Mathematics, University of Arizona, Tucson, AZ 85721, USA)
Marat I. Latypov (Graduate Interdisciplinary Program in Applied Mathematics and Department of Materials Science and Engineering, University of Arizona, Tucson, AZ 85721, USA; corresponding author, [email protected])
§ ABSTRACT
This study introduces the calculation of spatially-resolved chord length distribution (SR-CLD) as an efficient approach for quantifying and visualizing non-uniform microstructures in heterogeneous materials. SR-CLD enables detailed analysis of spatial variations of microstructure constituent sizes in different directions that are often overlooked with traditional descriptions. We present the calculation of SR-CLD using an efficient scan-line algorithm that counts pixels in constituents along pixel rows or columns of microstructure images for detailed, high-resolution SR-CLD maps. We demonstrate the application of SR-CLD in two case studies: one on synthetic polycrystalline microstructures with known and intentionally created uniform and gradient spatial distributions of grain size; and one on experimental images of two-phase microstructures of additively manufactured Ti alloys with significantly non-uniform spatial distributions of laths of one of the phases. Additionally, we show how SR-CLDs can enable automated, computationally efficient, and robust alignment of large sets of images for merging into accurate composite images of large microstructure areas.
Microstructure, Chord length distribution, Heterogeneous materials, Grain size.
§ INTRODUCTION
Process–microstructure–property relationships of materials serve as the cornerstone of materials science and engineering. Efficient materials design requires not only qualitative but also quantitative understanding of these relationships, which in turn requires a rigorous description of the material's microstructure. Such a description is not trivial, especially in structural metals and alloys that have a rich variety of microstructure constituents of interest at multiple length scales <cit.>. In many structural alloys, phases and grains are the constituents of special interest as they play a decisive role in a suite of engineering properties (e.g., stiffness, strength, fatigue, toughness) <cit.>.
Microstructures are typically described by the statistics of size metrics of constituents such as areas, equivalent diameters, and intercepts (chords). Areas are often considered in microstructure maps where individual constituents can be clearly isolated: e.g., electron back-scattered diffraction (EBSD) maps <cit.> or segmented optical/electron microscopy images. It is common practice to convert areas of constituents (especially grains) into equivalent diameters, e.g., diameters of circles of the same area as the constituent <cit.>. The equivalent diameter is often a more preferred geometric descriptor than the area even for irregularly shaped grains because it is intuitive and compatible with widely used property models, e.g., the Hall–Petch model relating the yield strength to the average grain diameter of polycrystalline metals and alloys <cit.>. For significantly non-equiaxed microstructures (e.g., in rolled alloys with elongated grains), equivalent ellipses can be considered instead of circles <cit.>. The ellipse representation allows analyzing distributions of major and minor diameters as well as aspect ratios and inclination angles of the major axes <cit.>, which provides insights into not only size but also, to some extent, morphology of the constituents and their geometric orientations.
The intercept, or chord, is another size metric used in statistical microstructure analysis for both equiaxed and non-equiaxed constituents. A chord is a line segment completely contained within a microstructure constituent (see l_i in <Ref>a). The advantage of the chord is that it can be defined and measured for constituents with arbitrarily complex shapes without the underlying approximation to a circle, ellipse, or any other idealized shape. Chords are further advantageous in the context of microstructure–property relationships because chord lengths are directly relevant to transport properties in heterogeneous materials <cit.> and properties dictated by free paths in the microstructure, e.g., slip resistance related to the dislocation free path between grain boundaries <cit.>.
In standardized practice, chords are sampled using test lines or other simple test objects (e.g., circles) <cit.>. To this end, one randomly overlays test lines (circles, or other objects) with the microstructure map and then identifies intersections with the boundaries of the constituents of interest. Chord lengths can be then estimated from the number of intersections per test line of a known length <cit.>. Upon sampling, the mean chord length value can be obtained and reported either directly or, in the case of grain size analysis, converted to an average size using standardized tables <cit.>. Some aspects of these standardized protocols of chord length analysis arise from the historical origins of manual measurements in non-digital micrographs.
The emergence of modern tools of image processing, computational statistics, and visualization, as well as a shift towards digital microstructure data obtained by most of the current characterization instruments give rise to not only automation of measurements but also new, more detailed approaches to microstructure analysis. First of all, for digital microstructures, chord lengths can be calculated directly, rather than estimated from a number of intersection points. Second, automated and direct calculation allows obtaining chord lengths from a large number of test lines overlaid with the microstructure, as opposed to sampling with a few random test lines. The advantage is that systematic sampling with numerous parallel lines allows resolving the chord lengths and their distributions in different directions. For example, Lehto et al. <cit.> presented a local grain size analysis method that calculates chord lengths in four directions (0°, 45°, 90°, and 135°) for each point in polycrystalline microstructures of welded steel. Latypov et al. <cit.> presented high-resolution angularly-resolved chord length distributions (CLDs) for EBSD grain maps of polycrystals. Turner et al. <cit.> developed a computational method of calculation of directional CLDs for 3D voxelized microstructures.
Literature inspection shows that standardized and commonly used protocols of microstructure analysis focus on statistical summaries of size metrics (e.g., mean values) that assume idealized shapes (e.g., equivalent diameters) or calculated using methods grounded to originally manual measurements (e.g., number of intercepts with random test lines). A shift towards digital microstructure quantification shows the potential for more systematic and granular analyses with automated computational approaches, as seen in advances in resolving the chord lengths and their distributions in different directions <cit.>. Yet, even with recent advances, the existing approaches overlook spatial variation of size distributions of the constituents, which is only adequate for materials with spatially uniform microstructures. In this context, there is a growing need for new methods that can account for spatial variations. The need comes from the emergence of new classes of materials with intentionally designed non-uniform microstructures (e.g., heterostructured <cit.>, architectured <cit.>, gradient <cit.>, or lithomimetic <cit.> materials) and the corresponding new (as well as conventional) processing methods (friction stir welding <cit.>, additive manufacturing <cit.>, severe plastic deformation <cit.>). Related work in this direction includes the calculation of moving averages of the grain size <cit.> or the second-phase particle thickness <cit.> along principal directions in the microstructure. Building on the prior work of moving averages and directionally resolved CLDs, we present a method of the calculation and visualization of high-resolution spatially-resolved chord length distribution (SR-CLD). Our method is developed to capture spatial variations of the constituent size distributions along selected directions in diverse types of microstructures. In <Ref>, we describe the details of our SR-CLD method and then demonstrate its use in case studies in <Ref>.
§ SPATIALLY-RESOLVED CHORD LENGTH DISTRIBUTIONS
For calculation of individual chord lengths, we adopt the scan-line method proposed by Turner et al. <cit.>. The method is suitable for digital microstructures defined on (pixel) grids, where microstructure constituents have pixel labels distinct from either boundaries or neighboring constituents. Given a grid of pixel values uniquely defining constituents of interest, we calculate horizontal (vertical) chord lengths by iterative pixel count for every row (column) of pixels on the microstructure grid. Consider the calculation of horizontal chord lengths in an example microstructure shown in <Ref>(a). For every row, starting from one end, we count pixels inside a microstructure constituent with the same label in a continuous stretch. While scanning pixels along the row, every time a new label or a boundary is encountered, we stop the pixel count, record the chord length (in pixels), and then restart the pixel count for the next constituent. We repeat the calculation for every row in the image and collect row-specific chord length datasets. For each dataset, we then bin the chord lengths and calculate row-specific (horizontal) distributions as shown with a couple of scan lines in <Ref>(b). If the microstructure map has K rows, the procedure would result in K sets of chord lengths and K CLDs. The calculation of chord lengths and their distributions in the vertical direction is identical to that for horizontal CLDs with the only difference being that pixels are scanned vertically along pixel columns instead of rows.
We represent row- or column-specific distributions (SR-CLDs) by probabilities, P_i^(k), of finding a chord length within a small interval centered at a discrete value, l_i, in a kth row or column of the microstructure calculated as follows:
P_i^(k) = N_i^(k) l_i / ∑_j=1^B N_j^(k) l_j,
where the index i enumerates bins from 1 to B, N_i denotes the number of chords within the interval of the ith bin with its center corresponding to the chord length l_i.
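For illustration, a minimal Python sketch of this row-wise computation is given below. It assumes (as a convention of this sketch, not necessarily of our released code) a 2D array of integer constituent labels in which boundary pixels carry the reserved label 0; bin edges are shared across rows so that the resulting probabilities are comparable along the spatial axis.

```python
import numpy as np

def row_chords(row):
    """Chord lengths (in pixels) of uninterrupted same-label runs in one pixel row."""
    chords, count, prev = [], 0, None
    for label in row:
        if label == prev and label != 0:
            count += 1                      # still inside the same constituent
        else:
            if count > 0:
                chords.append(count)        # record the finished chord
            count = 1 if label != 0 else 0  # restart (or skip boundary pixels)
        prev = label
    if count > 0:
        chords.append(count)
    return chords

def sr_cld(image, n_bins):
    """K x B array of row-wise CLD probabilities P_i^(k), per the equation above."""
    all_chords = np.concatenate([row_chords(r) for r in image])
    edges = np.histogram_bin_edges(all_chords, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    P = np.zeros((image.shape[0], n_bins))
    for k, r in enumerate(image):
        counts, _ = np.histogram(row_chords(r), bins=edges)
        weighted = counts * centers         # N_i^(k) * l_i
        if weighted.sum() > 0:
            P[k] = weighted / weighted.sum()
    return P
```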
The described calculation results in SR-CLDs numerically represented by two-dimensional probability arrays (matrices) of size K× B. For intuitive interpretation of SR-CLDs, we propose a visualization of SR-CLD arrays as heat maps, where one axis represents the spatial coordinate (along which the spatial variation of CLDs is captured), the other axis represents the chord length, and the color depicts the CLD probability. Such heat maps can be thought of as CLDs sequentially calculated across a microstructure, stacked together along a spatial axis with the probability axis represented by color (<Ref>(c)).
As seen in the simple example in <Ref> and in subsequent case studies, such visualization of SR-CLDs allows quick and intuitive visual assessment of the spatial uniformity (or lack thereof) in the microstructure in terms of constituent sizes.
Note that, if the chord lengths are calculated and aggregated into a single dataset from all pixel rows or columns, the calculation results in an overall CLD for a given direction, as considered in previous studies <cit.>. <Ref> illustrates the advantages of SR-CLDs compared to overall CLDs. Overall CLDs calculated in the horizontal direction for two synthetic microstructures are identical although the microstructures are clearly different, with one being non-uniform (<Ref>(a)). It is the consideration of chord lengths along individual rows/columns followed by row-/column-wise calculation of distributions that spatially resolves the CLD to capture the spatial variation of microstructure constituents as seen in <Ref>(c). Mathematically, the spatial resolution of CLDs is signified by the index k that enumerates rows for horizontal CLDs or columns for vertical CLDs. In this study, we discuss the calculation and analysis of SR-CLDs along two principal directions (horizontal and vertical); however, this method can be generalized to any direction in a microstructure.
The SR-CLD values in the numerical arrays and their corresponding heat map visualizations depend on the binning of chord lengths selected in the CLD calculation (<Ref>). The choice of the binning, and specifically the number of bins for calculating a distribution is not trivial; and much research has been dedicated on determining the optimal number of bins that convey a representative distribution of a given dataset <cit.>. Some common methods of determining the number of bins include the square root rule, Sturges' formula <cit.>, Scott's normal reference rule <cit.>, Freedman–Diaconis Rule <cit.>, Rice's rule <cit.>, and Doane's formula <cit.>. In this work, we explored all these methods and chose Doane's formula for SR-CLD calculations due to the large number of chords and skewness of the distribution present in most considered cases.
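For reference, a sketch of Doane's formula in the standard form we use is given below; NumPy exposes the same rule via bins='doane' in numpy.histogram_bin_edges.

```python
import numpy as np
from scipy.stats import skew

def doane_bins(chords):
    """Number of histogram bins suggested by Doane's formula for a 1D sample."""
    n = len(chords)
    g1 = skew(chords)                                      # sample skewness
    sigma_g1 = np.sqrt(6.0 * (n - 2) / ((n + 1) * (n + 3)))
    return int(np.ceil(1 + np.log2(n) + np.log2(1 + abs(g1) / sigma_g1)))
```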
§ CASE STUDIES
We demonstrate the application of our methodology in two case studies: (i) synthetic polycrystalline microstructures and (ii) experimental two-phase microstructures. The first case study serves as a proof of concept in which SR-CLD describes a known and intentionally created gradient in the grain size in synthetically generated polycrystals. The second case study analyzes non-uniform microstructures experimentally obtained in additively manufactured Ti alloys. Beyond these case studies, we present an additional application of SR-CLD for efficient and effective automated alignment of a large number of microstructure images.
§.§ Synthetic polycrystalline microstructures
To evaluate the SR-CLD approach on a microstructure with known non-uniform spatial distributions of the constituent size, we generated synthetic 2D polycrystals using the open-source software Neper <cit.>. We generated two representative polycrystals (<Ref>(a)): (i) a polycrystal with uniform grain size and (ii) a polycrystal with a monotonically decreasing grain size along the vertical direction (the y axis in <Ref>). To this end, we used different settings of grain seeding available in Neper: random for the uniform polycrystal and biased for the polycrystal with a grain size gradient. The biased seeding aimed for a 300% increase in grain size from top to bottom ends of the microstructure. Both polycrystals contained 1108 grains and similar overall grain size distribution and mean grain size (<Ref>(c,d)).
To quantitatively compare spatial variations in these polycrystals using the SR-CLD approach, we worked with binary images of the polycrystals exported from Neper. In the binary images of pixel size 284×786, grain interiors were represented with white and grain boundaries with black pixels. By counting uninterrupted stretches of white pixels in each row, we digitally measured horizontal chords and computed K=284 chord length distributions (CLDs) for all 284 pixel rows using <Ref> with B=39 bins, as determined from Doane's formula. <Ref>(b) displays the results of this SR-CLD calculation visualized as heat maps.
The SR-CLD clearly captures the gradient in the grain size when it is present: the SR-CLD for the gradient polycrystal shows a shift of the prevalent chord length from about 4 to 18 (<Ref>(b)). This trend is confirmed with the moving average chord length also shown in <Ref>(c) alongside the SR-CLD maps. Unlike the mean chord length however, the SR-CLD map additionally shows the variance in the chord lengths for each vertical location. The top part of the microstructure has small grains of consistent size, as seen from a high probability in the narrow range of chord lengths (red bands in the SR-CLD map, <Ref>(b)). On the other hand, the probability is lower and spread over a wider range of chord lengths for the bottom part of the gradient microstructure containing large grains with a greater variety of horizontal chord lengths. In contrast to the gradient polycrystal, the microstructure generated with random seeding is characterized by a consistent mean chord length (<Ref>(c)) and a SR-CLD with no trend: the chord length probability is in a consistent range centered around 9 (<Ref>(b)).
§.§ Two-phase microstructures of titanium alloys
The second case study demonstrates the application of SR-CLD on real microstructures to describe spatial variation of phase sizes. Specifically, we quantify the spatial variation of the α phase in two-phase microstructures of two dissimilar titanium alloys additively manufactured and experimentally characterized by Kennedy et al. <cit.>. The authors co-deposited Ti5553 and Ti64 alloys using a wire-arc additive manufacturing process, which results in spatial variations of the composition, microstructure, and thus properties (e.g., strength and damage tolerance). The spatial microstructure variation is primarily manifested in a variation of the lath size of the α phase. To demonstrate our approach for these materials, we processed the raw experimental images of the phases into the binary format consistent with the pixel count method, aligned the individual images into large composite images and calculated SR-CLD for the entire composite images.
Raw data and image pre-processing. Raw data published by Kennedy et al. <cit.> contain over 900 high-resolution scanning electron microscope images for two additively manufactured materials: Ti5553-on-Ti64 and Ti64-on-Ti5553. To segment the raw images, we first applied a Gaussian filter (with radius of 5px) and then applied a threshold optimally selected using Yen's method <cit.> to obtain binary images whose white pixels represent the α phase (laths) of interest. The binary images were passed through erosion and dilation filters to clean up the remaining segmentation noise <cit.>. Hundreds of the individual segmented images were then aligned and merged into large composite images: one for Ti5553-on-Ti64 and one for Ti64-on-Ti5553 samples (except a couple of stained images at the top of the analyzed regions).
SR-CLD calculation. We calculated SR-CLDs for both composite images following merging, despite their very large size (288688×3897 pixels each). Horizontal chord lengths were measured for the α phase by counting the corresponding white pixels in each pixel row. From the measured chord lengths, we estimated CLDs using <Ref> with B=55 bins for all 288688 pixel rows. While we obtained the SR-CLD maps for the entire composite images, they are so large that we can practically present the results only for microstructure subregions of approximately 70 along the y axis – the direction of the microstructure variation (<Ref>).
<Ref> shows two subregions of each processed microstructure and their corresponding SR-CLD maps and moving average chord length curves for the two studied materials.
These subregions were selected to represent a variety of microstructure transitions that were present in the samples. The first two subregions include monotonic transitions from fine to coarse α laths (<Ref>(a,b)), while the other two subregions feature zones of coarse α laths surrounded by fine α-lath microstructures (<Ref>(c,d)). Some of these microstructure transitions are captured by the moving average chord length: e.g., coarse zone in <Ref>(c,d). At the same time, the mean chord length curve has spurious peaks for the subregion shown in <Ref>(a), which can mislead to a conclusion of the presence of much larger α laths compared to the rest of the microstructure. SR-CLD maps present a richer description of these non-uniform microstructures. Serving as visual representations of the distributions rather than only the mean values, SR-CLD maps show not only most probable chord lengths for each zone but also their consistency and variance in chord lengths. The high probability of short chord lengths in the zones of fine α laths highlights the consistency of chords in a narrow range of lengths (<Ref>(a,b,d)). At the same time, the probability is distributed over a wide range of lengths for the zones with coarse laths (<Ref>(a,c,d)). Partially, the lack of clear SR-CLD peaks in zones with coarse α laths is associated with the fewer chords present in those zones. This is because the image of (approximately) constant width captures many fine laths and relatively few coarse laths. The SR-CLD maps conveniently indicate the insufficient number of chords for conclusive statistics with low probability spread over the entire range of chord lengths analyzed for these microstructure regions.
§.§ Alignment of experimental images for large microstructures
Composite images of large microstructure regions consist of multiple images of adjacent fields of view taken individually with a characterization instrument. In practice, seldom are images of the adjacent fields of view perfectly aligned. Analyses of large microstructures thus require an additional step of digital alignment followed by merging into large composite images, as was the case for the Ti images discussed in <Ref>. Manual and semi-manual methods (e.g., <cit.>) are impractical for alignment of more than a few individual images. For example, with methods that require any manual input, it would be challenging and extremely time-consuming to align and merge the more than 900 images of the Ti dataset into the two composite microstructure images. In this study, we found that SR-CLDs can be used for efficient, fully-automated, and effective alignment of numerous microstructure images.
We propose an SR-CLD-based method that aligns a pair of images by minimizing the difference between SR-CLDs calculated in the two images in horizontal or vertical direction in their overlapping region. We introduce scaled Euclidean distance as a quantitative metric of the difference between two SR-CLDs. For SR-CLDs, P_a and P_b, calculated for two images a and b with a vertical/horizontal overlap of K pixel rows/columns, our scaled Euclidean distance is expressed as:
d_K(P_a,P_b) = 1/K√(∑_k=0^K (P^(k)_a - P^(k)_b)^2),
Note that, compared to the conventional Euclidean distance between two high-dimensional vectors, the d_K quantity has an additional factor of 1/K, which normalizes the difference in two SR-CLDs by the amount of overlap between the two images, K. We introduce this factor because minimizing just the absolute value of the Euclidean distance would always favor a minimal overlap of two images, which does not necessarily correspond to the best alignment. Thus defined, the scaled Euclidean distance serves as a quantitative alignment “error” between two images in a given direction. The process of alignment can then be formulated as an optimization problem that seeks the optimal overlap, K_opt, that minimizes the alignment error, d_K:
K_opt = arg min_K d_K(P_a,P_b)
Since the SR-CLD calculation is computationally efficient (see <Ref>), K_opt can be found by iteratively adjusting the relative position of images by single-pixel shifts. Starting with an exaggerated overlap such that K ≫ K_opt, one of the images is shifted (pixel-wise) in the direction of separation (no overlap) of the two images (<Ref>(a)). The analysis of the alignment error calculated at different overlaps K during iterative shifting (<Ref>(c)) provides K_opt that minimizes d_K(P_a,P_b). This procedure of minimizing the alignment error is suitable for vertical, horizontal, and combined alignment. For combined alignment in both directions, the images can be aligned first in one direction followed by alignment in the other direction.
The alignment procedure described above can be used for alignment of very large series of images by repeating the steps of pairwise alignment and merging. We tested this methodology for automated alignment and merging of more than 900 images from the Ti dataset discussed above. First, the SR-CLDs were calculated for two images at a time in the vertical direction. Using these SR-CLDs, the images were aligned vertically by finding the optimal vertical overlap that minimizes the alignment error (<Ref>). Then the SR-CLDs in horizontal direction were calculated only for the overlapping rows of the two images, followed by minimizing their difference, d_K.
Once two images were aligned in both directions, they were merged into a single image. The procedure was repeated for the pairwise alignment of the merged image with the next individual image. We compared this SR-CLD approach with alignment by minimizing the Euclidean distance between pixel values, an approach previously considered in the literature <cit.>. We found that the SR-CLD approach leads to better alignment (<Ref>(b,d)) because the experimental images had significant differences in image contrast, which results in noise in pixel values in the overlapped regions, whether raw or after processing and segmentation (<Ref>(b–d)). Since SR-CLDs are statistical descriptions, they are less sensitive to raw image imperfections, differences in contrast due to surface lighting, and noise or errors in segmentation <cit.>, which results in more robust alignment (<Ref>(d)).
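A minimal sketch of this pairwise search for the vertical direction is shown below; P_a and P_b are the SR-CLD arrays of the upper and lower images, the candidate overlap K is swept by single-row shifts, and the released code may organize this differently.

```python
import numpy as np

def optimal_overlap(P_a, P_b, K_max):
    """Overlap K minimizing the scaled Euclidean distance d_K between the
    last K rows of P_a and the first K rows of P_b."""
    best_K, best_d = 1, np.inf
    for K in range(1, K_max + 1):
        d = np.sqrt(np.sum((P_a[-K:] - P_b[:K]) ** 2)) / K
        if d < best_d:
            best_K, best_d = K, d
    return best_K, best_d
```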
§ DISCUSSION
In this work, we introduced a new approach to analyzing and quantifying non-uniform microstructures. With two case studies, we demonstrated that SR-CLDs provide insights into spatial microstructure variations inaccessible with traditional methods of microstructure descriptions. For example, two clearly different polycrystalline microstructures studied in <Ref> had similar overall grain size distributions and mean grain sizes (<Ref>(a,d)). SR-CLD captured the differences in those microstructure both numerically and visually via SR-CLD maps (<Ref>(b)). Capturing spatial differences is important because microstructure description is a foundation for establishing process–microstructure–property relationships. We can expect that a microstructure with a significant gradient is a result of a particular process and will have properties different from those of a uniform microstructure. Yet, microstructure descriptions that focus on overall size distributions or mean values without spatial sensitivity will fail to reflect such differences. The SR-CLD approach that resolves spatial microstructure variations can therefore serve as a statistically rigorous microstructure description for quantitative process–microstructure–property relationships in materials with significantly non-uniform microstructures. For example, SR-CLDs or their reduced-order representations (e.g., from principal component analysis <cit.>) could be used as microstructure “features” for machine learning, as previously shown with traditional (non-spatially-resolved) CLDs <cit.>.
While our case studies demonstrated SR-CLDs for describing grains and phases, the presented approach is flexible for analyzing any other microstructure constituents in a wide variety of materials and data modalities. Since our chord length calculation is based on simple pixel counting, any microstructure map that contains constituent labels at pixels could be used for SR-CLD calculations. This includes electron back-scattered diffraction (EBSD) maps that contain phase IDs (for multiphase microstructures) and grain IDs at each EBSD “pixel”, which can be used for digital chord measurements for the SR-CLD approach. Segmented microstructure images are another wide class of microstructure data that can benefit from the presented SR-CLD calculations. An advantage of leveraging SR-CLDs for segmented images is that, CLDs are tolerant to segmentation errors inevitable in experimentally obtained microstructures <cit.>.
In addition to the description of non-uniform microstructures, we demonstrated the application of SR-CLD for image alignment (<Ref>). Using Euclidean distance between SR-CLD in the overlapped regions of image pairs, we successfully aligned and merged over 900 images into two large composite images. Similar to the microstructure description, this SR-CLD-based alignment is robust even in the presence of contrast and segmentation differences in the images (<Ref>(d)). Like microstructure description, alignment based on SR-CLD can be used for a rich variety of microstructures, microstructure constituents, and data modalities, not limited to images of phases from the scanning electron microscope considered in <Ref>.
Based on simple pixel counting, the presented approach of SR-CLD calculations is computationally efficient with minimum CPU and memory requirements. Calculation of SR-CLD for a typical high-resolution 12000×4000 image (shown in <Ref>) takes only about 23, and 6.3 for a very large (288688×3897) composite image, on an average consumer-grade laptop (MacBook Air M1 with 16 GB of RAM). Since the calculation of each location-specific CLD for a pixel row/column is independent, the SR-CLD calculation can be easily parallelized if needed, e.g., for extremely large microstructure maps. To facilitate adoption of the approach by the community, we have made Python code for SR-CLD calculations available on GitHub (link below).
§ CONCLUSION
In this paper, we presented calculation of spatially-resolved chord length distribution (SR-CLD) for statistical description of non-uniform microstructures. With two case studies, we demonstrated the application of our SR-CLD approach to different microstructures with grains and phases as microstructure constituents of interest. The results show that SR-CLDs capture spatial variations of constituent sizes that would be inaccessible with traditional microstructure descriptions focusing on overall size distributions or their moments. SR-CLD captures microstructure uniformity (or lack thereof) both visually (via proposed SR-CLD maps) and numerically and are therefore suitable both for intuitive visual assessment of the microstructure and for quantitative relationships between microstructure, properties, and processing. We further demonstrated that SR-CLDs can be used for robust alignment of numerous microstructure images into large composite images even in the presence of contrast and other differences in images that need to be aligned. With a simple pixel counting algorithm as a basis, SR-CLD calculations require very modest computational resources and can be therefore calculated for description or alignment of very large images on a typical laptop.
§ ACKNOWLEDGEMENTS
SEW acknowledges the support by the National Science Foundation (NSF) Graduate Research Fellowship Program under Grant No. DGE-2137419. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the NSF.
§ DATA AVAILABILITY
The codes for SR-CLD calculations and SR-CLD-based alignment are available at <https://github.com/materials-informatics-az/SR-CLD>.
|
http://arxiv.org/abs/2409.02313v1 | 20240903215613 | On the Benefits of Memory for Modeling Time-Dependent PDEs | [
"Ricardo Buitrago Ruiz",
"Tanya Marwah",
"Albert Gu",
"Andrej Risteski"
] | cs.LG | [
"cs.LG",
"cs.AI"
] |
On the Benefits of Memory for Modeling Time-Dependent PDEs
Ricardo Buitrago Ruiz, Tanya Marwah, Albert Gu, Andrej Risteski
September 9, 2024 – Version 1.0
===============================================================
§ ABSTRACT
Data-driven techniques have emerged as a promising alternative to traditional numerical methods for solving partial differential equations (PDEs). These techniques frequently offer a better trade-off between computational cost and accuracy for many PDE families of interest. For time-dependent PDEs, existing methodologies typically treat PDEs as Markovian systems, i.e., the evolution of the system only depends on the “current state”, and not the past states. However, distortion of the input signals — e.g., due to discretization or low-pass filtering — can render the evolution of the distorted signals non-Markovian. In this work, motivated by the Mori-Zwanzig theory of model reduction, we investigate the impact of architectures with memory for modeling PDEs: that is, when past states are explicitly used to predict the future. We introduce Memory Neural Operator (MemNO), a network based on the recent SSM architectures and the Fourier Neural Operator (FNO). We empirically demonstrate on a variety of PDE families of interest that when the input is given on a low-resolution grid, MemNO significantly outperforms the baselines without memory, achieving more than 6× less error on unseen PDEs. Via a combination of theory and experiments, we show that the effect of memory is particularly significant when the solution of the PDE has high-frequency Fourier components (e.g., low-viscosity fluid dynamics), and that it also increases robustness to observation noise.
§ INTRODUCTION
Time-dependent partial differential equations (PDEs)
are central to modeling
various scientific and physical phenomena,
necessitating the design of accurate and computationally efficient solvers.
Recently, data-driven neural network based approaches <cit.> have emerged as an attractive alternative to classic numerical solvers, such as finite element and finite difference methods <cit.>.
Classical approaches are computationally expensive in high dimension, and struggle with PDEs which are very sensitive to initial conditions. Learned approaches can often negotiate these difficulties better, at least for the family of PDEs they are trained on.
One example of a data-driven approach is learning a neural solution operator, which for a time-dependent PDE learns a time
evolution map that predicts the solution of the PDE for future time steps.
The operators are frequently autoregressively parametrized, such that the network predicts future states based on previous ones <cit.>, and the number of past states the model is conditioned on serves as “memory” and is treated as
a tunable hyperparameter.
Recent works <cit.>
suggest that optimal performance across various PDE families
can be achieved by conditioning the models only on the immediate past
state—i.e., treating the system as Markovian—though this is in settings in which the training data is very high-resolution.
In many practical settings, we expect to only observe a part of the system. This could be due to limited resolution
of the measurement devices collecting the data, inherent observational errors in the system, or the prohibitive computational cost of generating high-quality synthetic data.
This can lead to significant information loss, particularly in systems like turbulent flows <cit.> or shock formation in fluid dynamics <cit.>,
where solutions change abruptly in space and time.
In such situations, classical results from dynamical systems (Mori-Zwanzig theory) suggest that the system becomes strongly non-Markovian.
More precisely, Mori-Zwanzig theory <cit.> is an ansatz to understand the evolution of a subspace of system (e.g., the top k Fourier components). Under certain conditions, this evolution can be divided into a Markovian term (the evolution of the chosen subspace under the PDE), a memory term (which is a weighted sum of the values of all previous iterates in the chosen subspace), and an “unobservable” term, which depends on the values of the initial conditions orthogonal to the chosen subspace.
Motivated by this, we first show that if the input data observes only a part of the PDE state (for example, a PDE defined on a low-resolution grid, or data containing only the top-k frequency components of a state), the effects of the unobservable part of the system can be captured as a memory over the observed variables at previous states. More precisely, the projection of the dynamics onto the observed subspace consists of a Markovian term plus a correction term that also lies in the observed subspace but is not Markovian: a memory term given by a weighted sum of all previous observed states, together with an unobserved residual. We also prove that, under certain conditions, the memory term can be arbitrarily large. The resulting dynamics of the observed variables are therefore not Markovian, necessitating some form of memory of past states.
In this paper, we study the effects of explicit
memory when deploying neural operators for time-dependent PDEs. By memory
we loosely mean a representation of the previous states
of a PDE.
We focus on PDE families that take the form ∂_t u(x,t) = ℒ[u](x, t),
where u(x, t): Ω× [0, T] →ℝ is a time-dependent function
defined over the domain Ω, and ℒ is a (possibly non-linear) operator.
This is a generic form of a time-dependent PDE system and contains many PDE families of interest in different application domains (e.g., heat diffusion, Navier-Stokes, Kuramoto-Sivashinsky, Black-Scholes, Schrödinger equation to name a few).
We introduce Memory Neural Operator (MemNO),
an architecture which combines Fourier neural operator (FNO) <cit.>
and
the S4 architecture <cit.>.
The MemNO architecture can be seen as an adaptation of the FNO <cit.>
architecture where the FNO layers model the spatial dynamics of the PDE
while the S4 layers <cit.>
maintain a compressed memory of the past states.
We choose S4 models over
recurrent architectures like LSTM <cit.>
due to superior performance in modeling long range dependencies <cit.>, ease of training,
and favorable memory and computational scaling with both state dimension
and sequence length.
Through our experiments we
show that for PDEs observed on low resolution grids and/or with observation noise,
MemNO outperforms its Markovian (memoryless) baselines—achieving 6× less
loss on unseen PDEs. Our contributions are as follows:
* We introduce MemNO, a Memory Neural Operator architecture which uses a combination of FNO layers and S4 layers to model the spatial and temporal dynamics of a PDE. The S4 layers explicitly model memory, by introducing a weighted average over the past (compressed) states.
* Even in relatively simple linear PDEs, we theoretically show the memory term can result in a solution that is (arbitrarily) closer to the correct solution, compared to the Markovian approximation — in particular when the operator describing the PDE “mixes” the observed and unobserved subspace.
* Across several families of one-dimensional and two-dimensional PDEs, we show
that when the input is supplied on a
low-resolution grid, or has additional observation noise, memory based architectures outperform the best performing FNO based baselines by a significant margin.
* Finally, we empirically show that this effect is more pronounced for PDEs whose solutions contain high-order frequency modes, and introduce a metric which indicates when memory-based models will have the most impact.
§ RELATED WORK
Data-driven neural solution operators <cit.>
have emerged as the dominant approach for
approximating
PDEs,
given their ability to model multiple families of PDEs at once,
and relatively fast computation at inference time.
Recently, many architectures have been proposed to improve the performance of neural
operators across multiple families of PDEs,
<cit.> designed
the Fourier Neural Operator (FNO), a resolution-invariant
architecture that uses convolution-based integral kernels
evaluated in Fourier space. <cit.> later
introduced Factorized FNO (FFNO) architecture, that builds upon and improves
the FNO architecture by adding separable spectral layers and residual connections.
Additionally, they perform extensive ablations indicating that the Markov
assumption—i.e., predicting a state only from its immediate prior state—is optimal
and outperforms models that use the history of past timesteps as input. <cit.> performed a similar study for long rollouts of the PDE solution and concluded that the optimal performance is indeed achieved under the Markovian assumption.
However, we show that when there is a loss of information in the observation of a PDE, a model that uses the history of past states outperforms its Markovian counterpart, often achieving 6× less error on unseen PDEs from the same family.
Our work is motivated by the Mori-Zwanzig formalism <cit.>
which shows that a partial observation of the current state of a system can be compensated for using the memory of past states.
Our work is also inspired by a previous study on the effects of memory
in modeling PDE dynamics by <cit.>. There, the authors draw parallels to the Mori-Zwanzig equations and use an LSTM <cit.> to model the dynamics of the top Fourier components of the time-dependent 1D Kuramoto-Sivashinsky and 2D shear flow equations, one single PDE at a time. However, in our work, we study the benefits of memory in the neural operator setting, i.e., we have a single model that learns the dynamics of an entire family of PDEs at once.
Furthermore, we use
the S4 state space model architecture <cit.> to model
the temporal dependencies, which in our experiments has better performance and is more stable than LSTMs.
§ PRELIMINARIES
In this section, we introduce several definitions, as well as background on the Mori-Zwanzig formalism as applied to our setting.
§.§ Partial Differential Equations (PDEs)
[Space of square integrable functions]
For integers d, V and an open set Ω⊂ℝ^d, we define L^2(Ω; ℝ^V) as the space of square integrable functions u: Ω→ℝ^V such that ‖ u ‖_L^2 < ∞, where ‖ u ‖_L^2 = (∫_Ω‖ u(x) ‖^2_2 dx )^1/2.
[Restriction]
Given a function u: Ω→ℝ^V and a subset A⊂Ω, we denote u _A as the restriction of u to the domain A, i.e. u_A: A →ℝ^V.
The general form the PDEs we consider in this paper will be as follows:
[Time-Dependent PDE]
For an open set Ω⊂ℝ^d and an interval [0,T]⊂ℝ, a Time-Dependent PDE is the following expression:
∂ u/∂ t (t,x) = ℒ[u](t,x), ∀ t ∈[0,T], x ∈Ω,
u(0,x) = u_0(x), ∀ x ∈Ω,
ℬ[u_∂Ω](t) = 0, ∀ t ∈ [0,T]
where ℒ: L^2(Ω; ℝ^V)→ L^2(Ω; ℝ^V) is a differential operator in x which is independent of time, u_0(x)∈ L^2(Ω; ℝ^V) and ℬ is an operator defined on the boundary of ∂Ω, commonly referred as the boundary condition.
Unless otherwise stated, both in the experiments and in the theory we will largely work with periodic boundary conditions:
[Periodic Boundary Conditions]
For Ω = [0,L]^d, we define the periodic boundary conditions as the condition:
u(x_1, ⋯, x_k-1, 0, x_k+1, ⋯ x_d) = u(x_1, ⋯, x_k-1, L, x_k+1, ⋯ x_d)
for all (x_1, ⋯, x_k-1, x_k+1, ⋯, x_d) ∈ [0,L]^d-1 and all k=1, ⋯, d.
Finally, we will frequently talk about a grid of a given resolution:
[Equispaced grid with resolution f]
Let Ω=[0,L]^d. An equispaced grid with resolution f in Ω is the following set 𝒮⊂ℝ^d:
𝒮 = { (i_1 L/f, ⋯, i_d L/f) | 0 ≤ i_k ≤ f-1 for 1≤ k ≤ d }.
We will also denote by |𝒮| the number of points in 𝒮.
§.§ Mori-Zwanzig
The Mori-Zwanzig formalism <cit.> deals with cases where an equation is known for a full system, yet only a part of it is observed. It leverages the knowledge of past observed states of a system to compensates for the loss of information that arises from the partial observation. In our paper, partial observation can refer to observing the solution at a discretized grid in space or only observing the Fourier modes up to a critical frequency. In particular, the Mori-Zwanzig formalism in the context of time-dependent PDEs is well-known in the Physics literature as the Nakajima–Zwanzig equation (<cit.>)
Now, we will apply the Nakajima–Zwanzig equation to our setting. Assume we have a PDE as in Definition <ref>. Let 𝒫: L ^2(Ω; ℝ^V) → L^2(Ω; ℝ^V) be a linear projection operator. We define 𝒬=I - 𝒫, where I is the identity operator. In our setting, for the PDE solution at timestep t u_t ∈ L ^2(Ω; ℝ^V), 𝒫[u_t] is the part of the solution that we observe and 𝒬[u_t] is the unobserved part. Thus, the initial information we receive for the system is 𝒫[u_0]. Applying 𝒫 and 𝒬 to Equation <ref> and using u = 𝒫[u] + 𝒬[u], we get:
∂/∂ t𝒫[u] (t,x) = 𝒫ℒ[u] (t,x) = 𝒫ℒ𝒫 [u] (t,x) + 𝒫ℒ𝒬 [u] (t,x)
∂/∂ t𝒬[u] (t,x) = 𝒬ℒ[u] (t,x) = 𝒬ℒ𝒫 [u] (t,x) + 𝒬ℒ𝒬 [u] (t,x)
Solving (<ref>) for 𝒬[u] yields 𝒬[u](t,x) = ∫_0^t exp(𝒬ℒ(t-s))𝒬ℒ𝒫[u](s,x) ds + e^𝒬ℒ t𝒬[u_0](x).
Plugging into <ref>, we obtain a Generalized Langevin Equation <cit.> for 𝒫[u]:
∂/∂ t𝒫[u] (t,x) = 𝒫ℒ𝒫 [u] (t,x) + 𝒫ℒ∫_0^t exp(𝒬ℒ(t-s))𝒬ℒ𝒫[u](s,x) ds + 𝒫ℒ e^𝒬ℒ t𝒬[u_0](x)
We will refer to the first summand on the right hand side of <ref> as the Markovian term because it only depends on 𝒫[u](t,x), the second summand as the memory term because it depends on 𝒫[u](s,x) for 0≤ s ≤ t, and the third summand as the unobserved residual as it depends on 𝒬[u_0] which is never observed.
We note that (<ref>) is exact, not an approximation, so it is equivalent to solving the full system. Typically, the term that is most difficult to compute is the exponential of the memory term, and thus several methods to approximate it have been proposed.
In the physics literature, the memory term has been approximated through a perturbation expansion of the exponential <cit.>, or by approximating the operator exp𝒬ℒ(t-s): L ^2(Ω; ℝ^V) → L ^2(Ω; ℝ^V) through operators defined in 𝒫[L ^2(Ω; ℝ^V)] <cit.>. In the machine learning literature, <cit.> develop the equations for the case when the operator 𝒫 kept only the top-k modes, and designed a hybrid approach where the memory term was approximated with an LSTM <cit.>, and then used as an additional input of a numerical solver. In this work, we treat the whole memory term as an operator ℳ: 𝒞([0,T],𝒫[L ^2(Ω; ℝ^V)]) →𝒫[L ^2(Ω; ℝ^V)][
Here 𝒞( A, B ) denotes the space of continuous functions u: A → B.
] to be learnt by a parametrized sequential layer of a Neural Operator.
§ OUR APPROACH
§.§ Training procedure
First, we describe the high-level training scaffolding for our method, namely the way the data is generated, and the loss we use.
Training data: Let u∈𝒞([0,T];L^2(Ω; ℝ^V)) be the solution of the PDE given by Definition <ref>. Let 𝒮 be an equispaced grid in Ω with resolution f, and let 𝒯 be another equispaced grid in [0,T] with N_t+1 points. Given u_0(x)_𝒮, our goal is to predict u(t,x)_𝒮 for t∈𝒯 using a Neural Operator.
Training loss: As it is standard, we proceed through empirical risk minimization on a dataset of trajectories. More specifically, given a loss function ℓ : (ℝ^|𝒮|,ℝ^|𝒮|)→ℝ, a dataset of training trajectories (u(t,x)^(i))_i=0^N, and parametrized maps 𝒢^Θ_t:ℝ^|𝒮|→ℝ^|𝒮| for t∈𝒯, we define:
Θ^* = arg min_Θ1/N∑_i=0^N-11/N_t∑_t=1^N_tℓ( u(t,x)^(i) _𝒮, 𝒢^Θ_t[ u_0(x)^(i) _𝒮] )
We then aim to find an adequate architecture choice such that 𝒢^Θ^* has low test error on unseen trajectories of the same PDE.
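For illustration, this objective can be optimized with an autoregressive rollout along the following lines (a PyTorch-style sketch with hypothetical names; details such as teacher forcing and gradient truncation are omitted):

```python
import torch

def rollout_loss(model, u0, targets, loss_fn):
    """Empirical risk over one trajectory: each u(t_i) is predicted from the
    history of the model's own previous predictions, starting from u_0."""
    history = [u0]                    # u0: (batch, grid), restricted to S
    total = 0.0
    for u_t in targets:               # targets: [u(t_1), ..., u(t_Nt)] on S
        pred = model(torch.stack(history, dim=1))  # model sees the full history
        total = total + loss_fn(u_t, pred)
        history.append(pred)
    return total / len(targets)
```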
§.§ The Architecture: Memory Neural Operator
In this section we describe Memory Neural Operator (MemNO), a Deep Learning framework to incorporate memory into Neural Operators.
Let NO^Θ_t be a Neural Operator with L layers,
and denote NO^Θ_t[u_0] the prediction of the solution of the PDE at time t.
We will assume that this Neural Operator follows the Markovian assumption, i.e. we can write:
NO^Θ_t_i+1[u_0] = r_out∘ℓ_L ∘ℓ_L-1∘ ... ∘ℓ_0 ∘ r_in[NO^Θ_t_i[u_0]]
Where r_in: ℝ^|𝒮|→ℝ^|𝒮| × h_0 and r_out: ℝ^|𝒮|× h_L+1→ℝ^|𝒮| are projector operators; ℓ_j: ℝ^|𝒮| × h_j→ℝ^|𝒮|× h_j+1 are parametrized layers; and h_j is the dimension of the j-th hidden layer.
Our goal is to define a network 𝒢^Θ_t
that builds upon NO^Θ_t and can incorporate memory. For this
we
take inspiration from the Mori-Zwanzig theory exposed in Section <ref>.
Comparing (<ref>) with (<ref>),
we identify ℓ_L ∘ℓ_L-1∘ ... ∘ℓ_0 with the Markov term
which models the spatial dynamics.
To introduce the memory term, we interleave an additional residual sequential layer ℳ that acts on hidden representations of the solution at previous timesteps. Concretely, the MemNO architecture can be written as:
𝒢_t_i+1^Θ[u_0] = ℛ_out∘ℒ_L ∘ ... ∘ℒ_k+1∘ℳ∘ℒ_k∘ ... ∘ℒ_0 ∘ℛ_in[𝒢_t_i^Θ[u_0] , 𝒢_t_i-1^Θ[u_0] ,..., u_0 ]
Where -1≤ k ≤ L is a chosen hyperparameter.[
k=L refers to inserting ℳ after all the spatial layers, and k=-1 refers to inserting ℳ as the first layer.
As we show in Appendix <ref>, our experiments are not very sensitive to the choice of k.
]
Now, the spatial ℒ_j layers are understood to be applied timestep-wise.
That is, if v^(j)(t') is the hidden representation at the j-th layer for a timestep t'≤ t_i, then ℒ_j+1[v^(j)(t_i), ..., v^(j)(t_0)] = [ℓ_j[v^(j)(t_i)], ..., ℓ_j[v^(j)(t_0)]],
and analogously for ℛ_in and ℛ_out.
Thus, the ℒ_j layers still follow the Markovian assumption. The memory is introduced through ℳ, which consists of a sequential layer m that is applied to the time history of the hidden representation of the k-th layer, that is ℳ:ℝ^i× |𝒮| × h_k ⟶ℝ^|𝒮| × h_k with (ℳ[v^(k)(t_i), ..., v^(k)(t_0)])_sh = m[v_sh^(k)(t_i), ..., v_sh^(k)(t_0)]. Note that m is the same for each of the |𝒮| × h_k elements of the hidden layer.
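To make this composition concrete, a schematic PyTorch implementation is sketched below. The constituent modules are placeholders (e.g., FFNO blocks for the spatial layers and an S4 layer for m), and the sketch realizes ℳ as a causal sequence-to-sequence map whose output at the last position is used, which is one standard way to implement the history-to-state reduction.

```python
import torch
import torch.nn as nn

class MemNO(nn.Module):
    """Sketch of the MemNO composition: spatial layers applied per timestep,
    with a causal memory layer inserted after the k-th spatial layer."""
    def __init__(self, lift, spatial_layers, memory, proj, k):
        super().__init__()
        self.lift, self.proj = lift, proj            # R_in, R_out (timestep-wise)
        self.layers = nn.ModuleList(spatial_layers)  # lifted l_0, ..., l_L
        self.memory, self.k = memory, k              # causal along the time axis

    def forward(self, history):
        # history: (batch, time, grid, channels), i.e., all states predicted so far
        v = self.lift(history)
        for j, layer in enumerate(self.layers):
            v = layer(v)               # Markovian: acts only on grid/channel dims
            if j == self.k:
                v = self.memory(v)     # mixes information across timesteps
        return self.proj(v)[:, -1]     # read off the prediction for the next step
```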
The main motivation of our MemNO framework is that it can be utilized with any
existing neural operator layer ℓ, and with any
(causal) sequential model ℳ.
Thus it provides a modular architecture design
which we hope can serve as a useful tool for practitioners.
§ THEORETICAL MOTIVATION FOR MEMORY: A SIMPLE EXAMPLE
In this section, we provide a simple, but natural example of a (linear) PDE, along with (in the nomenclature of Section <ref>) a natural projection operator given by a Fourier truncation measurement operator, such that the memory term in the generalized Langevin equation (GLE) can have an arbitrarily large impact on the quality of the calculated solution.
We will work with periodic functions over [0,2π] which have a convenient basis:
[Basis for 2π-periodic functions] A function f: ℝ→ℝ is 2π-periodic if f(x + 2π) = f(x). We can identify 2π-periodic functions with functions over the torus T:={e^iθ: θ∈ℝ}⊆ℂ by the map f̃(e^ix) = f(x). Note that {e^i x n}_n ∈ℤ is a basis for the set of 2π-periodic functions.
We will define the following measurement operator:
[Fourier truncation measurement]
The operator 𝒫_k: L^2(T;ℝ) → L^2(T;ℝ) acts on f ∈ L^2(T;ℝ), f(x) = ∑_n=-∞^∞ a_n e^i n x as 𝒫_k(f) = ∑_n=-k^k a_n e^i n x.
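On an equispaced grid of samples of a periodic function, 𝒫_k can be realized with the FFT; a small sketch:

```python
import numpy as np

def fourier_truncate(u, k):
    """Fourier truncation measurement P_k for samples on an equispaced periodic grid."""
    c = np.fft.fft(u)
    n = np.fft.fftfreq(len(u), d=1.0 / len(u))  # integer frequencies
    c[np.abs(n) > k] = 0.0
    return np.real(np.fft.ifft(c))
```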
We will also define for notational convenience the functions {𝐞_n}_n ∈ℤ, where 𝐞_n(x) := e^-inx + e^inx.
Now we consider the following operator to define a linear time-dependent PDE:
Let ℒ: L^2(T;ℝ) → L^2(T;ℝ) be defined as
ℒ u(x) = -Δ u(x) + B · (e^-ix + e^ix) u(x)
for B > 0. Then, we have:
∀ 1 ≤ n ∈ℕ, ℒ (𝐞_n) = n^2 𝐞_n + B (𝐞_n-1 + 𝐞_n+1) and ℒ (𝐞_0) = 2 B 𝐞_1
The crucial property of this operator is that it acts by “mixing” the n-th Fourier basis with the (n-1)-th and (n+1)-th: thus information is propagated to both the higher and lower-order part of the spectrum.
Given the above proposition, we can easily write down the evolution of a PDE with operator ℒ in the basis {𝐞_n}_n ∈ℤ:
Let ℒ be defined as in Proposition <ref>. Consider the PDE
∂/∂ t u(t,x) = ℒ u(t,x)
u(0,x) = ∑_n ∈ℕ_0 a_n^(0)𝐞_n
Let u(t,x) = ∑_n ∈ℕ_0 a_n^(t)𝐞_n.
Then, the coefficients a_n^(t) satisfy:
∀ 1 ≤ n ∈ℕ, ∂/∂ t a_n^(t) =n^2 a_n^(t) + B ( a_n-1^(t) + a_n+1^(t))
∂/∂ t a_0^(t) = 2B a_1^(t)
With this setup in mind, we will show that as B grows, the memory term in (<ref>) can have an arbitrarily large effect on the calculated solution:
Consider the Fourier truncation operator 𝒫_1 and let 𝒬 = I - 𝒫_1. Let u(0, x) have the form in Proposition <ref> for B > 0 sufficiently large, and let a^(0)_n > 0, ∀ n > 0. Consider the memoryless and memory-augmented PDEs:
∂ u_1/∂ t = 𝒫_1 ℒ u_1
∂ u_2/∂ t = 𝒫_1 ℒ u_2 + 𝒫_1ℒ∫_0^t exp(𝒬ℒ(t-s))𝒬ℒ u_2(s) ds
with u_1(0,x) = u_2(0,x) = 𝒫_1 u(0,x). Then, u_1 and u_2 satisfy:
∀ t > 0, ‖ u_1(t) - u_2(t) ‖_L_2 ≳ B t ‖ u_1(t) ‖_L_2
∀ t > 0, ‖ u_1(t) - u_2(t) ‖_L_2 ≳ B t exp(√(2) Bt)
Note that the two conclusions of the theorem mean that both the absolute difference, and the relative difference between the PDE including the memory term (<ref>) and not including the memory term (<ref>) can be arbitrarily large as B, t →∞.
The choice of ℒ is made for ease of calculation of the Markov and memory term. Conceptually, we expect the solution to (<ref>) will differ a lot from the solution to (<ref>) if the action of the operator ℒ tends to “mix” components in the span of 𝒫 and the span of 𝒬.
If we solve the equation ∂/∂ t u(t,x) = ℒ u(t,x) exactly, we can calculate that ‖ u(t) ‖_L_2 will be on the order of exp(2Bt). This can be seen by writing the evolution of the coefficients of u(t) in the basis {𝐞_n}, which looks like:
∂/∂ t[ a_0; a_1; … ] = 𝒪[ a_0; a_1; … ]
where 𝒪 is roughly a tridiagonal Toeplitz operator 𝒪 = [ ⋮ ⋮ ⋮ ⋮ ; … B n^2 B 0 …; … 0 B (n+1)^2 B …; ⋮ ⋮ ⋮ ⋮ ].
The largest eigenvalue of this operator can be shown to be at least of order 2B (equation (4) in <cit.>). The Markov term results in a solution of order exp(√(2)Bt) ((<ref>), (<ref>)), which is multiplicatively smaller by a factor of exp((2 - √(2))Bt). The result in this Theorem shows that the memory-based PDE (<ref>) results in a multiplicative “first order” correction, which can be seen by Taylor expanding exp(√(2)Bt) ≈ 1 + √(2)Bt + 1/2(√(2)B)^2t^2 + ….
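As a quick numerical illustration (ours, not part of the original analysis), one can verify the two spectral facts used in this discussion: the coupling-only part of 𝒪, a tridiagonal Toeplitz matrix with zero diagonal and B on the off-diagonals, has largest eigenvalue 2B cos(π/(N+1)), which tends to 2B, while the 3×3 coupling matrix appearing in the lemmas of the appendix has largest eigenvalue √(2) B:

import numpy as np

B, N = 1.0, 200
# Coupling-only part of O: zero diagonal, B on the off-diagonals.
coupling = B * (np.eye(N, k=1) + np.eye(N, k=-1))
print(np.linalg.eigvalsh(coupling).max())  # 2B*cos(pi/(N+1)) ~ 1.9998*B

# 3x3 coupling matrix from the lemma in the appendix.
M3 = B * np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
print(np.linalg.eigvalsh(M3).max(), np.sqrt(2) * B)  # both ~ 1.41421*B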
§ MEMORY HELPS WITH LOW-RESOLUTION DATA AND INPUT NOISE: A CASE STUDY
In this section we present a case study for several common PDEs of practical interest, showing that MemNO brings accuracy benefits when the data is supplied in low resolution.
Through our experiments we show
the difference in the performance between
a baseline “memoryless” architecture, which we choose to be Factorized Fourier Neural Operator (FFNO) <cit.>
and a memory-augmented architecture using S4 <cit.>, which we denote as the S4-Factorized Fourier Neural Operator (s4FFNO).
The architectural details for both architectures are elaborated upon in Appendix <ref>.
§.§ Setup: Training and evaluation procedure
To construct our datasets, we first produce discretized trajectories of a PDE for N_t timesteps, i.e. (u(t))_t=0^N_t, on a high-resolution discretized spatial grid 𝒮^HR⊂ℝ^d, i.e. u(t)∈ℝ^|𝒮^HR|.
We then produce datasets that consist of lower-resolution versions of the above trajectories, i.e. on a grid 𝒮^LR of lower resolution f. The low-resolution grid corresponds to the observation grid obtained through a physical measurement device, whereas the high-resolution grid is used in the numerical solver to obtain trajectories that follow real-world PDE dynamics.
For 1-dimensional datasets, the discretized trajectory on 𝒮^LR is obtained by cubic interpolation of the trajectory in the high resolution grid. In 2D, the discretized trajectory is obtained by downsampling.
We show results at different resolutions; in each case, both train and test trajectories are at the chosen resolution, and the loss function is also computed at that resolution.
Our training loss and evaluation metric is normalized Root Mean Squared Error (nRMSE):
nRMSE(u(t,x)|_𝒮^LR, û(t)) = ‖ u(t,x)|_𝒮^LR - û(t) ‖_2/‖ u(t,x)|_𝒮^LR‖_2 ,
where ‖·‖_2 is the Euclidean norm in ℝ^|𝒮^LR|. More details on training are given in Appendix <ref>.
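For concreteness, a minimal PyTorch implementation of this metric (ours; the tensor shapes are an assumption) could look as follows, with predictions and targets given on the low-resolution grid:

import torch

def nrmse(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # pred, target: (batch, *spatial_dims); norms are taken per sample.
    p = pred.flatten(start_dim=1)
    t = target.flatten(start_dim=1)
    err = torch.linalg.vector_norm(p - t, dim=1)
    ref = torch.linalg.vector_norm(t, dim=1)
    return (err / ref).mean()  # average nRMSE over the batch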
§.§ Kuramoto–Sivashinsky equation (1D): a study in low-resolution
The Kuramoto-Sivashinsky equation (KS) is a nonlinear PDE that is used as a modeling tool in fluid dynamics, chemical reaction dynamics, and ion interactions. Due to its chaotic behavior it can model instabilities in various physical systems. For a viscosity ν, it is written as u_t + uu_x + u_xx + ν u_xxxx = 0.
We generated datasets for KS at different viscosities and resolutions. The results are shown in Table <ref>. We can see s4FFNO outperforms FFNO across these viscosities and resolutions, having an nRMSE that can be more than six times smaller.
We also note that, since the memory layer is applied element-wise in the time dimension, it has very few parameters compared to the spatial layers, as seen in the difference of parameters between s4FFNO and FFNO in column 3.
The key factor for the improved performance of MemNO over memoryless Neural Operators is not the absolute resolution, but rather the resolution relative to the frequency spectrum of the solution. The lower the viscosity, the higher the frequencies that appear in the spectrum. This can be clearly seen in Figure <ref>: in the top row, as viscosities increase, the resolution at which there is a significant difference between s4FFNO and FFNO decreases. In the second row of the figure, we show a visualization of the frequency spectrum of a solution at those viscosities.
We note that even if the initial condition does not contain high frequencies, high frequencies will appear in the KS equation as the system evolves; indeed, this dataset was generated with initial conditions whose maximum Fourier mode was 8. This is in qualitative agreement with the theoretical motivation in Section <ref>, although the KS equation, being nonlinear, is substantially more complicated and hard to analyze fully theoretically.
We provide a similar study on 1D Burgers equation
in the Appendix <ref>.
§.§ Navier Stokes equation (2D): study in observation noise
The Navier Stokes equation describes the motion of a viscous fluid.
Like in <cit.>, we consider the incompressible form in the 2D unit torus, which is given by:
∂ w(x,t)/∂ t + u(x,t) ·∇ w(x,t) = νΔ w(x,t) + f(x), x ∈ (0, 1)^2, t ∈ (0,T]
∇· u(x,t) = 0, x ∈ (0, 1)^2, t ∈ [0,T]
w(x, 0) = w_0(x), x ∈ (0, 1)^2
where w = ∇× u is the vorticity, w_0 ∈ L^2((0, 1)^2; ℝ) is the initial vorticity, ν∈ℝ_+ is the viscosity coefficient, and f ∈ L^2((0, 1)^2; ℝ) is the forcing function. In general, the lower the viscosity, the more rapid the changes in the solution and the harder it is to solve numerically and with a Neural Operator.
We investigate the effect of memory when adding IID Gaussian noise to the inputs of our neural networks. This noise represents the observation noise arising from the intrinsic error of the measurement device. The noise ϵ∈ℝ^|𝒯|× |𝒮^LR| is sampled IID from a Gaussian distribution ϵ_ts∼𝒩(0, σ), and then added to training and test inputs. During training, for each trajectory a different noise with the same σ is sampled at each iteration of the optimization algorithm. The targets in training and testing represent our ground truth, and no noise is added to them.
In Figure <ref>, we show the results for ν=10^-3 when adding noise levels from σ=0.0 (no noise) to σ=2.048. s4FFNO-2D outperforms FFNO-2D across most noise levels, and the difference between the two is especially significant for noise levels beyond 0.128, where the loss of FFNO-2D is around 50% higher than that of s4FFNO-2D (note the logarithmic scale). For this viscosity, adding small levels of noise actually helps training, which was also observed in other settings in <cit.>. Figure <ref> shows the same experiment performed with ν=10^-5. Again, s4FFNO-2D outperforms FFNO-2D across most noise levels, with FFNO-2D losses similarly around 50% higher for noise levels above 0.032. At this viscosity, adding these levels of noise does not help performance.
§.§ Relationship with fraction of unobserved information
In this section, we provide a simple experiment to quantify the
effect of the fraction of unobserved information on the performance of memory based models.
Given a grid of resolution f, we define the Fourier truncation measurement 𝒫_f/2 as in Section <ref>,
which simply keeps the top f/2+1 modes and discards the other high frequency modes.
Assume u(t)∈ L^2(Ω; ℝ^V) is the solution of a 1-dimensional PDE at time t, and a_n^(t) for n∈ℤ is its Fourier Transform. We define the quantity:
ω_f = 1/N_t∑_i=1^N_t∑_|n|≥f/2 |a_n^(t_i)|^2/∑_n∈ℤ |a_n^(t_i)|^2
ω_f is an approximate indicator of the amount of information that is lost when the solution of the PDE is observed at resolution f across time.
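A sketch (ours) of how ω_f can be estimated in practice; it approximates the continuous Fourier modes with the DFT of the highest-resolution solution, as done in the paper, and treats the n=0 mode only approximately:

import numpy as np

def omega_f(u: np.ndarray, f: int) -> float:
    # u: trajectory of shape (N_t, N_x) at the highest available resolution.
    a = np.fft.rfft(u, axis=-1)              # modes n = 0 .. N_x // 2
    energy = np.abs(a) ** 2
    high = energy[:, f // 2:].sum(axis=-1)   # modes |n| >= f/2, lost at resolution f
    total = energy.sum(axis=-1)
    return float((high / total).mean())      # average over the N_t timesteps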
We show that there is a positive correlation between ω_f and the difference in nRMSE between FFNO and s4FFNO for the KS experiment in Figure <ref>, and also for the Burgers' experiments of Appendix <ref> in Figure <ref>.
This demonstrates the benefits of memory as a way to compensate for missing information in the observation.
Figure: Values of ω_f and the difference in nRMSE between FFNO and s4FFNO for different resolutions in the KS experiment of Section <ref> with ν=0.1. ω_f is averaged over all trajectories in the dataset; the continuous Fourier modes are approximated with the discrete Fourier modes of the solution at the highest resolution available (512 for KS).
§ CONCLUSION AND FUTURE WORK
We study the benefits of maintaining memory while modeling time-dependent PDE systems. Taking inspiration from the Mori-Zwanzig formulation, we show that when we only observe part of the PDE initial condition (for example, PDEs observed at low resolution or with input noise), the system is no longer Markovian, and the dynamics depend on a memory term.
To this end, we introduce
MemNO, an architecture that combines Fourier Neural Operator (FNO) and
the S4 architecture.
Through our experiments on different 1D and 2D PDEs, we show that the MemNO
architecture outperforms the memoryless baselines.
We present several avenues for future work. First, our experiments on observation noise are limited to the setting where the input noise is IID. Extending the experiments and observing the effects of memory in more real-world settings (for example, with non-IID noise or in the presence of aliasing) is fertile ground for future work, and is also necessary to ensure that the application of this method does not have unintended negative consequences when broadly applied in society.
Lastly, while we limit our
study of the effects of memory to FNO based architectures, performing
similar studies for different architectures like Transformer based neural operators <cit.>
and diffusion based operators <cit.>
is an interesting direction for future work.
§ ACKNOWLEDGEMENTS
RBR is supported by the “la Caixa” Foundation (ID 100010434). The fellowship code is LCF/BQ/EU22/11930090.
TM is supported in part by CMU Software Engineering Institute via Department of Defense under contract FA8702-15-D-0002.
AR is supported in part by
NSF awards IIS-2211907, CCF-2238523, and Amazon Research.
The authors also thank Cartesia AI for their generous provision of computational resources.
§ EXTENDED RELATED WORK
Neural Operators. The Fourier Neural Operator (FNO) is a Neural Operator that performs a transformation in the frequency space of the input <cit.>. Other models have proposed different inductive biases for Neural Operators, including physics based losses and constraints <cit.>,
using Deep Equilibrium Model (DEQ) <cit.>
to design specialized
architectures for steady-state (time-independent)
PDEs <cit.>, and using local message passing
Graph Neural Networks (GNNs) <cit.>
based encoders to model irregular geometries <cit.>.
Other methodologies to solve PDEs include methods like <cit.> that use U-Net <cit.>-type architectures
and works like <cit.>
that introduce different Transformer <cit.> based
neural solution operators for modeling both time-dependent and time-independent PDEs.
While most of these methodologies are designed for time-dependent PDEs, there is no clear consensus on how to model past states to predict future states, and most of these methods predict the PDE states over time in an auto-regressive way by conditioning the model on varying lengths of the past states <cit.>.
Foundation models. Lately, there have been community efforts towards creating large-scale foundation models for modeling multiple PDE families <cit.> and for weather prediction <cit.>.
We hope that our study is useful in informing the architectural
design of future models.
§ NETWORK ARCHITECTURES
Factorized Fourier Neural Operator (FFNO) (<cit.>): This model is a refinement over the original Fourier Neural Operator (<cit.>). Given a hidden dimension h and a spatial grid 𝒮, its layers ℓ: ℝ^|𝒮|× h→ℝ^|𝒮|× h are defined as:
ℓ(v) = v + Linear_h,h'∘σ∘Linear_h',h∘𝒦 [v]
where σ is the GeLU activation function <cit.> and h' is an expanded hidden dimension. 𝒦 is a kernel integral operator that performs a linear transformation in the frequency space. Denoting by FFT_α and IFFT_α the discrete Fast Fourier Transform and its inverse along dimension α <cit.>, it can be written as:
𝒦[v] = ∑_α∈{1,...,d}IFFT_α[R_α·FFT_α[v]]
for learnable matrices of weights R_α∈ℂ^h^2 × k_max. k_max is the maximum number of Fourier modes which are used in 𝒦. We use all Fourier modes by setting k_max = f/2.
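For concreteness, a simplified 1D PyTorch sketch of this layer follows (ours, not the authors' released code; the storage of the spectral weights as a (k_max, h, h) complex tensor and the initialization are illustrative assumptions):

import torch
import torch.nn as nn

class FFNOLayer1D(nn.Module):
    def __init__(self, h: int, h_prime: int, k_max: int):
        super().__init__()
        # Complex spectral weights, one (h, h) matrix per retained Fourier mode.
        self.R = nn.Parameter(torch.randn(k_max, h, h, dtype=torch.cfloat) / h)
        self.mlp = nn.Sequential(nn.Linear(h, h_prime), nn.GELU(),
                                 nn.Linear(h_prime, h))
        self.k_max = k_max

    def spectral_conv(self, v: torch.Tensor) -> torch.Tensor:
        # v: (batch, grid, h); linearly transform the lowest k_max Fourier modes.
        v_hat = torch.fft.rfft(v, dim=1)
        out_hat = torch.zeros_like(v_hat)
        k = min(self.k_max, v_hat.shape[1])
        out_hat[:, :k] = torch.einsum("bkh,khg->bkg", v_hat[:, :k], self.R[:k])
        return torch.fft.irfft(out_hat, n=v.shape[1], dim=1)

    def forward(self, v: torch.Tensor) -> torch.Tensor:
        return v + self.mlp(self.spectral_conv(v))  # residual connection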
In our experiments, the FFNO model consists of 4 FFNO layers. For experiments in 1D, the hidden dimensions are all 128 (h_j=128 for j=0,1,2,3) and the expanded hidden dimension h' of FFNO's MLP is 4 · 128. For experiments in 2D, the hidden dimensions are all 64 and the expanded hidden dimension is 4 · 64.
S4 - Factorized Fourier Neural Operator (s4FFNO): This model uses our MemNO framework. To discern the effect of memory, all layers except the memory layer will be the same as FFNO. For the memory layer, we choose an S4 layer <cit.> with a state dimension of 64 and a diagonal S4 (S4D) kernel.[The S4 repository has two available kernels, the diagonal S4 (S4D) and the Normal Plus Low Rank S4 (S4NPLR). In our experiments, we didn't find a significant difference between the two, and chose S4D for simplicity.]
For all our models, we use a simple spatial positional encoding E. In 1D, if the grid has f equispaced points in [0,L], then E ∈ℝ^f and the positional encoding is defined as E_i= i/L for 0≤ i ≤ f-1. In 2D, if we have a f× f points 2D equispaced grid in [0,L_x]×[0,L_y], the positional encoding is defined as E_ij = (i/L_x, j/L_y). In the input lifting operator ℛ_in, the input and the grid are stacked and a Linear layer from ℝ^2 →ℝ^h_0 is applied element-wise. For the decoder ℛ_out, we use another Linear layer (without positional encoding).
§ BURGERS' EQUATION (1D): A STUDY ON LOW-RESOLUTION
The Burgers' equation with viscosity ν is a nonlinear PDE used as a modeling tool in fluid mechanics, traffic flow, and shock waves analysis. It encapsulates both diffusion and advection processes, making it essential for studying wave propagation and other dynamic phenomena. It is known for exhibiting a rich variety of behaviors, including the formation of shock waves and the transition from laminar to turbulent flow. The viscous Burgers' equation is written as:
u_t + uu_x = ν u_xx
We used the publicly available dataset of the Burgers' equation in the PDEBench repository (<cit.>) with viscosity 10^-3, which is available at resolution 1024. We compare our models at resolutions 64, 128, 256, 512, and 1024 and show the results in Figure <ref>. As in the case of KS, s4FFNO outperforms FFNO, especially at low resolutions. Furthermore, we show the difference in nRMSE at each timestep in Figure <ref>. We observe that at the first timestep there is no difference between the two models; this makes sense because s4FFNO has the exact same architecture as FFNO for the first timestep. Yet as the initial condition is rolled out, there is more history of the trajectory, and the difference between FFNO and s4FFNO increases.
§.§ Relationship with fraction of unobserved information
As mentioned in Section <ref>, we measure the correlation of ω_f, defined in (<ref>), with the difference in nRMSE between FFNO and s4FFNO. The results can be seen in Figure <ref>.
§ DATA GENERATION
§.§ Kuramoto–Sivashinsky equation
The Kuramoto-Sivashinsky (KS) equation is given by:
u_t + uu_x + u_xx + ν u_xxxx = 0 (t,x) ∈ [0,T]×[0,L]
u(0,x) = u_0(x) x ∈ [0,L]
We use periodic boundary conditions. Our data generation method is very similar to that of <cit.>. We employ the method of lines <cit.>, where the spatial dimension is discretized and the PDE is transformed into a system of Ordinary Differential Equations (ODEs), one per point in the grid. In order to compute the spatial derivative of the solution at each point in the grid, a pseudospectral method is used, where derivatives are computed in frequency space and then converted to the original space through a Fast Fourier Transform; we use the implementation available in the package <cit.>. Similarly, the system of ODEs is solved numerically with an implicit Runge-Kutta method of the Radau IIA family of order 5 <cit.>. We refer to the code provided in <cit.> to reproduce this data generation; however, certain small modifications have to be made, like using a fixed Δ t per trajectory and increasing the number of modes in the initial condition.
As for the PDE parameters, we use L=64 and T=5. For the initial condition, we use a superposition of sinusoidal waves:
u_0(x) = ∑_i=0^20 A_i sin(2π k_i/L x + ϕ_i)
where for each trajectory, the A_i are sampled from a continuous uniform in [-0.5,0.5], the k_i are sampled from a discrete uniform in {1,2,...,8}, and the ϕ_i are sampled from a continuous uniform in [0,2π]. We discretize [0,T] into 26 equispaced points separated by Δ t=0.2 (as opposed to <cit.>, we do not use a random Δ t per trajectory), keep the 25 temporal points generated by the numerical solver, and discard the initial condition.
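A NumPy sketch (ours) of this initial-condition sampler:

import numpy as np

rng = np.random.default_rng(0)
L, num_points = 64.0, 512
x = np.linspace(0.0, L, num_points, endpoint=False)

A = rng.uniform(-0.5, 0.5, size=21)         # amplitudes A_i
k = rng.integers(1, 9, size=21)             # wave numbers k_i in {1, ..., 8}
phi = rng.uniform(0.0, 2 * np.pi, size=21)  # phases phi_i
u0 = (A[:, None] * np.sin(2 * np.pi * k[:, None] / L * x[None, :]
                          + phi[:, None])).sum(axis=0)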
In the experiments in section <ref>, for each of the four values of the viscosity (0.15, 0.125, 0.1, 0.075), we generated a dataset with spatial resolution 512 with 2048 training samples and 256 test samples. For the experiment in the sequential model ablation in section <ref>, we generated one dataset with viscosity 0.15 in resolution 256, 4096 training samples and 256 test samples.
§.§ Burgers' 1D equation
The 1D Burgers' equation can be written as:
u_t + uu_x = ν u_xx (t,x)∈ [0,T]×[0,L]
For the Burgers' equation, we take the publicly available Burgers' dataset of PDEBench <cit.> with viscosity 0.001. Out of the 10000 samples of the dataset, we use 10% for testing. For training, we found it sufficient to use 2048 samples. Additionally, for training and testing we only used the first 20 timesteps, since we observed that after the 20th timestep the diffusion term of the equation u_xx attenuates all high frequencies and the solution changes very slowly.
§.§ Navier Stokes 2D equation
The incompressible Navier Stokes equation in the 2D unit torus is given by:
∂ w(x,t)/∂ t + u(x,t) ·∇ w(x,t) = νΔ w(x,t) + f(x), x ∈ (0, 1)^2, t ∈ (0,T]
∇· u(x,t) = 0, x ∈ (0, 1)^2, t ∈ [0,T]
w(x, 0) = w_0(x), x ∈ (0, 1)^2
For the data generation, we follow the method of <cit.>, yet with different temporal and spatial grids. The initial conditions w_0 are sampled from a Gaussian random field 𝒩(0, 7^3/2(-Δ + 49I)^-2.5) with periodic boundary conditions. The forcing term is f(x_1,x_2) = 0.1 (sin(2π(x_1+x_2)) + cos(2π(x_1+x_2))). At each timestep, the velocity is obtained from the vorticity by solving a Poisson equation. Then, spatial derivatives are obtained, and the non-linear term is computed in the physical space and then dealiased. A Crank-Nicolson scheme is used to move forward in time, with a timestep of 10^-4. We use a 512×512 spatial grid, which is then downsampled to 64×64 for our experiments. For the viscosity ν=10^-3, we use a final time of 16 seconds and sample every 0.5 seconds. For the viscosity ν=10^-5, we use a final time of 3.2 seconds and sample every 0.1 seconds. For more details on the data generation algorithm, we refer to <cit.>.
§ TRAINING DETAILS
In this section, we provide a detailed description of the training hyperparameters used in the KS experiments of Section <ref>, the Burgers experiments of Section <ref>, and the Navier Stokes experiments of Section <ref>. We start with the training hyperparameters. All our experiments used a learning rate of 0.001. For the number of epochs, in KS and Burgers, the training was done over 200 epochs with cosine annealing learning rate scheduling <cit.>, whereas in Navier Stokes we trained for 300 epochs and halved the learning rate every 90. As for the number of samples, KS and Burgers were trained with 2048 samples and Navier Stokes with 1024 samples. Lastly, we observed that the batch size was a sensitive hyperparameter for both the memory and memoryless models (it seemed to affect both equally), so we ran a sweep in each experiment to select the best performing one. In the results shown in the paper, KS and Navier Stokes use a batch size of 32, and Burgers a batch size of 64.
Another relevant detail is the memory length in training, that is, the number of past states that were fed to the memory layer in the MemNO model. In the KS and Burgers experiments, the maximum memory length was 25 (the same as the number of timesteps of the dataset). This means that for the last timestep, the previous 24 states were fed into the memory layer. However, due to GPU memory limitations, in Navier Stokes the memory length was 16, half the number of timesteps of each trajectory in the dataset. In this case, the memory was reset after the 16th timestep: for the 16th timestep the 15 past states were fed to the memory model, yet for the 17th timestep only the 16th timestep was fed. Then, for the 18th timestep, the 17th and 16th were fed, and so on.
As in <cit.>, models were trained using teacher forcing. This means that for the prediction of the i-th timestep during training, the ground truth of the i-1 previous steps was fed to the model (as opposed to the model's own predictions for those steps).
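Schematically, one teacher-forced training step looks as follows (a sketch of ours; the interface of model, mapping a history (batch, i, grid) to a prediction (batch, grid), is an assumption):

import torch

def teacher_forced_loss(model, u: torch.Tensor, loss_fn) -> torch.Tensor:
    # u: ground-truth trajectory of shape (batch, N_t, grid).
    total = 0.0
    for i in range(1, u.shape[1]):
        pred = model(u[:, :i])       # condition on the true past states
        total = total + loss_fn(pred, u[:, i])
    return total / (u.shape[1] - 1)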
We run our experiments on A6000/A6000-Ada GPUs. The Navier Stokes 2D experiments required around 34GB of GPU memory for the batch size of 32 and took around 5 hours to finish, whereas the rest of experiments in 1D required a lower GPU memory (less than 10GB) and each run normally took less than an hour.
§ ABLATIONS ON THE MEMORY LAYER
In this section we present two ablations regarding the memory layer of MemNO.
§.§ Ablation: Choice of sequential model
In section <ref> we introduced MemNO as an architecture framework which allowed the introduction of memory through any choice of a sequential layer, which we chose as S4 in the previous experiments. In this section, we explore two other candidates for the sequential layers: a transformer and an LSTM. We introduce Transformer-FFNO (T-FFNO) and LSTM-FFNO as two models that are identical to s4FFNO except in the sequential layer, where a transformer and an LSTM are used respectively. The LSTM model only has one layer and the transformer layer includes causal masking and a positional encoding. The positional encoding for pos across the time dimension and i across the hidden dimension is given by:
PE(pos, 2i) = sin(pos/10000^2i/dim_model)
PE(pos, 2i + 1) = cos(pos/10000^2i/dim_model)
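A sketch (ours) of this encoding, applied along the time dimension of the hidden states fed to the transformer memory layer (dim_model is assumed even):

import torch

def positional_encoding(seq_len: int, dim_model: int) -> torch.Tensor:
    pos = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)   # (seq_len, 1)
    two_i = torch.arange(0, dim_model, 2, dtype=torch.float32)      # the 2i indices
    angles = pos / torch.pow(10000.0, two_i / dim_model)            # (seq_len, dim/2)
    pe = torch.zeros(seq_len, dim_model)
    pe[:, 0::2] = torch.sin(angles)
    pe[:, 1::2] = torch.cos(angles)
    return pe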
We show results for the KS dataset with viscosity ν=0.15 and different resolutions. This dataset was generated using a resolution of 256 and contains 4096 samples, twice as many as the KS datasets of <ref>, given that transformers are known to perform better in high-data regimes. The results are shown in Figure <ref>. T-FFNO performs significantly worse than s4FFNO across almost all resolutions, and even performs worse than FFNO. In contrast, LSTM-FFNO outperforms FFNO, which shows that MemNO can work with other sequential models apart from S4. The memory term in Equation <ref> is a convolution in time, which is equivalent to the S4 layer and very similar to a Recurrent Neural Network (RNN)-style layer, as shown in <cit.>. We believe that this inductive bias in the memory layer is the reason why both s4FFNO and LSTM-FFNO outperform FFNO. However, S4 was designed with a bias for continuous signals and has empirically proven better performance in these kinds of tasks <cit.>, which is in agreement with its increased performance over LSTMs in this experiment. Additionally, we observed that LSTMs were unstable to train on Navier Stokes 2D datasets.
Lastly, we make two remarks. Firstly, we believe that transformers performed worse due to overfitting, given that their train losses were normally comparable to or even smaller than the train losses of the rest of the models at each resolution. Modifications to the transformer model or to the training hyperparameters, as in other works <cit.>, might solve this issue. Secondly, recently there has been a surge of new sequential models such as Mamba <cit.>, RWKV <cit.>, xLSTM <cit.>, and LRU <cit.>. We leave it as future work to study which of these sequential models has the best overall performance, and hope that our study of the settings where the memory effect is relevant can help make accurate comparisons between them.
§.§ Ablation: memory layer configuration
In Section <ref> we introduced the memory layer in MemNO as a single layer to be interleaved with Neural Operator layers. In our experiments, we inserted it after the second layer of a four-layer Neural Operator. In this section, we explore the impact of different layer configurations, including the possibility of having several memory layers. We denote the configurations with a sequence of S and T letters: S means a Neural Operator layer (some sort of Spatial convolution), and T a memory layer (some sort of Time convolution). For example, SSTSS denotes the architecture of our experiments, where we have 2 Neural Operator layers, followed by a memory layer, followed by another 2 Neural Operator layers. Similarly, SSSST denotes 4 Neural Operator layers followed by a memory layer. In Table <ref>, we present the results for the KS dataset with ν=0.1 and a final time of 4 seconds for several models. We include the s4FFNO model we used in previous experiments in the first row (with configuration SSTSS), and the FFNO model in the last row. In the middle rows, we show different configurations of memory and Neural Operator layers. It can be observed that all models with at least one memory layer outperform FFNO. There are slight differences between configurations, yet we focused mainly on the comparison to the memoryless model. For that reason, we fixed the SSTSS configuration in our previous experiments, which was the most efficient (only one memory layer) and symmetric. We leave as further work determining whether there are settings where a given configuration pattern can be substantially better than the rest.
§ APPENDIX: QUANTIFYING THE EFFECT OF MEMORY
We prove (<ref>) first. Note that u_1(t), ∀ t ≥ 0 can be written as u_1(t) = a_0^(t)𝐞_0 + a^(t)_1 𝐞_1. Moreover, by Proposition <ref>, we have
∂ a^(t)_0/∂ t = 2B a^(t)_1
∂ a^(t)_1/∂ t = a_1^(t) + B a_0^(t)
In matrix form, these equations form a linear matrix ODE:
∂/∂ t[ a_0^(t); a_1^(t) ] = [ 0 2B; B 1 ][ a_0^(t); a_1^(t) ]
The solution of this ODE is given by [ a_0^(t); a_1^(t) ] = exp(t [ 0 2B; B 1 ]) [ a_0^(0); a_1^(0) ]. By the first statement of Lemma <ref> and the non-negativity of a_0^(0), a_1^(0), we get:
a_0^(t) ≤ 10 e^√(2)Bt(a_0^(0) + a_1^(0)),
a_1^(t) ≤ 10 e^√(2)Bt(a_0^(0) + a_1^(0))
We proceed to (<ref>). Note that for any s ≥ 0, we can write u_2(s) = â_0^(s)𝐞_0 + â_1^(s)𝐞_1 with â_0^(0) = a_0^(0) and â_1^(0) = a_1^(0). By Proposition <ref>, we have
𝒬ℒ u_2(s) = B â_1^(s)𝐞_2
Moreover, given a function v(x),
the action of the operator exp𝒬ℒ(t̃) on v is given by the solution w(t̃, x) to the PDE
∂/∂ t w(t,x) = 𝒬ℒ w(t,x)
w(0,x) = v(x)
If w(t,x) = ∑_n ∈ℕ_0 b_n^(t)𝐞_n and ∀ n ∈ℕ_0, b_n^(0)≥ 0, we are interested in solving the previous PDE with initial conditions b^(0)_2 = B â_1^(s) and b^(0)_n=0 ∀ n≠ 2.
We claim that the coefficients â_n^(t)≥ 0 ∀ t>0 and ∀ n∈{0,1}. For t=0 this is by definition, and we will prove it for all t by way of contradiction. Suppose the claim is not true, then there exists a t^*>0, and some n^* ∈{0,1} such that â_n^*^(t^*)=0, and â_n^(s)>0 ∀ n∈{0,1} and ∀ s < t^*.
But from continuity this implies that there exists 0<t'<t^* such that ∂/∂ tâ_n^*^(t')<0. However, it is easy to see that if â_n^(s)>0 ∀ s≤ t', then 𝒫_1 ℒ u_2(t')>0 and 𝒫_1ℒ∫_0^t'exp𝒬ℒ(t-s)𝒬ℒ u_2(s) ds > 0. Therefore, from (<ref>), ∂/∂ tâ_n^*^(t')>0, which is a contradiction.
This claim implies that b_n^(0)≥0 ∀ n ∈ℕ, and in turn it implies that b_n^(t)≥ 0 ∀ n∈ℕ, t>0.
Applying 𝒬ℒ results in the following inequalities for the coefficients b_1^(t), b_2^(t), b_3^(t):
∂/∂ t b^(t)_1 ≥ b^(t)_1 + B b^(t)_2 ≥ B b_2^(t)
∂/∂ t b^(t)_2 ≥ B b^(t)_1 + 4 b^(t)_2 + B b^(t)_3 ≥ B b^(t)_1 + B b^(t)_3
∂/∂ t b^(t)_3 ≥ B b^(t)_2 + 9 b^(t)_3 ≥ B b^(t)_2
Thus, we can write a linear matrix ODE for the vector (b_1^(t), b_2^(t), b_3^(t)):
∂/∂ t[ b_1^(t); b_2^(t); b_3^(t) ]≥[ 0 B 0; B 0 B; 0 B 0 ][ b_1^(t); b_2^(t); b_3^(t) ]
Therefore, using Lemma <ref>, for sufficiently large B we have b_2^(t-s)≥ B e^√(2)B(t-s)/10â_1^(s).
Hence, if we write ∫_0^t exp𝒬ℒ(t-s)𝒬ℒ u_2(s) ds in the basis {𝐞_n}_n ∈ℕ_0, the coefficient for 𝐞_2 will be lower bounded by
∫_0^t 1/10 B e^√(2) B (t-s)â^(s)_1 ds
Applying the second statement of Lemma <ref> and using the non-negativity of a^(0)_0 and a^(0)_1, we have â^(s)_1 ≥1/10 e^√(2)B s(a_0^(0) + a_1^(0)). Hence, the coefficient for 𝐞_2 is lower bounded by
∫_0^t1/10 B e^√(2) B (t-s)1/10 e^√(2)B s(a_0^(0) + a_1^(0)) ds ≥B t/100 e^√(2)Bt (a_0^(0) + a_1^(0))
We finally need to consider what happens after applying the outermost operator 𝒫_1 ℒ. Because of Proposition <ref> again, applying ℒ makes the coefficient in front of 𝐞_1 at least B^2 t/100 e^√(2)Bt(a_0^(0) + a_1^(0)). Finally, applying 𝒫_1 preserves the coefficient in front of 𝐞_1.
Hence, equation (<ref>) results in the following evolution inequalities:
∂â^(t)_0/∂ t ≥ 2B â^(t)_1
∂â^(t)_1/∂ t ≥â_1^(t) + B â_0^(t) + B^2 t/100 e^√(2)Bt(a_0^(0) + a_1^(0))
Using the second statement of Lemma <ref> again, we have that â_0^(t)≥1/10 e^√(2)B t(a_0^(0) + a_1^(0)). Thus, dropping the (positive) term â_1^(t) in equation <ref>, we have:
∂â^(t)_1/∂ t ≥(1/10 + B t/100)B e^√(2)Bt(a_0^(0) + a_1^(0))
Integrating this equation yields:
â^(t)_1 ≥ a^(0)_1 + 1/200 e^√(2) B t( √(2) B t + 10 √(2) - 1 ) (a_0^(0) + a_1^(0))
Thus, we have â_1^(t)≳ B t e^√(2)Bt(a_0^(0) + a_1^(0)). Together with equation <ref>, the claim of the Theorem follows.
There exists B>0 sufficiently large such that for all t>0 the matrix [ 0 2Bt; Bt t ] satisfies:
∀ i, j ∈{1,2}, exp([ 0 2Bt; Bt t ])_i,j ≤ 10 exp(√(2) Bt)
∀ i, j ∈{1,2}, exp([ 0 2Bt; Bt t ])_i,j ≥1/10exp(√(2) Bt)
By direct calculation, we have:
exp([ 0 2Bt; Bt t ])
= 1/2 √(8B^2 + 1)[ √(8 B^2 + 1) g(B,t) - h(B,t) 4B h(B,t); 2B h(B,t) √(8 B^2 + 1) g(B,t) + h(B,t) ]
where:
g(B,t) = e^1/2(√(8 B^2 + 1)+1)t + e^-1/2(√(8 B^2 + 1) - 1)t
h(B,t) = e^1/2(√(8 B^2 + 1)+1)t - e^-1/2(√(8 B^2 + 1) - 1)t
Thus, the statement follows.
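The closed form above can also be checked numerically (a verification sketch of ours):

import numpy as np
from scipy.linalg import expm

for B, t in [(5.0, 0.3), (20.0, 1.0)]:
    s = np.sqrt(8 * B**2 + 1)
    g = np.exp(0.5 * (s + 1) * t) + np.exp(-0.5 * (s - 1) * t)
    h = np.exp(0.5 * (s + 1) * t) - np.exp(-0.5 * (s - 1) * t)
    closed = np.array([[s * g - h, 4 * B * h],
                       [2 * B * h, s * g + h]]) / (2 * s)
    print(np.allclose(expm(np.array([[0.0, 2 * B * t], [B * t, t]])), closed))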
For all B > 0, the matrix [ 0 B 0; B 0 B; 0 B 0 ] satisfies:
∀ i, j ∈{1,2,3}, exp([ 0 B 0; B 0 B; 0 B 0 ])_i,j ≥1/10exp(√(2) B)
By direct calculation:
exp([ 0 B 0; B 0 B; 0 B 0 ]) =
1/4 e^-√(2) B[ 2 e^√(2) B + e^2 √(2) B + 1 √(2) e^2 √(2) B - √(2) -2 e^√(2) B + e^2 √(2) B + 1; √(2) e^2 √(2) B - √(2) 2 (e^2 √(2) B + 1) √(2) e^2 √(2) B - √(2); -2 e^√(2) B + e^2 √(2) B + 1 √(2) e^2 √(2) B - √(2) 2 e^√(2) B + e^2 √(2) B + 1 ]
Thus, the statement follows.
LLM Detectors Still Fall Short of Real World: Case of LLM-Generated Short News-Like Posts
Henrique Da Silva Gameiro, Andrei Kucharavy, Ljiljana Dolamic
September 5, 2024 (arXiv:2409.03291)
§ ABSTRACT
With the emergence of widely available powerful Large Language Models (LLMs), disinformation generated by LLMs has become a major concern. Historically, LLM detectors have been touted as a solution, but their effectiveness in the real world remains to be proven. In this paper, we focus on an important setting in information operations: short news-like posts generated by moderately sophisticated attackers.
We demonstrate that existing LLM detectors, whether zero-shot or purpose-trained, are not ready for real-world use in that setting. All tested zero-shot detectors perform inconsistently with prior benchmarks and are highly vulnerable to sampling temperature increase, a trivial attack absent from recent benchmarks. A purpose-trained detector generalizing across LLMs and unseen attacks can be developed, but it fails to generalize to new human-written texts.
We argue that the former indicates that domain-specific benchmarking is needed, while the latter suggests a trade-off between adversarial evasion resilience and overfitting to the reference human text; both need to be evaluated in benchmarks and currently are not. We believe this calls for a re-consideration of current LLM detector benchmarking approaches, and we provide a dynamically extensible benchmark to allow it (<https://github.com/Reliable-Information-Lab-HEVS/dynamic_llm_detector_benchmark>).
§ INTRODUCTION
The misuse of large language models (LLMs) for propaganda, inciting extremism, or spreading disinformation and misinformation has been a major concern from the early days of LLMs <cit.>. While this concern has led in the past LLM developers to withhold their most powerful models <cit.>, powerful LLMs and multimodal generative models have now been published despite such concerns persisting <cit.>. With the release of powerful open-weight LLMs that can be deployed on commodity hardware such as LLaMA <cit.>, Phi <cit.> or Zephyr <cit.>, LLM misuse can no longer be mitigated through model adjustment or inputs/outputs monitoring.
§.§ LLMs in Information Operations
A common goal of information operations (intentional and coordinated efforts to modify the public perception of reality with ulterior motives) is to modify the perception of the current situation <cit.>. A well-documented and effective approach to achieve that leverages social networks by seeding them with news-like narratives. While the original seeding might have a minimal impact, some narratives will trigger a strong partisan response and will be re-shared by key genuine users, who will customize the narrative for their audience and engage in debate to further it <cit.>. While in some cases such misinformation is picked up by traditional news outlets, adding to its credibility, quantity is a quality of its own, and continuous repetition of the same narratives through different channels leads to their acceptance as truths <cit.>. Given the competition for attention on social media, both seeded narratives and legitimate news tend to be shared as short-form posts of 260-520 characters with links and images. Such a format is common to Twitter (now X), Bluesky, Meta Threads, and the largest Mastodon instances.
In the past, this technique has been shown to be an excellent conductor of counterfactual claims and a poor one for rebuttals and factchecks <cit.>. However, large-scale information operations on them have been, until recently, easily detectable due to exact text reuse, inconsistencies in online persona, and lack of non-trivial interaction with other users. Such failures are understandable, given the scope of the activities and the cost of human operators. However, LLMs drastically alter the cost-quality tradeoff <cit.>. Combined with the reports that LLMs are better than humans at both personalized political persuasion and disinformation concealment <cit.>, LLM-augmented information operations promise to be radically more effective and more difficult to detect and counter.
§.§ LLM Detectability
Unfortunately, humans do not distinguish well LLM-generated texts from human-written ones <cit.>. Early in the generative LLM development <cit.> proposed that accurate LLM detectors could mitigate that issue. Unfortunately, follow-up research rapidly discovered that LLM detectors failed if generation parameters changed <cit.>, output was paraphrased <cit.>, or barely more complex prompts were used <cit.>.
The issue gained in salience with the release of ChatGPT, leading to several large-scale LLM detector benchmarks <cit.>, with most recently <cit.>. Unfortunately, with the disparate performance of detectors for different types of texts, most benchmarks suffer from the same pitfalls as many other ML-based security solutions <cit.>, and there is still no consensus as to whether LLM detectors are ready for real-world applications <cit.>. Here, we try to address both the methodological issues and decide on the detectors' real-world usefulness in our setting.
§.§ Setting and Attacker Capabilities
An attacker with arbitrary capabilities is neither realistic nor can it realistically be defended against. Consistently with <ref>, we focus on moderately sophisticated attackers. We assume they are capable of deploying on-premises SotA LLMs in the 1-10B parameter range and are familiar with adversarial evasion strategies available at inference, such as generation parameter modification <cit.>, paraphrasing <cit.>, or alternative prompting strategies <cit.>. We assume that the attacker cannot evasively fine-tune the generative models.
We assume that the attacker is seeking to generate news-like content of approximately 500 characters, as a completion of a headline or opening sentence, that needs to be detected by a social media operator in an environment where LLM-generated news-like content is not predominant and true positive labels are not available, requiring a low target false positive rate (FPR).
§.§ Contributions
* We consolidated best practices for LLM detector evaluation and developed a dynamic benchmark integrating them, allowing simple extension to new domains
* We show that SotA zero-shot detectors are vulnerable to trivial adversarial evasion in ways not reflected by prior benchmarks
* We comprehensively benchmarked custom detector training strategies and discovered a detector architecture achieving a robust generalization across unseen LLMs and attacks
* We demonstrate that such adversarially robust custom detectors overfit their reference human data, indicating a tradeoff that needs to be tested
* Based on these results, we conclude that LLM detectors are currently not ready for real-world usage to counter LLM-generated disinformation
§ BACKGROUND AND RELATED WORK
§.§ Trained Detectors
LLMs have led to a paradigm shift, from purpose-training ML models to fine-tuning base models. Rather than re-training a new model for each application from scratch, a large model is first pretrained on a large quantity of data and is then fine-tuned to adapt it to the downstream task. This paradigm is particularly well-suited for classification tasks, as demonstrated by <cit.>. As we can formulate detection as a classification task, multiple papers take this approach, historically pioneered by <cit.>. It is also currently considered a SotA approach for training custom LLM detectors <cit.>
BERT-based detectors
BERT (see <cit.>) and its subsequent improved versions, RoBERTa and Electra (see <cit.> and <cit.>), have gained widespread adoption for classification tasks. Their bi-directional encoding architecture offers an advantage over the autoregressive decoder-only architecture, given the ability to account for both the preceding and the following context rather than just the preceding, as well as obligatory anchoring to the provided text. Moreover, their relatively small size, such as the 355M parameters of RoBERTa-Large compared to the 175B parameters of GPT-3, makes them highly practical.
§.§ Zero-Shot Detectors
Trained detectors show promising results in multiple works such as in <cit.>. However, as highlighted by <cit.>, these trained detectors fail to generalize to text distribution shifts. Zero-shot detectors such as in <cit.> and <cit.> can be seen as an appealing alternative to mitigate this issue. Zero-shot detectors, i.e., detectors not relying on training, are more suitable when we try to detect text not coming from a specific distribution. They are also easier to use in practice since we do not need to train the detector on the domains, although at the price of generally being more resource-intensive due to using larger models.
Fast-DetectGPT is a SotA open-source detector, reported to perform well across domains and common attack strategies <cit.>, scoring within the overall top for FPRs < 5% in adversarial third-party benchmarks <cit.>.
GPTZero is one of the most widely used commercial LLM detectors <cit.>, consistently included into third-party benchmarks.
RoBERTa-Base-OpenAI is a BERT-based detector fine-tuned by OpenAI to detect GPT2 output <cit.>. While it is not a zero-shot detector per se, it is still commonly benchmarked and used as such, with some benchmarks reporting good performance <cit.>. At the time of writing (May 2024), it had been downloaded over 159,000 times in the previous three months, according to its HuggingFace repository.
§.§ Benchmarking Detectors
Adversarial evasion setting is inherent to LLM detectors; its performance cannot be detached from performance against potential attacks. Multiple works have attempted to tackle this issue, generally focusing on specific attacks. For instance, <cit.> and <cit.> introduced a paraphrasing attack they demonstrated to be efficient, while <cit.> investigated evasion through alternative decoding strategies.
Generative generalization benchmarking, be it across LLMs <cit.> or domains <cit.> is equally essential. LLM generation can be useful to an attacker in multiple contexts, and with a proliferation of widely available powerful LLMs, an attacker cannot be assumed to restrict themselves to a single generative model.
Human-text generalization evaluation is, unfortunately, all but absent from LLM detector benchmarks, despite reported issues in real-world usage <cit.>. Existing benchmarks report, at best, detection rates for additional domains, without reporting FPRs.
Unfortunately, systematic studies remain few, and the recent benchmarks attempting them, such as <cit.> suffer from issues, notably failing to include common attacks, such as temperature increase <cit.>, and evaluation of FPRs in unseen domains. As static benchmarks optimized for detector evaluation, they are difficult to add new attacks to or transfer to new domains, making them unsuitable for our application.
§ METHODOLOGY
All results can be replicated with code provided in the experimental repository: <https://github.com/Reliable-Information-Lab-HEVS/text_llm_detector>. English is the only language considered here, with all datasets, prompts, and fine-tuning data in English.
§.§ Detector Models
To evaluate a domain-specific pretrained detector, generally reported to be one of the best detection methods <cit.>, we evaluated three base pre-trained LLMs: RoBERTa-Large, Distil-RoBERTa, and Electra-Large[All download links for models, datasets, and code are available in the appendix table <ref>, in the appendix <ref>]. RoBERTa and Electra have been shown to achieve improved results over BERT on fine-tuning to classification tasks in <cit.> and <cit.> thanks to a different pre-training method <cit.>. We added Distil-RoBERTa to evaluate the performance of a smaller model (82.8M parameters), more usable at scale.
To test zero-shot detectors, we evaluated the three detectors mentioned previously: RoBERTa-Base-OpenAI, due to its usage, and Fast-DetectGPT and GPTZero, generally considered as representatitve of SotA open-source and commercial detectors, respectively <cit.>.
§.§ Generating the Datasets
We chose six different generator LLMs. Three non-chat foundational models, Phi-2, Gemma-2B, and Mistral-0.1, are the only ones used to train the custom detectors. Three chat models, Gemma-chat, Zephyr, and LLama-3-8B-Instruct, are only used for testing and are representative of commonly used SotA open-weight LLMs.
To create neural fake news, we leveraged the CNN Dailymail news dataset, representative of American English news. To obtain the LLM-generated articles, we take a news article from the dataset, clean the beginning of the article to remove header content, and then pick the first 10 words of the article as a prefix. We use this prefix as the prompt for the non-chat models to generate the rest of the article, and prefix it with a supporting prompt for chat models (see appendix <ref>). We let the model generate up to 200 tokens but cut the generation to keep only the first 500 characters. We also cut the original articles to 500 characters to obtain the reference human samples. Using this procedure, human and generated samples are indistinguishable in length (see examples in appendix <ref>).
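The following sketch (ours, not the exact experimental code; the HuggingFace model identifier and sampling defaults are illustrative assumptions) outlines this procedure for a non-chat generator:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-v0.1"  # assumed id of a non-chat generator
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

def make_pair(article: str):
    prefix = " ".join(article.split()[:10])   # first 10 words as the prompt
    inputs = tok(prefix, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=200, do_sample=True)
    fake = tok.decode(out[0], skip_special_tokens=True)[:500]
    if len(fake) < 500:
        return None                            # discard pair to keep the dataset balanced
    return article[:500], fake                 # (human, generated) pair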
By using the method described above, we create 6 datasets with around 20K samples each (see appendix <ref> for the precise sizes) with a train, eval (10% of the whole size), and test split (10% of the whole size). In addition to the above procedure, we pair the AI-generated samples with their corresponding original news articles. We do that to prevent the fake and corresponding true samples from being in different training batches or data splits. We also filter out the generated news articles under 500 characters and discard the corresponding true sample to keep the dataset balanced. Finally, we also create a "round robin" dataset, which consists of a mixture of data generated by the 3 non-chat models, expected to train models more suitable for adversarial setting <cit.>.
§.§ Training the Detectors
We finetuned the pretrained models described in subsection <ref> on each non-chat model training dataset and the round-robin dataset. The detectors are trained on the full datasets (1 epoch) with an evaluation after every 200 samples seen. We save only the model that obtained the best loss on the evaluation set to avoid overfitting. We list the hyperparameters for each training procedure in appendix <ref>.
While we tested different ways of finetuning the detectors on the datasets, we only retained full finetuning for the results section, consistently with the detector training SotA recommendations. We also provide some results when only finetuning a classification head and using the adapters PEFT method in appendix <ref>. We tracked the model performance on the pretraining task to confirm we were not overfitting the base models to our distribution (cf appendix <ref>).
§.§ Testing the Detectors
We use the same metric across all the results to test the detectors: TPR (True-Positive rate). To obtain the detection prediction, we use a threshold on the output of the detector to target an FPR (False-Positive rate) of at most 5%. We find these thresholds by finding the TPR/FPR at different thresholds on the evaluation set for each dataset. We repeat this threshold-finding procedure for each detector (trained and zero-shot). This setting mimics a realistic scenario where a defender uses a detector to target a low level of false positives without access to true positive labels. We do not recompute a detection threshold on each attack since we consider these attacks unknown to the defender.
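A minimal sketch (ours) of this calibration, assuming detector scores are higher for texts deemed LLM-generated:

import numpy as np

def calibrate_threshold(human_eval_scores: np.ndarray,
                        target_fpr: float = 0.05) -> float:
    # FPR = P(score > threshold | human), so take the (1 - FPR) quantile
    # of the detector's scores on human-written evaluation texts.
    return float(np.quantile(human_eval_scores, 1.0 - target_fpr))

def tpr_at_threshold(ai_test_scores: np.ndarray, threshold: float) -> float:
    return float((ai_test_scores > threshold).mean())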
§.§.§ Testing Fast Detect GPT
To test Fast-DetectGPT <cit.>, we use the script from their GitHub repository, slightly adapted to run similarly to the other detectors we tested. We use GPT-J-6B as the reference model, since it provides more accurate results according to their work.
§.§.§ Testing GPTZero
To test GPTZero <cit.>, we used an academic partner API access, courtesy of GPTzero, version . In order to maintain consistency with the methodology presented here, we forced the FPR of GPTZero to 5%, given that by default, it is heavily biased to minimize FPR (<0.3% in our setting).
§.§ Attacks against the Detectors
§.§.§ Evasion Attacks
In experiments targeting detector evasion, we start with the test set created in the previous experiment when generating fake articles. We regenerate the fake news articles using either a different generation parameter or a different prompt. That way, we obtain a new dataset of fake articles with the same true original articles as the ones in the previous experiment, but where the fake articles are generated differently from the one used to produce the training and test data of the previous experiment.
Changing the Generation Parameters
The generation parameters we modify are the temperature and the repetition penalty. The first attack, called "high temperature," sets the temperature to 1.2 (note that OpenAI's API allows a temperature up to 2). The second, which we call "repetition penalty," consists in setting the repetition penalty to 1.2 (interestingly, Hugging Chat, https://github.com/huggingface/chat-ui, uses a repetition penalty of 1.2). Both parameters heavily impact the diversity[By diversity, we mean the entropy of the logits for the next token prediction. Higher entropy means that the range of the next word that can be generated is wider, resulting in higher text diversity.] of the produced text, making it more difficult for detection methods that rely on the lack of diversity of AI-generated texts.
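Concretely, both attacks amount to a one-argument change at generation time, as in this sketch (ours), reusing the model, tok, and inputs objects from the dataset-generation sketch above:

high_temp_out = model.generate(**inputs, max_new_tokens=200,
                               do_sample=True, temperature=1.2)
rep_pen_out = model.generate(**inputs, max_new_tokens=200,
                             do_sample=True, repetition_penalty=1.2)
fake_high_temp = tok.decode(high_temp_out[0], skip_special_tokens=True)[:500]
fake_rep_pen = tok.decode(rep_pen_out[0], skip_special_tokens=True)[:500]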
Prompting Attacks
We consider a few prompts to generate the fake news articles (the complete list can be found in appendix <ref>). The prompts we choose should cover a wide set of prompts an attacker may use. The idea for our experiment is to consider only basic attacks, i.e., attacks that do not require training an additional LLM (low resource) or special LLM expertise (low skill). While we are aware that more complex attacks exist, such as those described in the background (see <cit.> for example, which uses multiple LLMs to generate text), our intent is not to test exhaustively all possible attacks (which is not feasible), but to limit ourselves to the setting of our threat model. Our results show that we do not need to dig too far to find effective attacks, which is even more concerning.
Paraphrasing Attacks
Paraphrasing a generated text with an auxiliary LLM has been reported to achieve good evasion against a variety of detectors <cit.>. Given that it preserves semantic meaning while modifying the probabilities of token selection, it is likely to erase non-semantic traces of LLM generation and, as such, to be highly performant. Here, we implement a simpler version of the attack than DIPPER <cit.>, prompting an LLM to reformulate the text (cf. <ref>).
§.§.§ Testing the Detectors on Human Text from a Different Distribution
Finally, we also test the detectors on 10000 samples from the Xsum dataset. Xsum comprises articles from BBC News, a different news outlet from the CNN-based one we used until now; XSum notably uses British English. This experiment tests the human-text generalization of detectors, so we did not create fake news articles for it.
§ RESULTS AND DISCUSSION
§.§ Trained Detectors
Here, we present TPRs at FPR fixed to 5% for detectors where we finetune all parameters. Results obtained with other finetuning methods can be found in the appendix <ref>.
Performance on training LLM
We present in figure <ref> the TPR obtained when testing the trained detectors on the same dataset they have been trained on.
In our experiments, Electra (Electra-Large) consistently outperforms or matches the other detectors. We can also notice that the difference is small between the distilled RoBERTa version (82.8M parameters) and its large version (355M parameters). For the next results, we will show only the results using Electra as the detector, but the results for other detectors are available in appendix <ref>. We also note that TPR at fixed FPR provides a more fine-grained performance evaluation compared to the all-around stellar ROC-AUC values in Appendix Figure <ref>, and that adapter and classification-head finetuning underperform compared to full finetuning (cf. appendix <ref>).
Generalization across training LLMs
We show on the heatmap in figure <ref> the TPR of the trained detectors on the test of the different datasets generated by the LLMs, evaluating finetuned detector generalization across generating LLMs.
Surprisingly, most detectors perform almost equally, if not better in some cases, when detecting fake news articles generated by a different LLM than the one used to train them. This suggests a good generalization of the patterns leveraged by the finetuned detectors and a sufficient similarity in LLMs output in our setting.
Generalization to unseen chat LLMs
In this experiment, we repeat the same testing as above, but on 3 datasets generated by chat models that were not used to generate any training datasets (see appendix <ref> for the prompt). The results are in the same heatmap as for the previous experiment: figure <ref>.
We find that the TPRs obtained here align with the finding above that the trained detectors surprisingly generalize enough to achieve similar TPR across data generated by instruction-tuned LLMs. We find that news articles generated by Zephyr are slightly harder to detect, whereas the ones generated by Llama 3-Instruct-8B are the easiest. This highlights that while we find encouraging generalization results across LLMs, the performance across LLMs remains disparate, consistently with prior findings <cit.>.
§.§ Cross-LLM Generalization
We present in figure <ref> the results of the same experiment as above, but including zero-shot detectors, which we compare to one of the trained detectors. We observe that Fast-DetectGPT and GPTZero closely match but still underperform compared to Electra-Large trained to detect Mistral-generated texts (Electra_Mistral). We also observe that RoBERTa-OpenAI significantly underperforms, suggesting that it is now outdated and strongly arguing for abandoning it as a zero-shot general LLM detector.
Again, we find Zephyr LLM particularly challenging for all detectors and, going forward, will focus on it to best differentiate model performance. Curiously, we also observe that GPTZero drastically under-performs on non-chat models, consistently with previous benchmarks <cit.>, suggesting an overfitting to chat models.
§.§ Evasion Attack Resilience
Here, we focus on the best-performing trained detector (Electra_Mistral) and the most challenging generator, Zephyr. We also include a detector trained on the mixture of data from all generators in a round-robin fashion (Electra_RR), which we expect to generalize well, as well as the previously mentioned zero-shot detectors. Following the method described in the Methodology (<ref>), we generate evasion attack datasets and present TPR (true-positive rate) at a fixed FPR of 5% in Table <ref>, as described in <ref>.
Changing the Generation Parameters
We observe that both attacks on generation parameters are highly effective, with a modest temperature increase effectively defeating all zero-shot detectors. This is highly concerning, given that this is a trivial attack that had been described for over 5 years by the time of publication of this article <cit.>.
We hypothesize that by increasing the temperature and repetition penalty, we increase the LLM output diversity, which defeats LLM detectors that are based on text perplexity and expect generated texts to be less diverse than human-written ones. We hypothesize that the poor performance of detectors against the temperature attack is due to its absence from any recent LLM detector benchmarks.
Prompting the LLM
For the prompting attacks, we discover a high variation in terms of both the average effectiveness of the attacks and model-specific attack effectiveness, with none achieving universal effectiveness. The "News prompt" attack seems to be largely ineffective, while the "Example prompt" is only effective against GPTZero, and the "Tweet Prompt" attack is effective against all detectors but GPTZero, whose performance instead drastically increases.
We hypothesize that this variation is a result of a combination of factors, including the selection of reference human/machine datasets by detector developers and the domain-specific proficiency of generators. We believe this highlights the critical need for diverse testing of detectors, both domain-specific and threat-specific, and for exploring a wide range of prompting strategies.
Paraphrasing
We observed that the paraphrasing attack was only effective against Electra_Mistral and Fast-DetectGPT, with the latter losing 40% TPR. This is somewhat unexpected, given prior reports of the effectiveness of this attack, although potentially due to using a prompted LLM reformulation rather than the DIPPER attack <cit.>.
Prior Benchmark comparison
A direct comparison to prior benchmarks is non-trivial. Given the drastic difference in performance of different prompting strategies, minor differences in benchmark implementation matter. In our case, the only benchmark providing sufficiently detailed results is RAID <cit.>, thanks to its leaderboard. Specifically, we would expect the results for their "News" domain (BBC articles), generated by the Mistral-chat model, to be close to our setting of news-like content (CNN articles trimmed to 500 characters), generated by Mistral-based Zephyr.
Unfortunately, this is not the case. Even in the absence of attacks, RAID suggests RoBERTa-Base-OpenAI-GPT2 matches FastDetectGPT (0.99 TPR for both), whereas we observe a drastically lower and heterogeneous performance (0.57 and 0.86 TPR respectively). A repetition penalty attack, sharing the same parameters for both our and RAID benchmarks leads to a similar difference (0.5 and 0.4 TPR in RAID, respectively, vs. 0.26 and 0.24 TPR for us). Likewise, for the paraphrasing attacks, RAID predicts Fast-DetectGPT to remain highly effective (0.95 TPR), while we observe its performance halved (0.45 TPR), which is even more surprising given that we used a weaker paraphrasing attack.
This means that domain-specific and threat model-specific benchmarking is essential to judge the real-world performance of LLM detectors. In our opinion, this argues against the utility of large-scale benchmarks with once-generated static data and in favor of dynamic, application-specific benchmarks.
§.§ Performance on Unseen Human Texts
In the previous subsection, we showed that while a moderately sophisticated attacker could defeat all detectors, one of our custom-trained detectors - Electra_RR - exhibited outstanding resilience to evasion attacks, frequently presenting the best TPR, and if not, the second best, with a usable and consistent 0.84 TPR, arguing in favor of the utility of round-robin training, previously seen in GANs <cit.>. Such resilience is surprising, given that it was not trained against adversarial evasion and did not generalize particularly well across generative LLMs, and could lead us to suppose that Electra_RR would perform well against new evasion attacks, hence claiming a new SotA.
However, such a claim would be misplaced. In real-world deployments, LLM detectors must not only generalize across unseen generators and attacks but also across unseen human texts, maintaining an acceptable FPR across different types of texts. We test for this in table <ref>, verifying the generalization from the CNN News dataset to the BBC News Xsum dataset, which are closely related and most likely differ only in US vs. British English usage. Despite such close relatedness, we observe that Electra_RR fails dramatically in human-text generalization, along with Electra_Mistral, indicating that neither of our trained detectors can be deployed to the real world.
Please note that GPTZero FPR has been forced to 5% here. With default configurations, the accuracy is over 99.7% on both datasets.
Unfortunately, such tests for out-of-distribution human texts are all but absent from LLM detector benchmarks, with only anecdotal reports of real-world performance failure <cit.> raising awareness of this failure mode. Despite prior works showing generalization of LLM detectors across languages <cit.> and an effective pretrained model underlying our classifier, human-text generalization cannot be assumed and must be tested in any LLM detector benchmark for it to be useful.
§ CONCLUSION
In this paper, we developed a rigorous framework to benchmark LLM detectors in a way specific to a domain and threat model. By applying it to a setting relevant to LLM-augmented information operations, we show that current LLM detectors are not ready for real-world use due to a combination of susceptibility to trivial evasion attacks, notably generation temperature increase, and potentially unacceptable FPRs in practice, consistently with noted issues in other domains <cit.>. Overall, we believe our results argue in favor of alternative approaches in that context, e.g., coordinated activity patterns search <cit.>.
In the process, we noted several methodological issues with existing LLM detector benchmarks, notably missing evaluation of FPR on out-of-distribution human texts, incomplete coverage of evasion attacks, and the failure of large-scale benchmarks to predict detector performance in even closely related settings. We believe this argues in favor of switching away from large-scale static benchmarks to dynamic benchmarks that can be run against a specific domain and threat model. To allow that, we provide the dynamic benchmarking suite we developed to the scientific community under the MIT license (<https://github.com/Reliable-Information-Lab-HEVS/dynamic_llm_detector_benchmark>), which we designed to be fully domain and language transferable for non-expert teams, and expandable with new attacks for more technical teams.
§ LIMITATIONS
While we focus on a setting highly relevant to in-the-wild LLM text recognition, our work has several limitations.
First, we focus on a short-text setting and use a well-known NLP dataset as the reference human text. Due to NLP dataset reuse for model training, the performance on the generative task is likely to be higher than in actual information operations. Similarly, social media accounts without posting history are rare, and puppet accounts are likely to have similarly LLM-generated past posts. Such longer texts could increase confidence that an account is unauthentic. However, misappropriation of social media accounts for information operations is common, and such unauthentic post history is not guaranteed.
Second, we have not investigated text watermarking approaches. If future on-device LLM releases more tightly integrate models and supporting code and enable watermarking, this could potentially improve the detectability of such posts. While not impossible, recent work by <cit.> suggests that reformulation attacks remain effective in this setting, warranting caution as to their effectiveness.
Third, we only tested a limited number of zero-shot detectors. While it is possible that untested detectors could clear both adversarial evasion and generalization tests, it is unlikely. <cit.> and <cit.> performed extensive benchmarking of recent zero-shot detectors, and both Fast-DetectGPT and GPTZero closely match other solutions across a large palette of tests, including adversarial perturbations.
Fourth, we assume the attacker has limited capabilities, notably lacking adversarial evasive model fine-tuning. This assumption is somewhat brittle, given that parameter-efficient fine-tuning (PEFT) requires minimal resource overhead compared to traditional fine-tuning. PEFT can be performed on quantized models from relatively small datasets and on hardware adapted for quantized inference, putting it within the reach of the moderately sophisticated attackers we consider. While this opens a new type of attack, current LLM detectors are easy enough to fool even without it.
Finally, our work focused on the English language, for which extensive resources are available and are leveraged by LLM developers. While our results could generalize to high-resource Romance languages such as French or Spanish, LLM performance in other languages, especially low-resource ones, is unlikely to induce native speakers into confusion, making LLM disinformation detection a significantly less salient problem there as of now.
§ ETHICS STATEMENT
While the issue of deep neural disinformation is critical and a central concern for malicious misuse of LLMs, it has not prevented the release of powerful SotA models to the general public that can readily assist in such operations. Here, we do not present any novel attacks but demonstrate that existing ones are sufficient to evade SotA detectors. As such, we do not expect novel risks to arise from this work, but rather hope to improve general awareness of common tools' limitations and contribute to mitigating known risks arising from LLMs.
GPTZero is a security solution numerous entities use to detect LLM-generated text in potentially safety-critical contexts. Given that in this work, we found several highly effective attacks against it, we performed a coordinated vulnerability disclosure with them.
We used 1 A100 GPU on a local cluster to generate the datasets, at 30 minutes per dataset. We used 1 A100 GPU to train the models and perform detection using Fast-DetectGPT. For testing, we only used V100 GPUs thanks to our focus on smaller models. In total, we used the local cluster for 30 GPU-hours on the A100 and 20 GPU-hours on the V100 (numbers including hyperparameter search and correctness testing), leading to total emissions of 5.4 kg of CO2. No crowdsourced labor was used in this work. LLM assistants were used for minor stylistic and grammatical corrections of the final manuscript, consistent with ACL recommendations. GitHub Copilot has been used to assist in coding with auto-completion, but no script or algorithm has been fully generated with it.
§ DATASETS AND DATA GENERATION DETAILS
§.§ Datasets List
We created 6 datasets with a balanced number of fake and true samples (1 dataset per generator). See table <ref> for the list of generators used for the datasets and the precise size of the datasets. The datasets are split with 80% for training, 10% for eval (used to choose the best model to save), and 10% for testing. We filtered out samples shorter than 500 characters before trimming, so that all retained samples are exactly 500 characters long. A small improvement here would be to use the "min_new_tokens" generation parameter, so that we would not need to filter out shorter samples; we only used this parameter to generate the adversarial datasets. The samples of the true articles are the same across all datasets except for the discarded samples.
There is also a fourth auto-complete-model dataset, "round-robin," which is a mixture of 2500 samples from each of the other auto-complete models' (Phi-2, Gemma, and Mistral) datasets.
For the adversarial datasets, we reuse these same datasets, but we generate the fake samples with a different prompt or generation parameter (see appendix <ref> for the adversarial prompts). In total, there is one adversarial version of each dataset per attack (i.e., the number of adversarial datasets equals the number of datasets times the number of attacks).
§.§ Prompts for Generating Data
For the auto-complete models, we only use the prefix as the prompt. For chat models, we use the prompt in table <ref>. Also, we force the first tokens of the output to be the prefix for that particular sample. This prevents the chat model from generating typical assistant-style opening messages that would be too easy for the detectors to spot (in practice, an attacker could also remove them easily). For Gemma-2B-it (Gemma chat), there is no system prompt in the chat template; we simply dropped the system prompt in that case.
§.§ Dataset Example
Below is an example of a true and an LLM-generated news article from the Zephyr dataset. As explained in the methodology, samples are grouped into pairs of true and fake samples (ordered randomly within the pair). The true and fake samples in a pair start with the same 10-word prefix.
True sample:
"Former Vice President Dick Cheney on Sunday defended the Bush administration's economic record, the invasion of Iraq and the treatment of suspected terrorists, warning that reversing its anti-terrorism policies endangers Americans. "We've accomplished nearly everything we set out to do," ex-Vice President Dick Cheney says Sunday about Iraq. In a wide-ranging interview with CNN's "State of the Union," Cheney said the harsh interrogations of suspects and the use of warrantless electronic surveilla"
source: <https://edition.cnn.com/2009/POLITICS/03/15/cheney.interview/>
Fake sample:
"Former Vice President Dick Cheney on Sunday defended the Bush administration's use of enhanced interrogation techniques, commonly referred to as torture, in the aftermath of the September 11 attacks. Speaking at the Reagan Library's National Security Forum in Simi Valley, California, Cheney argued that the use of waterboarding and other techniques were crucial in obtaining valuable intelligence and preventing further attacks on American soil. "We did learn a lot," Cheney said. "We learned, for e"
source: <https://huggingface.co./HuggingFaceH4/zephyr-7b-beta>
§.§ Attack Prompts
The idea behind the attacks we crafted to generate news articles evading the detectors is to generate news articles with a distribution of words looking more like the CNN news articles that were used to train the detectors. This is particularly true for the "news prompt" attack that asks the model to generate a CNN news-looking article. The same applies to the "example prompt," which uses in-context learning to generate a more CNN news-looking article. The news prompt can be found below in <ref> and the example prompt in <ref>.
For the "tweet prompt," the idea is to generate text with a different distribution than news articles, which might confuse detectors trained on news articles. The "paraphrasing prompt" has a similar effect of modifying the distribution of words to make it more diverse than the original output. The tweet prompt can be found in <ref> and the other paraphrasing prompt in <ref>.
§ TRAINING DETAILS
§.§ Hyperparameters for Training
We present in table <ref> the hyperparameters used for training. The training was done either on the Nvidia A100 40GB or on the Nvidia V100 16GB, depending on the batch size, model size, and training method. All trainings were done with 1 epoch. While we used only the models trained with full finetuning in the main part of the paper, we provide some results with different training methods in appendix <ref>. The hyperparameters were chosen according to those used in the original papers of the different models, with some adaptation according to the size of the models, and by verifying that the training converges. A linear schedule with a 10% warmup was applied to the learning rate.
§ FULL HEATMAP FOR TRAINED DETECTORS
You can find in figure <ref> the TPR obtained for all trained detectors when testing them on all the datasets (it is a more complete version of figure <ref>). This plot uses the same metric as the main paper plots, i.e., thresholds are chosen on the eval set to target an FPR of at most 5%.
§ ROC AUC SCORE WITH DIFFERENT TRAINING METHODS
While we mainly tested trained detectors with full finetuning, we also tested and compared the results using different training methods. First, we tested freezing the LLM detector model's parameters and finetuning only the classifier head (freeze base). Second, we tested finetuning the base LLM detector with the adapters PEFT method (see <cit.>). The results in this section come from the same experiment as presented in appendix <ref> and in figure <ref>. However, we use ROC-AUC as a threshold-independent metric, which avoids finding a suitable threshold for each case. We then compare the ROC-AUC scores obtained with the freeze-base (figure <ref>) and adapters (figure <ref>) methods to the full-finetuning ROC-AUC (figure <ref>).
We find that training with the adapters method yields results similar to training with full finetuning, as expected, offering an interesting alternative for training detectors.
§.§ Training with Finetuning Only the Classification Head
See figure <ref>.
§.§ Training Adapter Method
See figure <ref>.
§.§ Training with Full Finetuning
See figure <ref>.
§ CHECKING DETECTOR DEGRADATION ON MLM TASK
To compare the effect of different finetuning methods on the trained detectors, we evaluate the loss on the MLM (Masked Language Model) task before and during training. In particular, we use samples from <https://huggingface.co./datasets/Polyglot-or-Not/Fact-Completion?row=0>, a dataset of knowledge facts where the task is to fill the gap. This enables us to test whether the base models "forgot facts" during finetuning. This is shown in figure <ref> with the degradation loss, which tracks how the loss on this MLM task evolves during training. We also provide a baseline where the gap is filled by a model without pre-trained weights. As shown in the figure, the degradation increases at the beginning of training and then stays almost constant. Comparing with figure <ref>, the eval loss stops improving at the same moment the degradation loss stops increasing, showing signs of convergence. We also noticed that the degradation was slightly lower when finetuning the adapters using the PEFT method. However, the degradation difference between the two training methods might be higher with different tasks and training hyperparameters.
§ FULL ATTACK TESTING RESULTS
We provide here the same results as in <ref>, but with data also generated by Gemma-2b-it (Gemma-Chat) and Llama-3 Instruct (7B). As shown in table <ref>, attacks using Gemma-Chat are less effective. We find that the attacks are often on par with their effectiveness using Zephyr, sometimes above and sometimes below (in terms of TPR difference).
§ MODELS, DATASETS AND THIRD-PARTY CODE
|
http://arxiv.org/abs/2409.03175v1 | 20240905020830 | Data-based approaches to learning and control by similarity between heterogeneous systems | [
"Chenchao Wang",
"Deyuan Meng"
] | eess.SY | [
"eess.SY",
"cs.SY"
] |
[footnoteinfo]The material in this paper was not presented at any conference.
Affi1,Affi3]Chenchao [email protected],
Affi1,Affi2,Affi3]Deyuan [email protected]
[Affi1]School of Automation Science and Electrical Engineering, Beihang University (BUAA), Beijing 100191, PR China
[Affi2]State Key Laboratory of CNS/ATM, Beijing 100191, PR China
[Affi3]The Seventh Research Division, Beihang University (BUAA), Beijing 100191, PR China
Similarity; similarity indexes; admissible behavior; sampled data; similarity-based learning.
§ ABSTRACT
This paper proposes basic definitions of similarity and similarity indexes between admissible behaviors of heterogeneous host and guest systems and further presents a similarity-based learning control framework by exploiting offline sampled data. By exploring helpful geometric properties of the admissible behavior and decomposing it into the subspace and offset components, the similarity indexes between two admissible behaviors are defined as the principal angles between their corresponding subspace components. By reconstructing the admissible behaviors leveraging sampled data, an efficient strategy for calculating the similarity indexes is developed, based on which a similarity-based learning control framework is proposed. It is shown that the host system can directly accomplish the same control tasks by utilizing the successful experience from the guest system, without having to undergo the trial-and-error process.
§ INTRODUCTION
Learning-based control, as one of the most promising fields within the control community, has attracted significant attention and popularity. Learning-based control takes direct inspiration from the human learning process (<cit.>). When individuals endeavor to acquire new skills, they repetitively engage in specific tasks and gather experience from past failures, ensuring their ability to better accomplish the same tasks in the future.
In a similar manner, dynamical systems can also recursively benefit from the past and correct control errors, with a guarantee of enhanced control performance (<cit.>). Such learning-based control mechanisms, which learn from one's own past experience, have been extensively investigated, and some well-established control frameworks have been presented (see, e.g., <cit.>). All of the aforementioned control frameworks, which either design controllers or adjust adaptive parameters based on past experience to rectify control errors, have found widespread and successful applications in real-world industrial systems (see, e.g., <cit.>).
Another characteristic of the human learning process is its inherently strong interactivity, based on which a novice can efficiently acquire new skills by learning from the advanced experience of skilled experts. Just as humans inevitably need to interact with others to achieve common goals, the collaborative learning of multiple dynamical systems to achieve a unified objective is an essential and extensively discussed topic (see, e.g., <cit.>). A simple example is the leader-follower formation problem in multi-agent systems (<cit.>), which has played a significant role in numerous engineering applications such as unmanned aerial vehicles (<cit.>), autonomous mobile robots (<cit.>), and high-speed trains (<cit.>). To ensure the achievement of the unified objective among multiple systems, the idea of leveraging experience generated by other systems is extensively adopted, and one remarkable milestone along this line is distributed learning control (<cit.>). Nevertheless, the existing learning-based control strategies somewhat exhibit weaknesses in the following two aspects:
W1) Existing learning-based control strategies simply collect experience from neighbors based on specific communication topology, failing to quantitively assess that which system's experience is more beneficial;
W2) Existing learning-based control strategies directly employ the (weighted) relative information among systems to guarantee consensus, without fully exploiting the potential of the experience from other systems.
Therefore, in scenarios where the host system is equipped with multiple sources of external experience generated by guest systems, it is meaningful and urgent to develop an innovative learning-based control framework for the host system. A practical scenario is where the leader vehicle can proceed along a specified path through N iterations of trial-and-error (<cit.>). We need to propose a learning-based control framework that allows the follower vehicle to directly follow the leader's path without undergoing the trial-and-error process. This framework is required to address both shortcomings mentioned earlier in existing learning-based control strategies.
Simultaneously, with the advancements in computer science and storage technology, employing the sampled data generated during system operation for control objectives has become increasingly reliable and convenient. There have been several results that presented learning-based control strategies within a data-driven framework (see, e.g., <cit.>, <cit.>). However, these results pay little attention to exploiting the successful experience of guest systems.
Motivated by aforementioned discussions, this paper is devoted to proposing the definitions of similarity and similarity indexes to address W1). Afterward, by exploiting the sampled data, a similarity-based learning control framework is developed to address W2), which focuses on how to efficiently learn from the successful experience of guest systems even in the absence of model information. The mechanism of the similarity-based learning control framework is depicted in Fig. <ref>.
Main contributions of this paper can be summarized as follows.
C1) We innovatively propose the basic definitions of similarity and similarity indexes between two admissible behaviors, which qualitatively and quantitatively measure the benefits of the guest system's successful experience to the control of the host system, respectively;
C2) By designing offline input-output testing principles for linear time-varying (LTV) systems and exploiting the collected I/O data, we develop a data-based criterion for verifying the similarity and present an efficient data-based strategy for calculating the similarity indexes;
C3) By leveraging the calculated similarity indexes and exploiting helpful projection techniques, we establish a similarity-based learning control framework from the offline sampled data. As a result, this framework allows the host system to directly leverage the successful experience of the guest system to accomplish the unified tasks, without resorting to other learning-based control strategies.
The rest of this paper is organized as follows. We present the preliminaries on the admissible behavior of LTV systems and formulate the similarity-based learning control problems in Section <ref>. In Section <ref>, we design the offline input-output test principles and reconstruct the admissible behaviors from the sampled data. Afterward, we introduce the definitions of similarity and similarity indexes, and develop a data-based criterion for verifying the similarity and a data-based strategy for efficiently calculating the similarity indexes in Section <ref>. In Section <ref>, by exploiting the calculated similarity indexes and projection techniques, the similarity-based learning control framework is presented based on the sampled data. Finally, Section <ref> provides illustrative simulations, and Section <ref> summarizes the contributions of this paper.
Notations: Let ℤ_N={0,1,⋯,N} and ℤ_+={0,1,2,⋯}. Let ℝ be the set of all real numbers, and ℝ^n the set of all n-dimensional real vectors whose entries lie in ℝ. For any matrix A, its transpose and kernel space are denoted as A^ T and ker(A), respectively. The linear space spanned by the columns of A is denoted as span(A). For arbitrary vectors a,b∈ℝ^n, the standard inner product ⟨ a,b⟩ refers to a^ Tb, and the induced norm is correspondingly defined as ‖ a‖=√(⟨ a,a ⟩). The identity and null matrices with appropriate dimensions are denoted as I and 0, respectively. Given s_1,s_2,⋯,s_n∈ℝ, the symbol diag(s_1,s_2,⋯,s_n) represents the diagonal matrix whose diagonal entries are s_1,s_2,⋯,s_n.
§ ADMISSIBLE BEHAVIOR AND PROBLEM STATEMENT
The preliminaries of the admissible behavior are first introduced. We consider two unknown heterogeneous LTV systems whose dynamics within the time duration 𝕋 are represented as
Σ_i,𝕋:  x_i(t+1) = A_i(t)x_i(t) + B_i(t)u_i(t),
        y_i(t) = C_i(t)x_i(t) + D_i(t)u_i(t),    t∈𝕋, i∈{1,2}.
It is worth mentioning that the results proposed in this paper can be implemented equally to the scenarios where D_i(t)≡0 for all t∈𝕋, and the introduction of D_i(t) is solely for a generalized expression.
Here, the subscripts i=1 and i=2 refer to the host system and guest system, respectively, and the host system Σ_1,𝕋 can acquire experience from the guest system Σ_2,𝕋. Without loss of generality, the time duration is assumed to be 𝕋:=ℤ_T-1. The input and output are denoted as u_i(t)∈ℝ^n_u and y_i(t)∈ℝ^n_y, respectively. The internal state with unknown dimension is denoted as x_i(t)∈ℝ^∙, and the unknown time-varying model matrices with appropriate dimension is represented by {A_i(t),B_i(t),C_i(t),D_i(t)}. In order to investigate the input-output relationship over the entire time duration 𝕋, the following supervectors
𝐮_i =[ u_i^ T(0), u_i^ T(1), ⋯, u_i^ T(T-1) ]^ T,
𝐲_i =[ y_i^ T(0), y_i^ T(1), ⋯, y_i^ T(T-1) ]^ T,
𝐱_i =[ x_i^ T(0), x_i^ T(1), ⋯, x_i^ T(T-1) ]^ T
are introduced. For a vector w_i=col(𝐮_i,𝐲_i)∈ℝ^n_wT where n_w=n_u+n_y, if there exists some (may be non-unique) state supervector 𝐱_i such that (𝐮_i,𝐲_i,𝐱_i) satisfies (<ref>), then w_i is called as a T-length trajectory of Σ_i,𝕋. To capture the input-output transfer characteristics, the behavior of Σ_i,𝕋, denoted by ℬ_i, is defined as the set involving all T-length trajectories
ℬ_i={w_i∈ℝ^n_wT|∃𝐱_i such that (𝐮_i,𝐲_i,𝐱_i) satisfies (<ref>).}.
It is worth mentioning that the above definition only focuses on the input-output transfer characteristics, but neglects the initially stored energy in the system Σ_i,𝕋, which can be characterized by x_i(0), and its influence on the system response. Without loss of generality, in this paper, we assume that the initial state of Σ_i,𝕋 is x_i(0)=x_i. In view of this, we introduce a class of T-length admissible trajectories, denoted by w_i,x_i, which refer to those T-length trajectories that start from the initial state x_i(0)=x_i. Correspondingly, the admissible behavior is defined as follows.
For the LTV system Σ_i,𝕋, its admissible behavior under the initial state x_i(0)=x_i, denoted by ℬ_i,x_i, is defined as the set involving all T-length admissible trajectories, i.e.,
ℬ_i,x_i={w_i,x_i∈ℝ^n_wT|w_i,x_i∈ℬ_i and x_i(0)=x_i.}
From the above definition, the admissible behavior is essentially a subset of the behavior, i.e., ℬ_i,x_i⊆ℬ_i, since it is subject to extra constraints with respect to the initial energy. The admissible behavior also has specific engineering implications, since practical systems always commence with some initially stored energy; a direct example is an RLC circuit, where the initial charge of the capacitor has an impact on the system response (see, e.g., <cit.>).
Based on the aforementioned preliminaries, we formulate the to-be-addressed problems in this paper as follows.
For the unknown host system Σ_1,𝕋 with initial state x_1(0)=x_1 and unknown guest system Σ_2,𝕋 with initial state x_2(0)=x_2, let their admissible behaviors be denoted as ℬ_1,x_1 and ℬ_2,x_2, respectively. This paper focuses on dealing with the following problems:
P1) Appropriate offline input-output test principles need to be designed, under which the admissible behaviors ℬ_1,x_1 and ℬ_2,x_2 can be reconstructed by exploiting the sampled data;
P2) The definitions of similarity and similarity indexes between two admissible behaviors ℬ_1,x_1 and ℬ_2,x_2 need to be introduced. Moreover, a data-based criterion for verifying the similarity and a data-based strategy for calculating the similarity indexes need to be developed;
P3) Suppose that the guest system Σ_2,𝕋 has accomplished its task by leveraging some powerful learning-based control strategies and achieved the desired trajectory w_g∈ℬ_2,x_2. A similarity-based learning control framework for the host system Σ_1,𝕋 needs to be proposed such that it can accomplish the same control task by exploiting the successful experience of Σ_2,𝕋 and the sampled data. As a result, we will find a solution w_h∈ℬ_1,x_1 such that the difference ‖ w_g-w_h‖ is minimized.
§ DATA-BASED VERIFICATION FOR SIMILARITY AND SIMILARITY INDEXES
§.§ Data-based reconstruction for admissible behaviors
Compared to the system matrices {A_i(t),B_i(t),C_i(t),D_i(t)}, the admissible behavior ℬ_i,x_i accurately captures the input-output transfer characteristics of the system Σ_i,𝕋, without involving the non-unique internal states.
Owing to the absence of model knowledge, the admissible behaviors ℬ_1,x_1 and ℬ_2,x_2 need to be identified by exploiting the sampled data. Before designing appropriate offline input-output test principles and collecting offline I/O data, several helpful geometric properties of the admissible behavior are explored.
For the LTV system Σ_i,𝕋, its admissible behavior ℬ_i,x_i constitutes an affine set. Moreover, if the initial state is x_i=0, the admissible behavior ℬ_i,0 is a subspace.
In order to investigate the relationship among the input, initial state, and output over the entire time duration 𝕋, we introduce the input-output transfer matrix G_i and the initial state-output transfer matrix L_i, both of which can be readily constructed by employing the model matrices {A_i(t),B_i(t),C_i(t),D_i(t)} (see, e.g., <cit.>). Then, by leveraging the supervectors in (<ref>), it is derived that
𝐲_i=G_i𝐮_i+L_ix_i(0).
Under the given initial state x_i(0)=x_i, any admissible trajectory of Σ_i,𝕋 must be the solution of the non-homogeneous linear algebraic equation (LAE) described by
[ -G_i, I ]w_i,x_i=L_ix_i.
By exploiting the properties of solutions to non-homogeneous LAEs, it directly follows that ℬ_i,x_i constitutes an affine set. Particularly, once the initial state is set as x_i=0, the admissible trajectory w_i,0 must lie within ker([ -G_i, I ]), and the admissible behavior ℬ_i,0 is exactly the subspace ker([ -G_i, I ]) in this case.
Through exploring the geometric properties of the admissible behavior, Lemma <ref> reveals a helpful fact: the affine combination of two admissible trajectories remains an admissible trajectory under the same initial state. To be specific, for two admissible trajectories w'_i,x_i∈ℬ_i,x_i and w”_i,x_i∈ℬ_i,x_i, their affine combination, defined as
α w'_i,x_i+(1-α)w”_i,x_i, ∀α∈ℝ
is still an admissible trajectory in ℬ_i,x_i. Inspired by this fact, if a sufficient number of representative admissible trajectories are collected via I/O tests, then the entire admissible behavior ℬ_i,x_i can be reconstructed through affine combinations of these admissible trajectories. This provides theoretical support for recovering the admissible behavior based on sampled data.
As emphasized in Remark <ref>, the admissible behaviors can be recovered via the affine combination of a sufficient number of representative admissible trajectories even in the absence of model knowledge. Therefore, it is necessary to design appropriate offline input-output test principles to ensure the collection of the required admissible trajectories. The following strategy provides an alternative data-based representation of the admissible behavior. For the systems Σ_i,𝕋 with i∈{1,2}, at least n_uT+1 I/O tests need to be conducted. In each I/O test, the system Σ_i,𝕋 starts from an unknown but fixed initial state x_i, and the k-th test input over the entire time duration 𝕋 is denoted as 𝐮_i^k∈ℝ^n_uT. Correspondingly, the k-th test output over 𝕋 is denoted as 𝐲_i^k∈ℝ^n_yT, and all test input/output data are collected as
U^Test_i =[ 𝐮^0_i, 𝐮^1_i, ⋯, 𝐮^n_uT_i ]∈ℝ^n_uT×(n_uT+1),
Y^Test_i =[ 𝐲^0_i, 𝐲^1_i, ⋯, 𝐲^n_uT_i ]∈ℝ^n_yT×(n_uT+1).
To obtain a sufficient number of representative admissible trajectories, the test inputs need to be specifically designed.
For the LTV system Σ_i,𝕋, the following offline test principles need to be conducted:
* In the initial I/O test, the test input is designed as
𝐮^0_i=[ 0^ T_n_u, 0^ T_n_u, ⋯, 0^ T_n_u ]^ T∈ℝ^n_uT;
* In the later n_uT I/O tests, the test inputs are designed to satisfy the following rank condition
rank([ 𝐮^1_i, 𝐮^2_i, ⋯, 𝐮^n_uT_i ])=n_uT.
By leveraging the sampled data collected under the above test principles, we can construct a data-based representation to reconstruct the admissible behavior.
For the LTV system Σ_i,𝕋, let the sampled I/O data (U_i^Test,Y_i^Test) satisfy the offline test principles (<ref>) and (<ref>). A vector w_i,x_i=col(𝐮_i,𝐲_i)∈ℬ_i,x_i if and only if there exists some g_i∈ℝ^n_uT+1 such that
[ 1^ T_n_uT+1; U_i^Test; Y_i^Test ]g_i=[ 1; 𝐮_i; 𝐲_i ].
Sufficiency: From the designed offline I/O test principles (<ref>) and (<ref>), it can be concluded that col(𝐮^j_i,𝐲^j_i)∈ℬ_i,x_i holds for all j∈ℤ_n_uT and i∈{1,2}, i.e., each test trajectory is an admissible trajectory. The first equation in (<ref>) indicates that [ (U^Test_i)^ T, (Y^Test_i)^ T ]^ Tg_i represents an affine combination of the columns of the data matrix [ (U^Test_i)^ T, (Y^Test_i)^ T ]^ T. From Lemma <ref>, the affine combination of admissible trajectories remains an admissible trajectory. Therefore, w_i,x_i=col(𝐮_i,𝐲_i)∈ℬ_i,x_i if (<ref>) holds.
Necessity: From the designed offline test principle (<ref>), the test inputs {𝐮^1_i, 𝐮^2_i, ⋯, 𝐮^n_uT_i } form a basis of ℝ^n_uT. This implies that, for an arbitrary input 𝐮_i∈ℝ^n_uT, there must exist a series of real numbers g_i,k∈ℝ, k∈ℤ_n_uT such that
𝐮_i=∑_k=1^n_uTg_i,k𝐮^k_i+g_i,0𝐮_i^0
where 𝐮^0_i=0_n_uT is defined in test principle (<ref>).
By designing g_i,0=1-∑_k=1^n_uTg_i,k and leveraging Lemma <ref>, it directly follows that the corresponding output satisfies
𝐲_i =∑_k=1^n_uT(g_i,k𝐲^k_i-g_i,kL_ix_i)+g_i,0𝐲_i^0-g_i,0L_ix_i+L_ix_i
=∑_k=1^n_uTg_i,k𝐲^k_i+g_i,0𝐲_i^0.
Therefore, for arbitrary w_i,x_i∈ℬ_i,x_i, there always exists a vector g_i=[ g_i,0, g_i,1, ⋯, g_i,n_uT ]^ T such that (<ref>) holds.
Following the established data-based representation (<ref>) for the admissible behavior, some helpful geometric properties of the admissible behavior can be further explored. Let w_i^k=col(𝐮_i^k,𝐲_i^k) where k∈ℤ_n_uT be the k-th test admissible trajectory. By leveraging the offline I/O data, the admissible behavior can be decomposed into the sum of subspace and offset components as
ℬ_i,x_i =𝒲_i+w_i^0,
𝒲_i =span(w_i^1-w_i^0,w_i^2-w_i^0,⋯,w_i^n_uT-w_i^0).
From the offline test principles (<ref>) and (<ref>), 𝒲_i is a subspace in Euclidean space ℝ^n_wT and is of dimension n_uT, which is exactly the number of free input channels (see, e.g., <cit.>).
For simplicity, we define
H_i=[ α_i^1, α_i^2, ⋯, α_i^n_uT ]
where the vectors {α_i^1, α_i^2, ⋯, α_i^n_uT} form a set of orthonormal basis vectors of 𝒲_i, such that the subspace component can be simply expressed as 𝒲_i=span(H_i). Additionally, H_i can be readily obtained through the Gram–Schmidt process from the sampled I/O data (U_i^Test,Y_i^Test).
To further explore the geometric properties between two affine sets, we present the definition of principal angles between two subspaces.
(<cit.>) For two subspaces ℛ_1⊂ℝ^n and ℛ_2⊂ℝ^n with dim(ℛ_1)=dim(ℛ_2)=m, m<n, the principal angles
Θ(ℛ_1,ℛ_2)=[ θ_1,θ_2,⋯,θ_m ], θ_k∈[0,π/2], k∈ℤ_m\{0}
between ℛ_1 and ℛ_2 are recursively defined as
s_k=cos(θ_k)=max_x∈ℛ_1max_y∈ℛ_2⟨ x,y⟩=⟨ x_k,y_k⟩
subject to
‖ x‖=‖ y‖=1, ⟨ x,x_i⟩=0, ⟨ y,y_i⟩=0, i∈ℤ_k-1\{0}
Moreover, the vectors {x_1,x_2,⋯,x_m} and {y_1,y_2,⋯,y_m} are called the principal vectors associated with ℛ_1 and ℛ_2.
§.§ Similarity and similarity indexes
Regarding the host system Σ_1,𝕋 with x_1(0)=x_1 and the guest system Σ_2,𝕋 with x_2(0)=x_2, this subsection aims at introducing the definitions of similarity and similarity indexes. Moreover, by leveraging the collected offline I/O data, a data-based criterion for verifying the similarity and a data-based strategy for calculating the similarity indexes are developed. The definition of similarity between ℬ_1,x_1 and ℬ_2,x_2 is presented as follows.
The admissible behaviors ℬ_1,x_1 and ℬ_2,x_2 are said to be similar if ℬ_1,x_1∩ℬ_2,x_2≠∅.
Even if ℬ_1,x_1∩ℬ_2,x_2≠∅ and ℬ_1,x_1∩ℬ_3,x_3≠∅, it does not necessarily follow that ℬ_2,x_2∩ℬ_3,x_3≠∅. From Definition <ref>, when two admissible behaviors ℬ_1,x_1 and ℬ_2,x_2 are similar, there always exist some common admissible trajectories w_com∈ℬ_1,x_1∩ℬ_2,x_2 and a common behavior ℬ_com involving all common admissible trajectories. Taking into account the fact that the admissible behavior can be decomposed as ℬ_i,x_i=span(H_i)+w_i^0, a data-based criterion for similarity can be readily derived as follows.
Let the sampled I/O data (U_i^Test,Y_i^Test) satisfy the offline test principles (<ref>) and (<ref>), and let H_i be constructed as in (<ref>). Then admissible behaviors ℬ_1,x_1 and ℬ_2,x_2 are similar if and only if there exist vectors l_1∈ℝ^n_uT and l_2∈ℝ^n_uT such that
[ H_1 H_2 ][ l_1; l_2 ]=w_2^0-w_1^0.
Moreover, by solving the above non-homogeneous LAE, the common behavior can be expressed as
ℬ_com={w_com∈ℝ^n_wT|w_com=H_1l_1+w_1^0.}.
A consequence of Lemma <ref> and Definition <ref>.
From Definition <ref> and Lemma <ref>, it is observed that similarity is a rather loose concept, serving only as a qualitative assessment indicator. However, it can assist in addressing Problem P3) in some special cases. To be specific, once the learned trajectory of the guest system satisfies w_g∈ℬ_com, the host system can directly adopt the successful experience w_g to accomplish the same tasks. In this situation, the difference w_g-w_h is equal to zero.
Building upon the definition of similarity, in order to further quantitatively assess the benefits of the successful experience of the guest system on the control of the host system, the definition of similarity indexes needs to be proposed. Since the admissible behavior can be decomposed into a sum of subspace and offset components, from a geometric perspective, the principal angles between the two subspaces 𝒲_1 and 𝒲_2 can serve as a powerful tool. Based on Definition <ref>, the similarity indexes between two admissible behaviors are defined as follows.
For similar admissible behaviors ℬ_1,x_1 and ℬ_2,x_2, let them be decomposed as (<ref>), the similarity indexes between ℬ_1,x_1 and ℬ_2,x_2, denoted by SI(ℬ_1,x_1,ℬ_2,x_2), refer to the cosine of the principal angles between 𝒲_1 and 𝒲_2, that is,
SI(ℬ_1,x_1,ℬ_2,x_2):=cosΘ(𝒲_1,𝒲_2).
The similarity indexes SI(·,·) are essentially a function of two intersecting affine sets, and SI has the following properties:
P1) SI(ℬ_1,x_1,ℬ_2,x_2)=SI(ℬ_2,x_2,ℬ_1,x_1), ∀ℬ_1,x_1, ℬ_2,x_2;
P2) SI(ℬ_1,x_1,ℬ_1,x_1)=1_n_uT^ T;
P3) Even if SI(ℬ_1,x_1,ℬ_2,x_2)=SI(ℬ_1,x_1,ℬ_3,x_3), it does not necessarily follow that ℬ_2,x_2=ℬ_3,x_3.
By leveraging the offline test principles (<ref>) and (<ref>), the decomposition in (<ref>) can be constructed from sampled data, ensuring that the calculation of the similarity indexes is model-independent. Additionally, the similarity indexes between two admissible behaviors are independent of the offset components w_i^0, which can be interpreted from a geometric perspective. Since two similar admissible behaviors are essentially intersecting affine hyperplanes in Euclidean space, the offset components only cause translations of the corresponding affine hyperplanes and do not affect their intersection or the principal angles.
Compared to the concept of similarity proposed in Definition <ref>, the similarity indexes are a quantitative assessment indicator. Specifically, the similarity indexes geometrically reflect the angles between the two affine hyperplanes determined by the admissible behaviors. Similarity indexes closer to 1^ T_n_uT indicate that the two admissible behaviors ℬ_1,x_1 and ℬ_2,x_2 are more similar.
Although Definition <ref> presents the concept of the similarity indexes, such a definition is inefficient for calculating them. In order to develop an efficient calculation strategy, we define the SVD of the matrix H_1^ TH_2 as
H_1^ TH_2=UDV^ T
where
D=diag(s_1,s_2,⋯,s_n_uT), s_1≥ s_2≥⋯≥ s_n_uT>0.
Through the designed offline I/O test principles (<ref>) and (<ref>), a data-based strategy can be proposed to efficiently calculate the similarity indexes, which is demonstrated in the following theorem.
For admissible behaviors ℬ_1,x_1 and ℬ_2,x_2, let
* The sampled I/O data (U_i^Test,Y_i^Test) satisfy the offline test principles (<ref>) and (<ref>);
* The matrix H_i be constructed as in (<ref>);
* The SVD of H_1^ TH_2 be given as (<ref>) and (<ref>).
If (<ref>) is solvable, then the similarity indexes between ℬ_1,x_1 and ℬ_2,x_2 can be calculated as
SI(ℬ_1,x_1,ℬ_2,x_2)=[ s_1, s_2, ⋯, s_n_uT ].
Moreover, the principal vectors associated with 𝒲_1 and 𝒲_2 are given by H_1U and H_2V.
Since the LAE (<ref>) is solvable, the admissible behaviors ℬ_1,x_1 and ℬ_2,x_2 are similar, from which the similarity indexes can be further calculated. Following the offline test principles (<ref>) and (<ref>) and the matrix H_i in (<ref>), it holds that ℬ_i,x_i=𝒲_i+w_i^0 and 𝒲_i=span(H_i). According to the definitions of singular values and singular vectors, the k-th biggest singular value of the matrix H_1^ TH_2 can be expressed as
s_k=max_‖ l‖=‖ v‖=1 l^ TH_1^ TH_2v=l_k^ TH_1^ TH_2v_k, k∈ℤ_n_uT\{0}
subject to
⟨ l,l_i⟩=⟨ v,v_i⟩=0, i∈ℤ_k-1\{0}
where l_i∈ℝ^n_uT and v_i∈ℝ^n_uT. From the data-based construction of H_i in (<ref>), the matrices H_1 and H_2 both have orthonormal columns. By introducing the following coordinate transformation
x_i =H_1l_i∈𝒲_1, x=H_1l∈𝒲_1,
y_i =H_2v_i∈𝒲_2, y=H_2v∈𝒲_2, i∈ℤ_k-1\{0}
it is directly concluded that
‖ x‖ =‖ H_1l‖=‖ l‖=1,
‖ y‖ =‖ H_2v‖=‖ v‖=1
and
⟨ x,x_i⟩ =⟨ H_1l,H_1l_i⟩=⟨ l,l_i⟩=0,
⟨ y,y_i⟩ =⟨ H_2v,H_2v_i⟩=⟨ v,v_i⟩=0, i∈ℤ_k-1\{0}.
Afterward, the k-th biggest singular value s_k defined in (<ref>) can be alternatively represented by
s_k=max_x∈𝒲_1max_y∈𝒲_2⟨ x,y⟩=⟨ x_k,y_k⟩, k∈ℤ_n_uT\{0}
subject to
‖ x‖=‖ y‖=1, ⟨ x,x_i⟩=0, ⟨ y,y_i⟩=0, i∈ℤ_k-1\{0}.
From Definition <ref>, the k-th biggest singular value of the matrix H_1^ TH_2 is exactly the cosine of k-th smallest principal angle between the subspaces 𝒲_1 and 𝒲_2, that is,
s_k=cos(θ_k), k∈ℤ_n_uT\{0}.
Therefore, the similarity indexes between ℬ_1,x_1 and ℬ_2,x_2 (or equivalently, the principal angles between subspaces 𝒲_1 and 𝒲_2) can be efficiently obtained through computing the singular values of the matrices H_1^ TH_2, that is,
SI(ℬ_1,x_1,ℬ_2,x_2)=[ s_1, s_2, ⋯, s_n_uT ].
Additionally, of note is that the vectors l_i and v_i in (<ref>) are essentially the i-th column of the orthogonal matrices U and V, that is,
U =[ l_1, l_2, ⋯, l_n_uT ],
V =[ v_1, v_2, ⋯, v_n_uT ].
From Definition <ref>, the principal vectors associated with the subspaces 𝒲_1 and 𝒲_2 can be obtained from
[ x_1, x_2, ⋯, x_n_uT ] =H_1[ l_1, l_2, ⋯, l_n_uT ]=H_1U,
[ y_1, y_2, ⋯, y_n_uT ] =H_2[ v_1, v_2, ⋯, v_n_uT ]=H_2V.
Consequently, the similarity indexes between two admissible behaviors and the principal angles between their associated subspace components can be efficiently calculated through the SVD of H_1^ TH_2.
After presenting the definition of the similarity indexes and calculating them from sampled data, we can turn our attention back to Problem P3), which is addressed in Section <ref>.
§ SIMILARITY-BASED LEARNING CONTROL FRAMEWORK
In this section, a similarity-based learning control framework is proposed by leveraging the sampled I/O data to address Problem P3). We suppose that, through some powerful control strategies, the guest system Σ_2,𝕋 has already accomplished its tasks and learned the admissible trajectory w_g. The core idea of the similarity-based learning control framework lies in the fact that, when the host system Σ_1,𝕋 is confronted with the unified tasks, the successful experience of the guest system can provide helpful guidance.
Moreover, the benefits of the successful experience of the guest system to the host system can be quantitatively assessed via the similarity indexes introduced in Section <ref>.
Specifically, as we revisit Problem P3), it is evident that the to-be-sought w_h is essentially the orthogonal projection of w_g onto ℬ_1,x_1. Existing learning-based control strategies depend on the model information of Σ_1,𝕋, adjusting the controller parameters through repetitive trial-and-error to ultimately find w_h. With respect to the mechanism of the existing learning-based control strategies, an illustrative example in the 3-dimensional Euclidean space ℝ^3 is depicted in Fig. <ref>. In contrast, the similarity-based learning control framework aims to directly obtain w_h via projection techniques by employing the similarity indexes and the successful experience of the guest system, and the trial-and-error processes are no longer needed. Likewise, an illustrative example is depicted in Fig. <ref>, where w_h can be efficiently calculated by exploiting the similarity indexes cosΦ and w_g, ensuring that w_h-w_g is minimized.
To present the similarity-based learning control framework precisely, the orthogonal projection operator onto the subspace 𝒲_1 is denoted by P_𝒲_1(·):ℝ^n_wT→𝒲_1. Correspondingly, the orthogonal projection operator onto the admissible behavior ℬ_1,x_1 is denoted by P_ℬ_1,x_1(·):ℝ^n_wT→ℬ_1,x_1. That is, for any w_g∈ℝ^n_wT, P_ℬ_1,x_1(w_g) refers to its orthogonal projection onto ℬ_1,x_1 that minimizes the difference ‖ w_g-P_ℬ_1,x_1(w_g)‖. Before presenting the similarity-based learning control framework, the following lemma is introduced as the preliminary.
(<cit.>)
Let the sampled I/O data (U_i^Test,Y_i^Test) satisfy the offline test principles (<ref>) and (<ref>), and let H_i be constructed as in (<ref>). Then for all x∈ℝ^n_wT, the orthogonal projection onto ℬ_1,x_1 can be calculated by
P_ℬ_1,x_1(x)=w_1^0+P_span(H_1)(x-w_1^0).
By leveraging the offline sampled data, Lemma <ref> calculates the orthogonal projection onto the admissible behavior via investigating another orthogonal projection onto the associated subspace component. In comparison to existing learning-based control strategies, the main superiority of the proposed similarity-based learning control framework lies in the fact that the term P_span(H_1)(x-w_1^0) can be efficiently obtained by exploiting the similarity indexes and projection techniques, which is demonstrated as follows.
For admissible behaviors ℬ_1,x_1 and ℬ_2,x_2, let
* The sampled I/O data (U_i^Test,Y_i^Test) satisfy the offline test principles (<ref>) and (<ref>);
* The matrix H_i be constructed as in (<ref>);
* There exist l_1 and l_2 such that (<ref>) holds;
* The SVD of H_1^ TH_2 be given as (<ref>) and (<ref>).
For the learned admissible trajectory given by w_g∈ℬ_2,x_2, the optimal admissible trajectory w_h∈ℬ_1,x_1 is calculated as
w_h=H_1UDg+P_span(H_1)(w_2^0-w_1^0)+w_1^0
where g satisfies
w_g=H_2Vg+w_2^0.
In this situation, the difference ‖ w_h-w_g‖ is minimized, or equivalently, Problem P3) is addressed.
From conditions (<ref>) and (<ref>), by leveraging the sampled data (U_i^Test,Y_i^Test), the admissible behaviors ℬ_1,x_1 and ℬ_2,x_2 can be decomposed as
ℬ_i,x_i=span(H_i)+w_i^0.
As previously emphasized, the to-be-sought w_h in Problem P3) is essentially the orthogonal projection of w_g onto ℬ_1,x_1, i.e., P_ℬ_1,x_1(w_g). From conditions (<ref>), the admissible behaviors ℬ_1,x_1 and ℬ_2,x_2 are similar, which allows for further calculating the similarity indexes. Additionally, by leveraging Theorem <ref>, the condition (<ref>) ensures that the similarity indexes between ℬ_1,x_1 and ℬ_2,x_2 can be obtained via calculating the singular values of H^ T_1H_2 as
SI(ℬ_1,x_1,ℬ_2,x_2)=[ s_1, s_2, ⋯, s_n_uT ].
Another helpful conclusion brought by the conditions (<ref>) is that the principal vectors associated with the subspaces 𝒲_1 and 𝒲_2 can be calculated as H_1U and H_2V. Consequently, the admissible behaviors can be equivalently expressed as
ℬ_1,x_1 =span(H_1U)+w_1^0,
ℬ_2,x_2 =span(H_2V)+w_2^0.
For any admissible trajectory w_g∈ℬ_2,x_2, there always exists some vector g∈ℝ^n_uT such that (<ref>) holds, based on which we can conclude that
w_h =P_ℬ_1,x_1(w_g)
=P_ℬ_1,x_1(H_2Vg+w_2^0).
However, the operator P_ℬ_1,x_1(·) is not linear, which results in difficulties in calculating this orthogonal projection. From Lemma <ref>, it can be concluded that the orthogonal projection onto ℬ_1,x_1 can be equivalently expressed as
P_ℬ_1,x_1(H_2Vg+w_2^0) =w_1^0+P_span(H_1U)(H_2Vg+w_2^0-w_1^0).
Thanks to the linearity of the operator P_span(H_1U)(·), w_h can be further rewritten as
w_h=w_1^0+P_span(H_1U)(H_2V)g+P_span(H_1U)(w_2^0-w_1^0).
By leveraging the principal vectors H_1U and H_2V, the orthogonal projection in (<ref>) can be efficiently computed. This is also the prominent advantage of the similarity-based learning compared to existing learning-based methods. Let the i-th column of H_1U (or H_2V) be denoted as (H_1U)_i (or (H_2V)_i), then P_span(H_1U)(H_2V) can be expressed as
P_span(H_1U)(H_2V)
= [ P_span(H_1U)(H_2V)_1, ⋯, P_span(H_1U)(H_2V)_n_uT ]
where
P_span(H_1U)(H_2V)_i=∑_j=1^n_uT⟨(H_2V)_i,(H_1U)_j⟩(H_1U)_j
holds for all i∈ℤ_n_uT\{0}. Following Definitions <ref> and <ref> and the properties of SVD, it follows that
s_k =⟨(H_2V)_k,(H_1U)_k⟩, ∀ k∈ℤ_n_uT\{0},
0 =⟨(H_2V)_i,(H_1U)_j⟩, ∀ i≠ j, ∀ i,j∈ℤ_n_uT\{0}.
By leveraging (<ref>), the orthogonal projection P_span(H_1U)(H_2V) can be further expressed as
P_span(H_1U)(H_2V)
= [ P_span(H_1U)(H_2V)_1, ⋯, P_span(H_1U)(H_2V)_n_uT ]
= [ (H_1U)_1s_1, ⋯, (H_1U)_n_uTs_n_uT ]
= H_1UD.
Therefore, the to-be-sought admissible trajectory w_h∈ℬ_1,x_1 can be further expressed as
w_h=w_1^0+H_1UDg+P_span(H_1)(w_2^0-w_1^0).
Since w_h is essentially the orthogonal projection of w_g onto ℬ_1,x_1, the difference ‖ w_h-w_g‖ must be minimal.
The similarity-based learning control framework proposed in Theorem <ref> provides an innovative perspective on learning-based control. When seeking the optimal trajectory w_h∈ℬ_1,x_1, we no longer need to repeatedly execute certain learning-based control strategies for Σ_1,𝕋. Alternatively, we can directly obtain the optimal trajectory w_h by leveraging the successful experience of guest systems. From Theorem <ref>, it is not difficult to observe that the closer cosΘ(𝒲_1,𝒲_2) is to 1^ T_n_uT, the smaller the difference ‖ w_h-w_g ‖. Additionally, the similarity indexes between ℬ_1,x_1 and ℬ_2,x_2 can be compensated by interconnecting the host system with another auxiliary system, such that the learning performance can be further enhanced. Based on this consideration, similarity-based learning control can be utilized in high-similarity scenarios. For example, when there exist small uncertainties in the model parameters, the control of the real system can greatly benefit from the successful experience of the nominal model, without having to re-identify the parameters and repeat the learning process.
Theorem <ref> minimizes the difference between the learned admissible trajectories of the host and guest systems, denoted as w_h-w_g. It is worth emphasizing that in many applications, we only require certain elements of the admissible trajectories of the host and guest systems to be as close as possible. For example, in output tracking tasks, we expect the difference between the admissible outputs, denoted as 𝐲_h-𝐲_g, to be as small as possible. The similarity-based learning framework proposed in Theorem <ref> can readily address such problems because all admissible outputs likewise span an affine set, as discussed in Remark <ref>. Therefore, by selecting appropriate affine sets, the similarity-based learning control framework can be applied to a wide range of application scenarios.
Just like humans need to absorb a wide range of learning experiences from others, an increase in the number of successful experiences will improve the control performance of the similarity-based learning. This is because, as the number of guest systems increases, there always exists a guest system whose admissible behavior shares more similarity with that of the host system. By adopting the successful experience of the “most similar” guest system, the similarity-based learning control framework can eventually achieve better learning control performance.
Building upon the previously proposed results, the process of similarity-based learning control framework can be summarized in Algorithm <ref>.
§ SIMULATION EXAMPLES
For the illustration of the proposed similarity-based learning control framework, simulation examples are presented in this section. We provide both a numerical example and simulation tests on mobile robots.
Consider two heterogeneous discrete-time linear systems in the form of (<ref>), whose model matrices are given as follows:
A_1(t) =[ 0.05t 1 0; 0 0.05t 1; -0.09 -0.60 -1.40+0.05t ],
A_2(t) =[ 0.05t 1 0; 0 0.05t 1; -0.08 -0.66 -1.5+0.05t ],
B_1(t) =B_2(t)=[ 6; 0; 0.50 ], C_1(t)=C_2(t)=[ 2; 1; 0 ]^ T,
D_1(t) =D_2(t)=0, x_1(0)=[ 0; 0; 1.02 ], x_2(0)=[ 0; 0; 1 ].
Here, we present the model knowledge solely for clear illustration of the simulation settings, and it will not be utilized for the design and analysis. Of note is that this type of difference often arises in scenarios where there exist uncertainties between the host system Σ_1,𝕋 and guest system Σ_2,𝕋. Let the host and guest systems be given a consistent output tracking task over the time duration ℤ_34, with the reference output set as
y_d(t)=e^-0.1tsin(π/5t), ∀ t∈ℤ_34.
Owing to the absence of model knowledge, offline I/O tests are needed to collect a sufficient number of admissible trajectories. The offline I/O tests need to be executed at least 36 times, and the test inputs are designed as
𝐮_i^0=0_35, [ 𝐮_i^1, 𝐮_i^2, ⋯,𝐮_i^35 ]=I_35.
With the designed test inputs, the proposed offline test principles (<ref>) and (<ref>) are satisfied, then the data-based representation (<ref>) can be constructed, and the admissible behaviors ℬ_1,x_1 and ℬ_2,x_2 can be decomposed by leveraging (<ref>).
To address the output tracking problem of the guest system Σ_2,𝕋, iterative learning control (ILC), a learning-based strategy, can serve as a powerful tool. After 300 iterations of the algorithm, the tracking problem of the guest system Σ_2,𝕋 is perfectly addressed. The output and input of the guest system, denoted as y_g(t)-ILC and u_g(t)-ILC, respectively, are depicted in Fig. <ref>.
With the obtained admissible trajectory w_g∈ℬ_2,x_2, the tracking problem of the host system no longer needs to resort to ILC, which depends on repetitive learning and trial-and-error. By exploiting the proposed similarity-based learning control framework, the required w_h∈ℬ_1,x_1 can be obtained through Theorem <ref>. The learned output and input of the host system, denoted by y_h(t)-SBL and u_h(t)-SBL, respectively, are depicted in Fig. <ref>. From Fig. <ref>, it can be observed that the similarity-based learning control framework achieves satisfactory learning performance, and the tracking problem of the host system can be directly addressed.
As a comparison, another guest system, denoted by Σ_3,𝕋, that is less similar to the host system Σ_1,𝕋 is also provided. The model knowledge of Σ_3,𝕋 is presented as follows:
A_3(t) =[ 0.05t 1 0; 0 0.05t 1; -0.20 -0.20 -1.3+0.05t ],
B_3(t) =[ 6; 0; 0.50 ], C_3(t)=[ 2; 1; 0 ]^ T, D_3(t)=0.
The initial state of Σ_3,𝕋 is set as x_3(0)=[ 0.2, 0, 1 ]^ T, and the tracking task for the reference output y_d(t) is considered again. By applying ILC to Σ_3,𝕋, the tracking problem of Σ_3,𝕋 can be addressed. The output and input of Σ_3,𝕋, denoted as y'_g(t)-ILC and u'_g(t)-ILC, respectively, are depicted in Fig. <ref>. After obtaining the admissible trajectory w_3,x_3∈ℬ_3,x_3, the tracking problem of the host system can be directly addressed through the similarity-based learning control framework. The learned output and input of the system Σ_1,𝕋, denoted as y'_h(t)-SBL and u'_h(t)-SBL, respectively, are depicted in Fig. <ref>. Since the guest system Σ_3,𝕋 is less similar to the host system Σ_1,𝕋, the performance brought by the similarity-based learning control is degraded.
Consider a class of mobile robots equipped with two independent driving wheels (<cit.>), whose physical models are illustrated in Fig. <ref>.
The symbols v, ϕ, u_r, and u_l represent the velocity, azimuth, right driving input, and left driving input of the mobile robot, respectively. Let the state, input, and output of the robots be defined as x=[ v, ϕ, ϕ̇ ]^ T,
u=[ u_r, u_l ]^ T,
and y=[ v, ϕ ]^ T, respectively, and let the sampling time be T_s=0.05s. Through the discretization and linearization techniques, the dynamics of the mobile robots can be described by the state space representation as
R_i: x_i(n+1) = A_i x_i(n) + B_i u_i(n),
y_i(n) = C_i x_i(n).
The symbol n∈ℤ_+ refers to the sampling instants, so the time interval between two adjacent sampling points is T_s.
For the host robot R_1 and guest robot R_2 whose dynamics are represented by (<ref>), their model parameters are given as
A_1 =[ 1.0100 0 0; 0 1 0.0520; 0 0 1.0100 ], A_2=[ 0.9975 0 0; 0 1 0.0499; 0 0 0.9955 ],
B_1 =[ 0.0130 0.0130; -0.0025 -0.0050; -0.0850 -0.1700 ], B_2=[ 0.0125 0.0125; -0.0021 -0.0042; -0.0833 -0.1666 ],
C_1 =C_2=[ 1 0; 0 1; 0 0 ], x_1(0)=[ 3; 0; 0 ], x_2(0)=[ 3.02; 0; 1 ].
We provide the model parameters solely to illustrate the simulation settings clearly. In the scenarios where the model information is not available, we can still obtain the data-based representation for the admissible behavior by designing appropriate offline test principles, as discussed in Lemma <ref>.
Two mobile robots are assigned the same task, which is to move along a preplanned circular path within the time duration 𝕋=[0,4]s. The circular path is specified by the velocity and azimuth references of the mobile robots. Specifically, within the time duration 𝕋=[0,4]s, the reference trajectories for velocity and azimuth are defined as
y_d,v(n) =3(m/s), ∀ n∈ℤ_79,
y_d,ϕ(n) = 0 (rad) for n∈ℤ_10, and
y_d,ϕ(n) = -0.6875(n-11) (rad) for n∈ℤ_79\ℤ_10.
Therefore, the equivalent objective is to track the reference y_d=[ y_d,v(n), y_d,ϕ(n) ]^ T over the specific time duration 𝕋. For the guest mobile robot, ILC can efficiently address the tracking problems. After 200 iterations, the tracking performances brought by ILC are shown in Fig. <ref>, where the learned velocity and azimuth are denoted as v_g(n)-ILC and ϕ_g(n)-ILC, respectively.
As a result, the guest mobile robot gradually reaches the predefined circular trajectory. The learning process of the guest mobile robot is shown in Fig. <ref>.
After 200 iterations of the algorithm, it can be observed that the mobile robot is able to proceed along the predefined circular trajectory.
The host mobile robot no longer relies on repetitive trial-and-error; instead, it directly utilizes the successful experience of the guest system to complete the control task. By leveraging the proposed similarity-based learning control framework, the learned velocity and azimuth of the host mobile robot, denoted by v_h(n)-SBL and ϕ_h(n)-SBL, respectively, are depicted in Fig. <ref>. Consequently, the learned path of the host mobile robot is shown in Fig. <ref>. From Fig. <ref>, it can be concluded that the host mobile robot moves along the preplanned circular path by leveraging the successful experience of the guest mobile robot, which verifies the effectiveness of the proposed similarity-based learning control framework.
§ CONCLUSIONS
In this paper, we have proposed the definitions of similarity and similarity indexes between admissible behaviors, based on which a similarity-based learning control framework has been developed. Owing to the absence of model knowledge, appropriate offline I/O test principles have been designed, from which the admissible behaviors of LTV systems have been reconstructed from sampled data. Exploiting the sampled data, a data-based criterion for verifying similarity and a data-based strategy for calculating the similarity indexes have been derived. Building upon the calculated similarity indexes and projection techniques, the resulting framework enables the host system to accomplish the same tasks by leveraging the successful experience of the guest system, without repeatedly resorting to any existing learning-based control strategy.
plainnat
|
http://arxiv.org/abs/2409.02730v1 | 20240904140308 | Complete and Efficient Covariants for 3D Point Configurations with Application to Learning Molecular Quantum Properties | [
"Hartmut Maennel",
"Oliver T. Unke",
"Klaus-Robert Müller"
] | cs.LG | [
"cs.LG",
"physics.chem-ph"
] |
Complete and Efficient Covariants for 3D Point Configurations with Application to Learning Molecular Quantum Properties
Hartmut Maennel, Oliver T. Unke, Klaus-Robert Müller
September 9, 2024
===============================================================================================================
§ ABSTRACT
When modeling physical properties of molecules with machine learning, it is desirable to
incorporate SO(3)-covariance. While such models based on low body order features are not complete, we formulate and prove general completeness properties for higher order methods, and show that 6k-5 of these features are enough for up to k atoms. We also find that the Clebsch–Gordan operations commonly used in these methods can be replaced by matrix multiplications without sacrificing completeness, lowering the scaling from O(l^6) to O(l^3) in the degree of the features. We apply this to quantum chemistry, but the proposed methods are generally applicable for problems involving 3D point configurations.
§ INTRODUCTION
Atomistic simulations have proven indispensable for advancing chemistry and materials science, providing insights into the behavior of matter at the atomic level. In the past, these simulations have been computationally demanding, but the advent of Density Functional Theory (DFT) <cit.> significantly enhanced the accessibility of atomistic simulations, and recent breakthroughs in machine learning (ML) have further accelerated progress <cit.>. ML methods trained on ab initio data now enable the fast and accurate prediction of quantum properties orders of magnitude faster than traditional calculations <cit.>.
A cornerstone of these methods, whether utilizing kernel-based approaches <cit.> or deep learning <cit.>, lies in the effective representation of molecules <cit.> or materials <cit.> through carefully chosen features or descriptors. Early examples include the Coulomb Matrix representation <cit.> and SOAP <cit.>, while recent advancements extend this principle beyond rotationally invariant representations with the design of equivariant model architectures <cit.>.
However, <cit.> pointed out that commonly available descriptors are not able to uniquely identify some molecular structures <cit.>. This can lead to ambiguities (two distinct structures may be mapped to the same descriptor) that hamper the performance of ML models. Effectively, a lack of uniqueness is similar to introducing a high level of noise into the learning process and may hinder generalization.
A second important shortcoming of some modern ML architectures was discussed by <cit.> and only becomes visible when running molecular dynamics (MD) simulations <cit.>. It was observed that ML models with excellent prediction accuracy for energies and forces can nevertheless show unphysical instabilities (e.g. spurious bond dissociation) when simulating longer MD trajectories — limiting their usefulness in practice. Equivariant architectures, however, as broad anecdotal evidence and some theoretical analyses have shown <cit.>, were found to enable stable MD simulations over long timescales <cit.>.
Both aspects lead to the interesting theoretical question of how to construct a provably unique invariant, or more generally, a “complete” (to be defined below) equivariant and computationally efficient representation of descriptors for atomistic simulations.
We will study this challenge both by theoretical means and by performing empirical atomistic simulations.
Let us assume that the origin of our coordinate system was fixed meaningfully and we are looking for unique
descriptors of point sets that are equivariant under rotations in SO(3).
To get invariant features, we can use a rotationally invariant function of n points (e.g. distance from the origin for n=1, or angles between two points for n=2), and then sum over all n–tuples of points in the configuration.
Such descriptors are called “(n+1)–body functions”. It was recently shown that
descriptors based on 2- and 3-body information (distances and angles) are unable to distinguish some non-equivalent environments <cit.>. Even 4-body information (dihedrals) is not sufficient in all cases
(see Fig. <ref>B) and it is necessary to include
higher m-body information for some structures. Other methods that construct descriptors implicitly, e.g. by message-passing <cit.>, suffer from similar problems <cit.>.
§ RESULTS AND DISCUSSION
Let us start defining an appropriate mathematical language. In applications to
chemistry, the points in the point set can belong to different atom types/elements which have to be treated
differently. We assume there is a fixed finite set Γ of “colors” (the atom types/elements), and each point in the point set S is assigned a color in Γ, i.e. S = ⋃_γ∈Γ S_γ.
We propose to take as potential features all polynomial point set descriptors (PPSDs), i.e. all scalar expressions that can be written down for colored point sets, using the coordinates of points, constants from ℝ, addition, multiplication, and summations over all points of a given color, such that these expressions can be evaluated for any point set independent of the number of points (see Appendix for formal definitions).
In practice, a variant of PPSDs is more useful, using polynomials only for the angular part (i.e. as a function on the sphere ^2) and some other function space for the radial part. With the assumptions that
these radial functions are analytic and allow approximation of continuous functions in the radius, we can
(with some extra effort) prove almost the same theorems, see Appendix for the definitions, and later sections for details and proofs.
General theorems:
We now describe informally a series of mathematical theorems about PPSDs that we prove in this work,
see respective Appendices
for the precise formulations and proofs.
We first observe (see Appendix ) that the computation of any scalar PPSD
can be arranged into two steps:
* Evaluate expressions involving only one summation sign: ∑_ r∈ S_γ P( r) for some color γ and polynomial P: ℝ^3 →ℝ acting on point coordinates r.
We call them fundamental features.
* Evaluate polynomials in fundamental features.
This separability into two steps allows any PPSD to be evaluated in time O(n) where n is the number of points (here atoms), which is a major advantage over e.g. descriptors based on rational functions, for which this is generally not possible.
We call a PPSD that can be written such that all polynomials in fundamental features have degree d “homogeneous
of order d” [Note that this is only the degree of the polynomial in fundamental features (step 2), it does not take into account the degrees of the polynomials used to construct the fundamental
features themselves. When we multiply out and move all summations
to the left (see Appendix ) this order corresponds to the depth of the summations, since each fundamental feature comes with one summation sign].
The order of such a PPSD is unique, for a proof and a refinement of this notion see Appendix
. PPSDs of order d are also said to be of “body order d+1” (this
convention includes one atom at the origin of the coordinate system in the count).
In this language, there are infinitely many independent SO(3)–invariant PPSDs of body order 3, but the examples in <cit.> show that there are inequivalent configurations that cannot be distinguished by invariant functions of body orders ≤ 4 (see Fig. <ref>B).
Our Topological Completeness Theorem (Theorem in Appendix ) says that this problem vanishes
when we allow arbitrary body orders, even when we restrict the functions to be polynomial invariants: Any two SO(3)–inequivalent configurations can be distinguished by SO(3)–invariant PPSDs, i.e. taking the values of all polynomial SO(3)–invariant functions gives a unique descriptor.
In general for covariant functions the values of PPSDs change when we rotate a configuration, so this uniqueness property has to be expressed differently: We prove that there are enough SO(3)–covariant PPSDs to approximate any continuous SO(3)–covariant function of colored point sets.
Without bound on the number of points in the configurations it is of course necessary to use infinitely many
independent invariant functions to distinguish all SO(3)–inequivalent configurations, as these form an infinite dimensional space. However, we can ask how many features are necessary to uniquely identify configurations of up to k points. Our Finiteness Theorem (Theorem in Appendix ) gives a linear upper bound of 6k-5, with some guarantees for the distance of non–equivalent configurations. Its proof is based on real algebraic geometry and subanalytic geometry.
Practical construction:
We will now show how to produce unique features in such a way that we
never leave the space of covariant features:
Let l be the irreducible (2l+1)–dimensional (real) representation
of SO(3), and Y_l: ℝ^3 →l be an SO(3)–covariant polynomial (which is unique on the sphere 𝕊^2 up to a constant scalar factor, see Appendix ).
These Y_l are given by (real valued) spherical harmonics of degree l.
We now proceed again in two stages:
* Evaluate spherical harmonics [To be precise, we multiply the spherical harmonics with radial basis functions. We use the exponent 2k here to have only polynomial functions. In practice, we would rather use different, decaying functions; this is treated as “case ii” among the variations, see Appendix .]:
∑_ r∈ S_γ | r|^2k Y_l( r)
for all colors γ and k=0,1,2,... and l=0,1,2,...
These are covariant fundamental features (i.e. of order 1) with values in l.
* Iterate for d=1,2,...: Compute Clebsch–Gordan operations [These project the tensor product of two representations to an irreducible component, see e.g. <cit.>.] l_1 ⊗l_2 →l_3 for |l_1-l_2| ≤ l_3 ≤ l_1+l_2, where the feature in l_1 is a fundamental feature, and the feature in l_2 is of order d. This gives covariant features of order d+1 (see the code sketch after this list).
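To make the two-stage construction concrete, here is a minimal Python sketch. It works in the complex spherical-harmonics basis (the paper uses real ones; a fixed unitary change of basis relates the two) and obtains Clebsch–Gordan coefficients from sympy, so it is an illustration rather than an efficient implementation:

```python
import numpy as np
from scipy.special import sph_harm
from sympy.physics.quantum.cg import CG

def fundamental_feature(points, l, k=0):
    """Stage 1: sum_{r in S_gamma} |r|^{2k} Y_l^m(r), m = -l..l (complex basis)."""
    F = np.zeros(2 * l + 1, dtype=complex)
    for r in points:
        rad = np.linalg.norm(r)
        az, pol = np.arctan2(r[1], r[0]), np.arccos(r[2] / rad)
        F += rad ** (2 * k) * np.array(
            [sph_harm(m, l, az, pol) for m in range(-l, l + 1)])
    return F

def cg_product(F1, l1, F2, l2, l3):
    """Stage 2: project F1 (x) F2 onto the irrep l3."""
    out = np.zeros(2 * l3 + 1, dtype=complex)
    for m1 in range(-l1, l1 + 1):
        for m2 in range(-l2, l2 + 1):
            if abs(m1 + m2) <= l3:
                c = float(CG(l1, m1, l2, m2, l3, m1 + m2).doit())
                out[m1 + m2 + l3] += c * F1[m1 + l1] * F2[m2 + l2]
    return out

pts = np.random.randn(5, 3)                      # one color's point set
F1, F2 = fundamental_feature(pts, 1), fundamental_feature(pts, 2)
T2 = cg_product(F1, 1, F2, 2, 1)                 # order-2 covariant in irrep l=1
s = cg_product(F1, 1, F1, 1, 0)                  # order-2 invariant (l=0)
```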
Clearly, this construction appears to be somewhat special, so we may ask whether it actually gives “enough” invariants (i.e. achieves completeness).
This is in fact true in a very strong sense: Our Algebraic Completeness Theorem (Theorem ) says that all invariant / covariant
PPSDs can be obtained as a linear combination of them; in a sense this is just the isotypical decomposition of
the space of all PPSDs (see Appendix ).
While the above strategy to construct invariant / covariant functions
has been used e.g. in <cit.>, our novel completeness theorems
show that this avoids the potential incompleteness problem pointed out in
<cit.>. In fact, by our algebraic completeness theorem we get all polynomial covariant functions,
and by the topological completeness theorem those are sufficient to approximate any continuous covariant function. We also get an algebraic completeness theorem for features constructed from
tensor products and contractions as in <cit.>, see
Theorem ; this is based on classical invariant theory.
Computational bottleneck:
We now turn to a particular efficient variant of our construction.
Since invariant PPSDs of order <4 are
not sufficient for distinguishing all SO(3)–equivalence classes, we need to construct covariants of higher body orders, i.e. in the above procedure we need to use the Clebsch–Gordan
products. Note that their computational cost
is independent of the number of points, and is linear in the number of products, but
scales as O(l^6) when we take the tensor product of two representations of the form
0⊕...⊕l. But with unrestricted number of points in our configuration, we cannot bound the l, even if we are just considering
configurations on 𝕊^2 ⊂ ℝ^3: Using Y_l: 𝕊^2 →l only for l=0,1,...,L-1 yields a |Γ|· L^2–dimensional vector space of fundamental features ∑_ r∈ S_γ Y_l( r)
(and all PPSDs are polynomials in the fundamental features). So this could only describe a configuration space of
a dimension ≤ |Γ|· L^2, not the ∞–dimensional space of configurations on 𝕊^2
with an unbounded number of points.
Consequently, the bottleneck for a larger number of points (necessitating using larger l for constructing the fundamental features) can be determined as the O(l^6) Clebsch–Gordan operation.
We will now propose how to construct a subset of local descriptors that, as an alternative to Clebsch–Gordan operations, relies only on matrix-matrix multiplication. This procedure scales as only O(l^3) for
bilinear operations on two representations of the form
0⊕...⊕l.
A similar speedup was published recently in <cit.>, replacing the Clebsch–Gordan operation
by the multiplication of functions. However, since multiplying functions (instead of matrices) is commutative, this does not reproduce the anti–commutative part of the Clebsch–Gordan operations. Therefore
the construction in <cit.> does not have the full expressivity desired and would not satisfy our Algebraic Completeness Theorem or Theorem <ref> below. In particular, since commutative products cannot produce
pseudo–tensors, its invariants could not distinguish configurations from their mirror images.
Matrix Construction:
Our key idea for removing the computational bottleneck is to apply the Clebsch–Gordan relation
a ⊗b ≃ |a-b| ⊕ (|a-b|+1) ⊕ ... ⊕ (a+b)
“backwards” to efficiently encode a collection of features in
|a-b| ⊕ ... ⊕ (a+b) as a (2b+1)×(2a+1) matrix in
Lin(a,b)
≃a^* ⊗b≃a⊗b
(see Appendix ), and then matrix multiplication is a covariant map of representations
Lin(a,b) × Lin(b,c) → Lin(a,c).
With Schur's Lemma one can show that it can be expressed as a linear combination of Clebsch–Gordan operations, so unless some coefficients are zero, we can expect this operation to be as useful as the Clebsch–Gordan operations for constructing covariant features
of higher body order. This is indeed the case and to formulate the
corresponding theorem, we define the involved features:
Let ι_a,b,l: l → Mat_(2b+1)×(2a+1) be the embedding given by (<ref>)
and define the “matrix moments”
M_a,b,l(γ) := ι_a,b,l( ∑_ r ∈ S_γ Y_l( r) ),
which are (2b+1)×(2a+1) matrices
(see Appendix for some examples of explicit formulas).
Then the result of the multiplication
M_a_m-1, a_m, l_m(γ_m) · ... · M_a_1, a_2, l_2(γ_2) · M_0, a_1, l_1(γ_1)
with l_1=a_1 and
|a_1-a_2| ≤ l_2 ≤ a_1+a_2, ..., |a_m-1-a_m| ≤ l_m ≤ a_m-1+a_m
are covariant (2a_m+1)× 1 matrices, i.e. vectors in a_m, given by polynomials of degree l_1+...+l_m, and computing them takes O(m· a^3) steps for an upper bound a ≥ a_i.
Any SO(3)–covariant feature with values in an irrep l can be written as a linear combination
of the SO(3)–covariants (<ref>) with a_m=l.
For O(3)–covariants it is enough to use those features given by (<ref>)
with the appropriate parity of l_1+...+l_m.
For the proof see Appendix .
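A direct (unoptimized) sketch of the matrix moments and the chain product, again in the complex basis with sympy Clebsch–Gordan coefficients and reusing fundamental_feature from the earlier sketch; the Wigner–Eckart-style embedding below is one concrete choice of the map ι:

```python
from sympy.physics.quantum.cg import CG

def embed(v, l, a, b):
    """iota_{a,b,l}: place v in irrep l into Lin(a,b) as a (2b+1) x (2a+1)
    matrix, M[mb+b, ma+a] = sum_m v[m+l] <a ma; l m | b mb>."""
    M = np.zeros((2 * b + 1, 2 * a + 1), dtype=complex)
    for ma in range(-a, a + 1):
        for m in range(-l, l + 1):
            if abs(ma + m) <= b:
                M[ma + m + b, ma + a] += (
                    float(CG(a, ma, l, m, b, ma + m).doit()) * v[m + l])
    return M

def matrix_moment(points, a, b, l):
    """M_{a,b,l}(gamma) for the point set of one color gamma."""
    return embed(fundamental_feature(points, l), l, a, b)

# M_{a1,a2,l2} . M_{0,a1,l1}: a covariant vector in irrep a2 = 2, computed by
# an O(a^3) matrix product instead of an O(a^6) Clebsch-Gordan operation.
pts = np.random.randn(8, 3)
v = matrix_moment(pts, 1, 2, 2) @ matrix_moment(pts, 0, 1, 1)
```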
Learning a linear combination:
While computing one given invariant of the form (<ref>) would not be more efficient than
with Clebsch–Gordan operations (as it would waste whole matrices for encoding only one feature), for applications in
Machine Learning we always compute with linear combinations of features (with learnable coefficients), and both the
Clebsch–Gordan operation and Matrix Multiplication define maps
(0⊕...⊕l) ⊗ (0⊕...⊕l) → 0⊕...⊕l
which are used to build up different linear combinations of covariants of higher body
order. In the Clebsch–Gordan case we also can add to the learnable coefficients of the input features
further learnable parameters that give different weights to the individual parts
l_1 ⊗l_2 →l
that contribute to the same l in the output, whereas in the Matrix Multiplication case these
mixture coefficients are fixed (but depend on the shape of the matrices involved).
However, Theorem <ref> shows that using different shapes of matrices is already sufficient to generate all possible covariants, so both methods can
in principle learn the same functions.
Matrix of Matrices construction for efficiency: For practical applications it is important how to organize the matrix multiplications
efficiently. In particular when using GPUs / TPUs with hardware support
for matrix multiplication, it is much more favorable to compute with a few large matrices
than with many small matrices.
Therefore we will use linear combinations of ι_a,b,l for l=|a-b|,...,a+b
to fill a (2b+1) × (2a+1) matrix, and pack r × r small matrices for a,b in
{l_1, l_2,...,l_r} into a large square matrix of side length (2l_1 +1) + ... + (2l_r+1).
Then k-1 such matrices are multiplied to get a matrix built out of covariants
of body order k.
This matrix can then be applied to n_1 column vectors
from l_1⊕l_2⊕ ... ⊕l_r
to get covariant vectors of body order k+1.
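The packing itself is plain block-matrix bookkeeping; a minimal numpy sketch (function name ours) could look as follows:

```python
def pack(blocks, ls):
    """Pack an r x r grid of (2l_i+1) x (2l_j+1) covariant blocks into one
    square matrix of side (2l_1+1) + ... + (2l_r+1), multiplication-ready."""
    off = np.cumsum([0] + [2 * l + 1 for l in ls])
    big = np.zeros((off[-1], off[-1]), dtype=complex)
    for i in range(len(ls)):
        for j in range(len(ls)):
            big[off[i]:off[i + 1], off[j]:off[j + 1]] = blocks[i][j]
    return big

# k-1 packed matrices multiplied in a chain give one large matmul per step,
# far more hardware-friendly than many small Clebsch-Gordan kernels.
```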
If the end result should be scalars, we can take scalar
products of the n_1 column vectors in l with n_2 new
covariants in l to obtain
r· n_1· n_2 invariants of body order k+2;
also the traces of the
square submatrices of the matrix product give invariants of body order k (marked in
color in the above example diagram).
Full architectures:
The proposed matrix products approach can be readily used to replace Clebsch–Gordan operations across all possible learning architectures giving rise to significant efficiency gains.
As a proof of concept, in the following experiments we will focus on the simplest such architecture which only
computes a linear combination of many such invariants, see Appendix for code and more details (e.g. in
practice we may want to shift the matrices by the identity to obtain a similar effect to skip connections in ResNets.)
Extensions of this minimal architecture could use a deep neural network instead of a linear combination
of invariants, or can use nonlinear activation functions to modify the matrices obtained in intermediate
steps. In architectures using several layers of Clebsch–Gordan operations, such activation
functions are restricted to functions of the scalar channel, since “you cannot apply a
transcendental function to a vector”. Maybe surprisingly, in our matrix formulation this actually
becomes possible: Applying any analytic function to our (2l+1)×(2l+1)
matrices (not element wise, but e.g. implemented as Taylor series for matrices) is also a covariant operation! Notably, Matrix exponentiation has been suggested as an efficient
and useful operation in Neural Networks in <cit.>.
§ EXPERIMENTAL RESULTS
Our methods yield complete representations and can thus indeed distinguish (molecular) configurations that require higher order features (see <cit.>). This is demonstrated experimentally in Fig. <ref>B/D.
In Fig. <ref>C we used the library E3x (<cit.>), which allows switching between full tensor layers using the
Clebsch–Gordan operation and “Fused Tensor Layers” for which we implemented matrix multiplication instead of the Clebsch–Gordan operations. The plot shows the inference
run time measured on CPUs for computing a function defined by two Tensor layers, depending
on the setting of “max degree” and whether full or fused layers were used.
In another synthetic experiment, we learn an invariant
polynomial of degree 10 with
Clebsch–Gordan operations and with our matrix multiplication framework, and plot the training
curves averaged over 10 data sets.
On CPUs, almost all the time is spent in the Clebsch–Gordan operations, and replacing them
by the matrix multiplication method makes the training faster by a factor over 100.
When using GPUs, the speedup is not quite as dramatic, but still a factor of 8.4 on V100. (Details in Appendix )
As a first demonstration of our framework for atomistic simulations, we show that the simple architecture that linearly combines the resulting polynomial invariants can learn forces from local features alone and, interestingly, match the accuracy of more complex methods that use several message-passing / self-attention steps with nonlinear networks (So3krates, <cit.>) or global kernel methods (sGDML, <cit.>). Specifically, our experiments (see Table <ref>) show that the accuracies align, notably independent of molecule size; see Appendix for details.
Since our model is just a linear combination of features of known body order and L,
in future studies, one could use such models to investigate body order expansions or study the influence of larger Ls in detail.
Appropriately representing chemical structure and atomic environments in molecules and materials is an important prerequisite for accurate machine learning models in chemistry.
Ideal descriptors are unique, computationally efficient, and covariant.
In this work we have established an algebraic framework that enables the practical construction of provably complete systems of features with these desired properties, valid for any 3D point configuration. Apart from the abstract theoretical contribution, we show that our construction can be readily implemented with matrix-matrix multiplications – reducing the computational complexity from O(l^6) to O(l^3) compared to Clebsch-Gordan operations. This yields large efficiency gains while maintaining the performance level of standard machine learning models for atomistic simulation.
In summary, our theoretically well founded unique, covariant, and efficient
descriptors provide a versatile basis for future atomistic modeling and potentially other applications of machine learning on point configurations.
§ ACKNOWLEDGEMENT
The authors acknowledge valuable discussions with Romuald Elie and Zhengdao Chen.
Correspondence to HM ([email protected]) and KRM ([email protected]).
|
http://arxiv.org/abs/2409.03436v1 | 20240905113703 | Fundamentals of Energy-Efficient Wireless Links: Optimal Ratios and Scaling Behaviors | [
"Anders Enqvist",
"Özlem Tuğfe Demir",
"Cicek Cavdar",
"Emil Björnson"
] | cs.IT | [
"cs.IT",
"math.IT"
] |
Fundamentals of Energy-Efficient Wireless Links: Optimal Ratios and Scaling Behaviors
This work was supported by the FFL18-0277 grant from the Swedish Foundation for Strategic Research.
Anders Enqvist^*, Özlem Tuğfe Demir^†, Cicek Cavdar^*, Emil Björnson^*
^*Department of Computer Science, KTH Royal Institute of Technology, Kista, Sweden
^†Department of Electrical-Electronics Engineering, TOBB University of Economics and Technology, Ankara, Türkiye
Email: [email protected], [email protected], [email protected], [email protected]
September 9, 2024
=================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
In this paper, we examine the energy efficiency (EE) of a base station (BS) with multiple antennas. We use a state-of-the-art power consumption model, taking into account the passive and active parts of the transceiver circuitry, including the effects of radiated power, signal processing, and passive consumption. The paper treats the transmit power, bandwidth, and number of antennas as the optimization variables. We provide novel closed-form solutions for the optimal ratios of power per unit bandwidth and power per transmit antenna. We present a novel algorithm that jointly optimizes these variables to achieve maximum EE, while fulfilling constraints on the variable ranges. We also discover a new relationship between the radiated power and the passive transceiver power consumption. We provide analytical insight into whether using maximum power or bandwidth is optimal and how many antennas a BS should utilize.
Energy efficiency, optimization, 6G, multiple antenna communications.
§ INTRODUCTION
The exponential growth in data rates within wireless communication systems, as dictated by Cooper's law, has substantially raised energy consumption. Anticipating continued exponential growth in traffic demands <cit.>, it is imperative to enhance the energy efficiency (EE) of wireless communication technologies. It is often defined as the data rate divided by the related power consumption. By unraveling the fundamental behaviors and limitations of EE in wireless communication systems, we can uncover innovative approaches and technologies that promise more sustainable and efficient wireless networks in the future. While theoretical tools for EE optimization have been developed for decades <cit.>, the willingness to make improvements in practice has garnered heightened attention in recent times, as evidenced by ITU targets <cit.>. Much of the technology development has focused on increasing data rates through the introduction of massive MIMO (multiple-input multiple-output) <cit.> and increasing bandwidth in mmWave and terahertz bands <cit.>.
Additionally, emerging technologies such as Reconfigurable Intelligent Surfaces (RIS) <cit.> promise reductions in energy consumption in future networks. A recent survey on how these and other techniques can lead to power consumption reductions can be found in <cit.>.
§.§ Prior Work and Motivations
EE optimization was pioneered in <cit.>, which emphasized a strong tradeoff between EE and rates.
The paper <cit.> is an early work on how much power is needed to run a wireless communication system, which is much more than just transmit power. The papers <cit.> present a network model that considers the optimization of the area power consumption (PC) and area EE under rate constraints.
A PC model with parameters that capture the fundamental behaviors of base stations (BSs) was presented in <cit.>. It has since been extended to include PC due to carrier aggregation in <cit.>.
In <cit.>, machine learning (ML) was employed for curve fitting real-world data to adapt a PC model capturing essential aspects. These works showcase the power of ML in wireless communication modeling and demonstrate that an analytical linear model, with appropriately chosen constants, can accurately represent the power consumption in current BSs.
While prior works have examined different EE optimization problems in various complex wireless networks, this paper returns to the fundamentals: we consider a single communication link between a multi-antenna BS and a single-antenna user equipment (UE).
When one truly seeks the EE-optimal communication system design, the PC model has a huge impact on the solution. If one only accounts for the transmit power, the optimum is achieved as the rate approaches zero <cit.>. By contrast, <cit.> studied the upper EE limit with a more detailed PC model and demonstrated that very different operating points might be approached in the distant future.
In our new analysis, we uncover an intriguing new relationship between the transmit power and the fixed transceiver power at the EE-optimal solution, which contributes to the fundamental understanding of how to build energy-efficient wireless systems. We discover new analytical relations between the transmit power, bandwidth, and number of antennas, and how these are related to the hardware characteristics. We also develop an algorithm for jointly optimizing these parameters.
§.§ Contributions
In this paper we aim to answer the research questions:
* How should the transmit power, bandwidth, and number of antennas be jointly configured to maximize EE?
* Are there any tangible relationships between these parameters at the optimal solution?
One significant departure from previous papers is our emphasis on analytical scaling behaviors. While prior research primarily focused on optimizing individual parameters, our work extends these models to explore the global optimum.
§ SYSTEM MODEL
We analyze and optimize the energy efficiency of the link between a BS using M antennas and a single-antenna UE. The carrier bandwidth is B and the channel is represented by 𝐡∈ℂ^M, where the squared magnitude of each entry is β. Hence, ‖𝐡‖^2 = Mβ.
This is a typical model for a line-of-sight channel. The received downlink signal y is given by
y=𝐡^T𝐩x+n,
where x is the data signal, n∼𝒩_ℂ(0,B N_0 ) is the independent receiver noise, and 𝐩∈ℂ^M is the unit-norm precoding vector. Assuming the BS has perfect channel state information (CSI), the capacity of this multiple-input-single-output (MISO) channel is achieved by x ∼𝒩_ℂ(0,P), where P is the transmit power budget, and the precoding vector 𝐩=𝐡^*/𝐡. The data rate given by the capacity is <cit.>
C=B log_2 ( 1 + M P β/B N_0).
Notice that we have made modeling assumptions that enable exact mathematical analysis. However, the qualitative insights also hold for other types of channel realizations such as Rayleigh fading with a variance of β.
§.§ Power Consumption
The power consumption (PC) is modeled as in <cit.>, for single-layer transmission in a single band to a single-antenna UE. The total PC at the BS is
PC= P/κ+P_FIX+P_SYN+D_0 M + D_1 M+η C,
where κ∈ (0,1] is the power amplifier (PA) efficiency and P_FIX is the load-independent power consumption required for cooling, control signaling, backhaul infrastructure, and baseband processors. P_SYN is the load-independent power consumed by the local oscillator. D_0 is the power consumed by each transceiver chain (antenna port) of the BS (e.g., converters, mixer, filters, etc.). D_1 is the power consumed by the signal processing at the BS that scales with the number of antennas, including channel estimation and precoding. η regulates the power consumed by the signal coding at the BS and
the backhaul signaling, both of which is proportional to the capacity C. To simplify the notation and expose the optimization variables, we rewrite (<ref>) as
PC=P/κ + μ+( D_0+ν B) M + η B log_2 ( 1 + M P β/B N_0),
where μ=P_FIX+P_SYN denotes the fixed circuit power consumption from circuitry and synchronization and ν=D_1/B is introduced to highlight that the signal processing is carried out on the sampling rate (which is proportional to the bandwidth).
§.§ Energy Efficiency
In this paper, we focus our efforts on optimizing energy efficiency (EE) <cit.>, defined as the amount of data transferred per unit energy (measured in bit/Joule and equivalently bit/s/Watt). Dividing the channel capacity in (<ref>) by the power consumption in (<ref>), we can define the EE as
EE = B log_2 ( 1 + M P β/B N_0) /P/κ + μ + ( D_0+ν B )M + η B log_2 ( 1 + M P β/B N_0)
In the following sections, we will study the scaling behaviors of the EE with the bandwidth B, power P, and number of antennas M. In particular, we will derive the optimal pairwise ratios of these three design parameters, and then design an algorithm that finds the global optimum.
§ EE-OPTIMAL PARAMETER RATIOS
In this section, we will prove that the optimization variables B, P, and M tend to satisfy specific ratios at the EE-optimal system operation. These results serve as design guidelines.
§.§ Power per Bandwidth
By dividing the numerator and denominator of (<ref>) by B, the EE can be rewritten as
EE=log_2 ( 1 + M P β/B N_0) /P/(κ B) + μ/B + D_0 M/B+ν M + ηlog_2 ( 1 +M P β/B N_0) .
It is apparent from (<ref>) that power and bandwidth mostly appear as a ratio P/B, which is the power spectral density. The terms D_0 M /B and μ/B are the only ones that do not fit this structure. However, in practical situations in which enough bandwidth is available, these terms are expected to be negligible compared to the terms that depend on both the bandwidth and power. This leads to the following result:
When the term μ/B + D_0 M/B is negligible, the EE in (<ref>) is maximized when P and B satisfy the ratio
P/B = N_0 (e^u - 1)/(M β),
where
u = W( κ M^2 βν/(N_0 e) - 1/e ) + 1
and W (·) denotes the Lambert W function, defined by the equation x = W(x)e^W(x) for any x ∈ℂ.
By defining z= P/B, and letting μ,D_0 → 0, (<ref>) can be expressed as
log_2 ( 1 + M β/N_0 z ) /z/κ + ν M + ηlog_2 ( 1 + M β/ N_0 z ) .
The maximum in (<ref>) is obtained by utilizing <cit.>.
By inserting (<ref>) into (<ref>), an upper bound on the maximum EE is obtained as a function of M as
EE_max(M) = u log_2(e)/( N_0 (e^u - 1)/(κ M β) + ν M + η u log_2(e) ),
where the effects of μ and D_0 have been neglected.
We have EE_max(M)>0 for M>0 and lim_M→ 0EE_max(M)=lim_M→∞EE_max(M) = 0. This means that there exists an optimal finite value M_opt that maximizes (<ref>). This is illustrated in Fig. <ref>, with the simulation parameters given in Table <ref>. We can see that the optimum EE (encircled) is obtained at M=2 for β=-100 dB, M=6 for β=-110 dB, and M=20 for β=-120 dB. This indicates that the optimal M grows as β becomes smaller; for our set of constants, M increased by a factor of 10 when β decreased by 20 dB. The optimal power spectral density is P/B=19 mW/GHz for β=-100 dB, P/B=80 mW/GHz for β=-110 dB, and P/B=251 mW/GHz for β=-120 dB. This means that the ratio P/B should be shifted towards a higher value to overcome a larger pathloss. By optimizing M, we learn how to operate the BS to reach the optimum EE and how to design it with enough antennas for the pathloss conditions in its environment.
The resulting signal-to-noise ratio (SNR) is defined as
SNR = M P β/(B N_0)
and can also be inferred from Theorem <ref>. In this example, the SNR for the optimal solution is SNR=6.00dB if β=-100dB, SNR=5.71dB if β=-110dB, and SNR=6.00dB if β=-120dB. This means that 4-QAM is roughly the optimal modulation scheme. The capacity expression is slightly non-linear at this operating point, even if P and M enter linearly into the PC model.
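Theorem <ref> is straightforward to evaluate numerically; the following Python sketch uses scipy's Lambert W function (the parameter values below are illustrative placeholders, not the entries of Table <ref>):

```python
import numpy as np
from scipy.special import lambertw

def optimal_psd(M, beta, N0, kappa, nu):
    """Theorem 1: EE-optimal power spectral density P/B (mu, D0 negligible)."""
    u = np.real(lambertw(kappa * M ** 2 * beta * nu / (N0 * np.e) - 1 / np.e)) + 1
    return N0 * (np.exp(u) - 1) / (M * beta), u

N0, kappa, nu, beta = 4e-21, 0.4, 1e-10, 1e-11   # illustrative constants
psd, u = optimal_psd(M=6, beta=beta, N0=N0, kappa=kappa, nu=nu)
snr_db = 10 * np.log10(np.exp(u) - 1)            # SNR = e^u - 1 at the optimum
```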
The EE in (<ref>) is achieved by any values of P and B with the ratio in (<ref>) and that are sufficiently large to make the term μ/B + D_0 M/B negligible. Hence, we have the freedom to choose B to achieve any desired data rate
C = B u log_2(e).
In other words, if P and B are not limited by external factors, the EE and rate are permitted to grow together—there is no tradeoff between them as conventionally claimed <cit.>.
§.§ Power per Antenna
It is also possible to analytically optimize the transmit power per antenna P/M, which leads to further insights. In practice, there might be upper bounds on both of these parameters which prevent us from reaching the optimal ratio. However, in case P and M are not upper bounded, or the maximum of the EE in (<ref>) is reached without invalidating the bound M_max and P_max, the following is true:
If the solution (P_opt,M_opt) that maximizes the EE in (<ref>) for a given value of B satisfies P_opt≤ P_max and M_opt≤ M_max, then the following relation holds:
P_opt/M_opt=κ(D_0+ν B).
We assume B is fixed and define a=β/(BN_0), b=1/κ, c = D_0+ν B. Then, the EE optimization problem with respect to P and M becomes
minimize_P, M (μ + b P + c M)/( B log_2( 1 + a M P ) ).
We take the first-order derivatives of the objective function with respect to P and M, and equate them to zero, which leads to
ln(2) b/(B ln(aMP+1)) = ln(2) a M (μ+bP+cM)/( B (aMP+1) ln^2(aMP+1) ),
ln(2) c/(B ln(aMP+1)) = ln(2) a P (μ+bP+cM)/( B (aMP+1) ln^2(aMP+1) ).
If we divide the second equation by the first equation, we obtain (<ref>).
Theorem <ref> has several interesting implications. The right-hand-side in (<ref>) grows as either κ, D_0, ν or B increase. It is evident that as the computational cost ν B associated with an increased bandwidth grows or the PA is of higher quality (i.e., larger κ), we can afford to transmit more power per antenna when reaching the EE-optimal solution. Moreover, if we can afford to use more antennas to gain higher EE, we should also increase the transmit power to maintain the power per antenna.
Further insights are obtained by rearranging (<ref>) so that
P_opt/κ=(D_0+ν B)M_opt.
We recognize that P/κ + (D_0+ν B)M appears directly in the PC model in (<ref>). Hence, (<ref>) tells us that at the EE-optimal point, the input transmit power P/κ is always identical to the power (D_0+ν B)M, i.e., the passive power consumption of the transceiver chains for all antennas, D_0 M, plus the power dissipated in the analog-to-digital and digital-to-analog converters in these transceiver chains, ν B M.
We also note that the solution is independent of η and μ.
As a side note, we stress that the solution in (<ref>) and (<ref>) is strictly true only if M is allowed to attain a non-integer value. It will in practice only be approximately true.
§ SINGLE VARIABLE OPTIMIZATION
In this section, we show how to optimize the EE with respect to each of the variables P, M, and B when the other ones are fixed. These results give insights into the solution structure and are the necessary building blocks for developing a joint optimization algorithm in Section <ref>. The first result considers optimizing P.
The EE in (<ref>) for a given B,M is maximized with respect to P by
P = B N_0 (e^v - 1)/(M β),
where
v = W( κ M β(μ+(D_0+ν B)M)/(B N_0 e) - 1/e ) + 1
The EE with respect to P has a form that can be directly maximized by using <cit.>.
By rearranging in (<ref>), we can once again obtain an expression for P/B. However, in this case, v also depends on B, so the result is different from Theorem <ref>.
Next, we optimize B when other parameters are fixed. This process can be interpreted as a carrier bandwidth optimization.
The EE in (<ref>) is a unimodal function of B (for any fixed P,M) that is maximized at a unique B.
Since EE(B)>0 for B>0 and lim_B→∞EE(B)=lim_M→ 0^+EE(B)=0 there exists a single positive solution B_opt that minimizes EE(B). Furthermore, finding B_opt by setting ∂EE/∂ B=0 leads to the equation
( (B N_0/(M P β))( κμ + D_0 M κ + P ) + κμ + D_0 M κ + P ) log_e( 1 + M P β/(B N_0) ) =
M κν B + κμ + D_0 M κ + P.
This equation has only one solution since its left-hand side goes to ∞ for small B but is always decreasing as B grows and the right-hand side is a positive affine function of B. To show that the left-hand side is a monotonically decreasing function, let us take the derivative of it with respect to B and obtain
(κμ + D_0 M κ + P)( B N_0 log_e(1 + M P β/(B N_0)) - M P β )/(M P β B).
The above function is always non-positive since log_e(1+x)≤ x holds for x>0, i.e.,
B N_0 log_e(1 + M P β/(B N_0)) - M P β≤ 0.
This proves that EE(B) is a unimodal function of B and that the solution B_opt can be obtained numerically (e.g., through a bisection search). No closed-form solution exists.
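Given the unimodality, B_opt can be found by any bracketing search. A sketch using a bounded scalar search on -EE (equivalent in outcome to a bisection on the first-order condition), with pars a dict of the model constants, could be:

```python
from scipy.optimize import minimize_scalar

def ee(P, B, M, beta, N0, kappa, mu, D0, nu, eta):
    """The EE objective defined above."""
    C = B * np.log2(1 + M * P * beta / (B * N0))
    return C / (P / kappa + mu + (D0 + nu * B) * M + eta * C)

def optimal_B(P, M, pars, B_max, B_min=1e3):
    """Lemma 2: EE(B) is unimodal in B, so a bounded search finds B_opt."""
    res = minimize_scalar(lambda B: -ee(P, B, M, **pars),
                          bounds=(B_min, B_max), method='bounded')
    return res.x
```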
The results of Lemma <ref> and <ref> are illustrated in Fig. <ref>. In this figure, we consider a fixed number of transmit antennas M=20 and plot the EE for varying values of P and B. The optimal P for a given B,M is solved by Lemma <ref> and shown by the black line. The optimal B for a given P,M is solved by Lemma <ref> and shown by the red line. Furthermore, for large values P and B, the two lines converge to the optimal ratio as explained in Theorem <ref>.
Finally, we optimize M when other parameters are fixed.
The EE in (<ref>) for a given B,P is maximized with respect to M by
M = B N_0 (e^w - 1)/(P β),
where
w = W( P β(P/κ+μ)/(B N_0 e (D_0 + ν B)) - 1/e ) + 1
The EE can be directly maximized by using <cit.>.
In Fig. <ref>, we plot the optimal M given by Lemma <ref> for varying values of B and P. We observe that the optimal M attains a wide range of values, depending on P and B.
§.§ Computational efficiency does not affect the solution
As a corollary to the main results, it is interesting to note that the parameter η (i.e., the PC constant proportional to the achieved rate) has no impact on the optimal parameters (P_opt, B_opt, M_opt).
The optimal solution argument (P_opt, B_opt, M_opt) that maximizes (<ref>) is independent of η.
We define
f = B log_2( 1 + M P β/(B N_0) )/( P/κ + μ + ( D_0+ν B )M ),
which is the EE with η=0.
The EE in (<ref>) can then be rewritten as
EE = f/(1 + η f).
Equating EE'=0 (with respect to any variable) yields
( f'(1+η f) - f (η f') )/(1+η f)^2 = f'/(1+η f)^2 = 0,
which has the only solution f'=0. This implies that EE is maximized precisely when f is maximized, so the optimal parameters are the same.
The consequence is that optimizing EE in (<ref>) can be facilitated by letting η=0. A similar observation was made in <cit.> but for a different system model.
§ ALGORITHM FOR OPTIMIZING THE EE
An algorithm that utilizes our previous results in Lemma 1-3 and which converges to the optimal solution is provided in this section. The goal is to solve the following joint EE maximization problem:
maximize_P,B,M EE
subject to 0 < P ≤ P_max,
0 < B ≤ B_max,
M∈{1,… ,M_max},
with the EE defined as in (<ref>).
The following result on whether P or B should be maximized is needed.
The constrained EE maximization problem in (<ref>) is solved at the boundary where P=P_max or B=B_max.
Let us introduce the variables z and t instead of P and B as z=P/B, t=1/P, and express (<ref>) as
maximize_z,t,M (1/z) log_2( 1 + (M β/N_0) z )/( 1/κ + μ t + D_0 M t + ν M/z + (η/z) log_2( 1 + (M β/N_0) z ) )
subject to t ≥ 1/P_max,
zt ≥ 1/B_max,
M∈{1,… ,M_max}.
We see that, without the first and second constraints, the problem (<ref>) attains its optimum at t=0, which lies outside the feasible region. Hence no interior-point solution of (<ref>) exists, which proves the theorem.
We propose Algorithm 1 to solve this problem. Since Theorem <ref> establishes that the EE is maximized at either maximum transmit power or maximum bandwidth, we can invoke Lemma <ref> at maximum bandwidth and compare its solution to Lemma <ref> at maximum power; the solution with the highest EE is used. In the next step (row 15), we use Lemma <ref> to optimize the number of transmit antennas. We then alternate this optimization until convergence, i.e., again find the best (B,P) as above and again update M. If P, B, or M exceeds its limit, it is set to its maximum value. Because the number of transmit antennas must be an integer, the final step considers the two closest integer antenna numbers, computes the respective optimal power and bandwidth, and selects the combination achieving the highest EE. The evolution of P and M under Algorithm 1 is shown in Fig. <ref>.
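For concreteness, the alternating structure of Algorithm 1 can be sketched as follows, using the helpers from the sketches above; the stopping rule and final integer rounding are simplified relative to the paper's pseudo-code:

```python
def algorithm1(pars, P_max, B_max, M_max, rounds=20):
    """Joint EE maximization sketch: alternate (P,B) boundary candidates and M."""
    beta, N0, kappa, mu, D0, nu = (pars[k] for k in
                                   ('beta', 'N0', 'kappa', 'mu', 'D0', 'nu'))
    M = 1.0
    for _ in range(rounds):
        # Candidate 1 (Lemma 1): B = B_max, closed-form P, clipped to P_max.
        v = np.real(lambertw(kappa * M * beta * (mu + (D0 + nu * B_max) * M)
                             / (B_max * N0 * np.e) - 1 / np.e)) + 1
        cand1 = (min(B_max * N0 * (np.exp(v) - 1) / (M * beta), P_max), B_max)
        # Candidate 2 (Lemma 2): P = P_max, numerically optimal B.
        cand2 = (P_max, optimal_B(P_max, M, pars, B_max))
        P, B = max(cand1, cand2, key=lambda pb: ee(*pb, M, **pars))
        # Lemma 3: closed-form update of M, clipped to [1, M_max].
        w = np.real(lambertw(P * beta * (P / kappa + mu)
                             / (B * N0 * np.e * (D0 + nu * B)) - 1 / np.e)) + 1
        M = float(np.clip(B * N0 * (np.exp(w) - 1) / (P * beta), 1, M_max))
    # Final step: compare the two nearest integer antenna counts.
    best_M = max((int(np.floor(M)), int(np.ceil(M))),
                 key=lambda Mi: ee(P, B, Mi, **pars))
    return P, B, best_M
```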
§ CONCLUSION
In this study, we have explored the fundamentals of EE optimization for wireless communication links. Our investigation has led to several key insights, shedding light on the intricate relationship between power, bandwidth, and the number of transmit antennas. In scenarios without constraints on power or bandwidth, our analysis demonstrates that the ratio of power to bandwidth converges to an optimal value. This implies that an excess of bandwidth does not necessarily lead to improved EE. On the other hand, it implies that we can achieve any data rate simultaneously with the maximum EE, so there is no fundamental tradeoff.
Our results further emphasize that a finite number of transmit antennas offers the highest EE. A novel and intriguing discovery is that the total transmit power equals the total transceiver power of the antennas at the optimum, provided that the transmit power does not exceed its maximum limit. To facilitate the application of our findings, we have designed an algorithm that rapidly converges to the optimal solution of the joint EE maximization with respect to transmit power, bandwidth, and number of antennas. These findings contribute to a deeper understanding of energy efficiency in this field and guide the development of energy-efficient operation of more complex wireless networks.
IEEEtran
|
http://arxiv.org/abs/2409.02444v1 | 20240904044421 | USV-AUV Collaboration Framework for Underwater Tasks under Extreme Sea Conditions | [
"Jingzehua Xu",
"Guanwen Xie",
"Xinqi Wang",
"Yiyuan Yang",
"Shuai Zhang"
] | cs.RO | [
"cs.RO",
"cs.SY",
"eess.SY"
] |
USV-AUV Collaboration Framework for Underwater Tasks under Extreme Sea Conditions
Jingzehua Xu1^,+,
Guanwen Xie1^,+,
Xinqi Wang2,
Yiyuan Yang3,
Shuai Zhang4
1Tsinghua Shenzhen International Graduate School, Tsinghua University, China
2College of Information Science and Electronic Engineering, Zhejiang University, China
3Department of Computer Science, University of Oxford, United Kingdom
4Department of Data Science, New Jersey Institute of Technology, USA
Email: {xjzh23, xgw24}@mails.tsinghua.edu.cn, [email protected], [email protected], [email protected]
^+ These authors contributed equally to this work.
==============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Autonomous underwater vehicles (AUVs) are valuable for ocean exploration due to their flexibility and ability to carry communication and detection units. Nevertheless, AUVs alone often face challenges in harsh and extreme sea conditions. This study introduces an unmanned surface vehicle (USV)–AUV collaboration framework, which includes high-precision multi-AUV positioning based on USV path planning via Fisher information matrix optimization, and reinforcement learning for multi-AUV cooperative tasks. Applied to a multi-AUV underwater data collection task scenario, extensive simulations validate the framework's feasibility and superior performance, highlighting exceptional coordination and robustness under extreme sea conditions. The simulation code will be made available as open-source to foster future research in this area.
Autonomous underwater vehicle, Unmanned surface vehicle, Fisher information matrix, Reinforcement learning, Extreme sea conditions, Underwater tasks
§ INTRODUCTION
Autonomous underwater vehicles (AUVs), especially those multi-AUV systems, have attracted considerable interest for their versatility and mobility in applications such as environmental monitoring, seabed exploration, and biological research <cit.>. The effectiveness of these vehicles in underwater tasks is largely dependent on precise and reliable positioning and highly efficient control strategies <cit.>, particularly in challenging and extreme sea conditions <cit.>. Accurate positioning is crucial as it boosts operational efficiency and safety by preventing potential damage or accidents. Moreover, AUVs equipped with adaptive control policies <cit.> and autonomous decision-making capabilities are better suited to handle dynamic environments <cit.>, providing enhanced resistance to interference and increased practical value <cit.>.
Current positioning techniques for AUVs mainly consist of global positioning systems, inertial navigation systems, and ultrasonic positioning. However, these methods are vulnerable to problems like water refraction and scattering, signal attenuation, transmission delays <cit.>, and error accumulation <cit.>. In addition, traditional control strategies for multi-AUV systems depend heavily on mathematical models or model-based algorithms <cit.>. Model-based approaches face difficulties when controller parameters shift in dynamic environments <cit.>, and both algorithms often struggle to predict the future maneuvers and behaviors of non-cooperative targets, thereby limiting their scalability and adaptability <cit.>.
To address the aforementioned challenges, numerous researchers have concentrated on the development of USV-AUV co-localization and intelligent multi-AUV collaboration. Bahr et al. introduced a distributed algorithm that leverages multiple AUVs to dynamically determine the locally optimal position of the beacon vehicle, utilizing data from the survey vehicle's broadcast communication <cit.>. Vasilijevi et al. constructed the Internet of underwater things using unmanned surface vehicles (USVs) to enhance underwater positioning efficiency <cit.>. Jiang et al. applied a multi-agent proximal policy optimization reinforcement learning (RL) algorithm to direct efficient and energy-saving data collection for AUV swarms in unknown environments based on an objective uncertainty map <cit.>. Wang et al. proposed a collaborative data collection strategy for multi-AUVs employing local-global deep Q-learning and data value, classifying data into urgent and non-urgent categories to facilitate hybrid data collection meeting various temporal demands <cit.>. Nevertheless, these methods encounter limitations in complex underwater environments: as the number of AUVs and sensors increases, the underwater acoustic channel becomes more intricate, computational complexity rises <cit.>, and there are stringent requirements for battery life and operational costs of the equipment. Furthermore, conventional AUV swarm control techniques such as heuristic algorithms <cit.>, neural networks <cit.>, and game theory <cit.>, although effective for specific tasks, heavily rely on extensive prior information. In the absence of such information, the performance of these methods significantly declines, particularly under extreme sea conditions.
Based on the above analysis, in this study we propose a USV-AUV collaboration framework that improves the performance of AUVs completing underwater tasks under extreme sea conditions. In summary, the contributions of this paper include the following:
* We realize accurate positioning of AUVs via USV path planning, by maximizing the determinant of the Fisher information matrix (FIM) of the system. Based on this, by integrating environment-awareness into the state space and USV-AUV collaboration into the reward function of the Markov decision process (MDP), we further use RL to empower the multi-AUV system with intelligence and adaptability to extreme sea conditions.
* We innovatively leverage the two-dimensional tidal wave equations and an ocean turbulence model to simulate extreme sea conditions, which affect the positioning accuracy and working efficiency of the multi-AUV system.
* Through comprehensive experiments on the underwater data collection task, our framework demonstrates strong feasibility and excellent performance in balancing multi-objective optimization under extreme sea conditions.
§ METHODOLOGY
In this section, we introduce the proposed USV-AUV collaboration framework, which comprises two main components: high-precision location of multi-AUV using USV path planning via FIM optimization, and RL enabled multi-AUV cooperative work. Besides, we also present simulation of extreme sea conditions using two-dimensional shallow water equation and ocean turbulence model.
§.§ USV Path Planning Based on Fisher Information Matrix Optimization
Our framework realizes accurate positioning of the AUVs via USV path planning, by maximizing the determinant of the system's FIM. Central to this approach is that the FIM determinant is negatively correlated with the system's uncertainty.
Assume the coordinate of the USV is denoted as (x, y, η), and the coordinates of the k-th AUV are (x_k, y_k, z_k).
p(𝑍,𝑋) = ∏_k=1^m exp{ -1/2 [𝑍_k - ℎ_k(𝑋)]^T 𝑅^-1 [𝑍_k - ℎ_k(𝑋)] }/√(2π det(𝑅)),
𝑍_k = ℎ_k(𝑋) + 𝑢_k,
S_k = √((x_k - x)^2 + (y_k - y)^2 + z_k^2),
where the target state vector is denoted by 𝑋=[x,y]^T, while ℎ_k(𝑋) = [Δφ_x,k, Δφ_y,k]^T stands for the phase-difference vector between receiving units, with the two elements Δφ_x,k = (2π f d/(c S_k))(x_k - x) and Δφ_y,k = (2π f d/(c S_k))(y_k - y), where c represents the speed of sound and f indicates the signal frequency. Additionally, 𝑢_k is zero-mean Gaussian white noise, and the measurement noise covariance matrix is 𝑅 = σ^2𝐼.
Then FIM of the system is subsequently obtained by determining the second derivative of the log-likelihood function
𝐽_m = [[ - E [∂^2 l n p (𝑍 , 𝑋)/∂ x^2] - E [∂^2 l n p (𝑍 , 𝑋)/∂ x∂ y]; - E [∂^2 l n p (𝑍 , 𝑋)/∂ y∂ x] - E [∂^2 l n p (𝑍 , 𝑋)/∂ y^2] ]].
Assume there are totally m AUVs, the determinant of the FIM can be simplified to the final expression after derivation, which can be expressed as
det(𝐽_m) = (.4 π^2 f^2 d^2/σ^2 c^2.)^2[3 m sin^2γ_0/S_0^4+( sin^4γ_0+ 1 )^2/S_0^4χ],
r_m = argmax{det(𝐽_m)},
where sinγ_0=z_k/S_0 and χ=∑_1≤ i<j≤ m sin^2α_ij, with α_ij=φ_i-φ_j representing the angle between the projections of AUVs i and j as seen from the USV. By maximizing the determinant det(𝐽_m), we can determine the optimal horizontal distance r_m between the USV and the multiple AUVs, as illustrated in Fig. 2.
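For concreteness, the following minimal Python sketch evaluates the determinant expression above and performs the grid search for r_m; the sonar frequency f, array spacing d, noise level σ, sound speed c, depth z, and AUV bearings are illustrative assumptions rather than values from this work.

```python
import numpy as np

def fim_determinant(r, z, phis, f=15e3, d=0.3, sigma=1.0, c=1500.0):
    """det(J_m) for AUVs at depth z, horizontal radius r from the USV, and
    projection angles phis (radians), following the closed form above.
    The default f, d, sigma, c are illustrative placeholders."""
    S0 = np.hypot(r, z)                  # slant range USV -> AUV
    sin_g = z / S0                       # sin(gamma_0)
    m = len(phis)
    chi = sum(np.sin(phis[i] - phis[j]) ** 2
              for i in range(m) for j in range(i + 1, m))
    scale = (4 * np.pi**2 * f**2 * d**2 / (sigma**2 * c**2)) ** 2
    return scale * (3 * m * sin_g**2 + (sin_g**4 + 1) ** 2 * chi) / S0**4

# Grid search for the optimal horizontal distance r_m = argmax det(J_m).
z, phis = 50.0, np.deg2rad([0.0, 90.0])  # two AUVs, 90 degrees apart
radii = np.linspace(1.0, 200.0, 2000)
r_m = radii[np.argmax([fim_determinant(r, z, phis) for r in radii])]
print(f"optimal horizontal distance r_m = {r_m:.1f} m")
```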
§.§ Reinforcement Learning Enabled Multi-AUV Collaboration
Our framework leverages RL to train the multi-AUV system for collaborative operations. Rather than relying solely on standard MDP-based RL algorithms, we modify the design of the state space and reward function of the standard MDP. Specifically, we incorporate the ocean-current velocity perceived by AUV k into its state space, represented as
s_k=V_c ( P_k(t) ).
Furthermore, we integrate the original reward functions with a distance-ratio term between each AUV and the USV, which can be denoted as
r_k(t)=(l_max^k ↔ U / l^k ↔ U(t)),
where l_max^k ↔ U and l^k ↔ U(t) denote the maximum distance and the current distance between AUV k and the USV, respectively. Through extensive epochs of RL training, the collaborative behavior and decision-making capabilities of the multi-AUV system, enhanced with environment-awareness, progressively converge to an expert level.
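A minimal sketch of the two MDP modifications above is given below: the observation is augmented with the perceived current velocity, and the task reward is shaped with the USV-distance ratio. Combining the two rewards multiplicatively is our assumption; the text states only that they are integrated.

```python
import numpy as np

def augmented_state(base_state, current_velocity):
    """Append the locally perceived ocean-current velocity V_c(P_k(t))
    to AUV k's observation vector."""
    return np.concatenate([base_state, current_velocity])

def shaped_reward(task_reward, dist_to_usv, max_dist):
    """Collaboration term l_max / l(t): the reward grows as AUV k stays
    close to the USV; the multiplicative combination is an assumption."""
    return task_reward * (max_dist / max(dist_to_usv, 1e-6))
```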
Combining Sections 2-A and 2-B, we obtain the proposed USV-AUV collaboration framework, whose pseudo-code is listed in Algorithm 1; a plausible outline of this loop is sketched below.
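Since Algorithm 1 itself is not reproduced in this excerpt, the following outline is only a plausible reconstruction of the training loop, with env and agents as hypothetical interfaces.

```python
def train_usv_auv(env, agents, episodes=1000):
    """Hypothetical outline of the collaboration loop: per step, (i) plan
    the USV position by maximizing det(J_m), (ii) build the current-aware
    AUV states, (iii) act and learn with the distance-shaped reward."""
    for _ in range(episodes):
        states = env.reset()
        done = False
        while not done:
            usv_pos = env.plan_usv_position()        # argmax det(J_m)
            actions = [ag.act(s) for ag, s in zip(agents, states)]
            next_states, rewards, done = env.step(actions, usv_pos)
            for ag, s, a, r, ns in zip(agents, states, actions,
                                       rewards, next_states):
                ag.update(s, a, r, ns)               # e.g., DDPG/SAC update
            states = next_states
```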
§.§ Simulation of Extreme Sea Conditions
Given that the USV operates on the surface and the AUVs navigate underwater, the USV-AUV collaboration framework is highly susceptible to disturbances from severe waves and ocean turbulence.
Based on this intuition, our study employs two-dimensional shallow water equations to simulate the sea surface with wave dynamics. If we denote the wave velocity as V_w=[u,v], and the gravitational acceleration as g, the water level η at coordinate point (x^',y^') can be calculated by
∂ u/∂ t + g ∂η/∂ x^' = 0,
∂ v/∂ t + g ∂η/∂ y^' = 0,
∂η/∂ t + ∂(u · h)/∂ x^' + ∂(v · h)/∂ y^' = 0,
where h represents the water depth, and we can further derive the expression
η = R_L (cos k x^'/cos k L) e^{-iω t},
where R_L denotes the variable associated with the offshore length L, k = 2π/λ, and the wavelength λ = (2π/ω)√(g h). Consequently, we observe that cos kL becomes zero when L = λ/4, resulting in a significant rise in the water level.
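A minimal explicit finite-difference integration of the shallow-water equations above, on a doubly periodic grid, might look as follows; the grid size, depth, and time step are illustrative choices that satisfy the CFL condition, not parameters from this study.

```python
import numpy as np

g, h, dx, dt, N = 9.81, 20.0, 10.0, 0.05, 128   # illustrative values
u = np.zeros((N, N)); v = np.zeros((N, N))
r2 = (np.arange(N) - N // 2) ** 2
eta = np.exp(-r2[:, None] / 50.0 - r2[None, :] / 50.0)  # initial surface bump

def step(u, v, eta):
    """One explicit Euler step with centered differences and periodic BCs."""
    deta_dx = (np.roll(eta, -1, 0) - np.roll(eta, 1, 0)) / (2 * dx)
    deta_dy = (np.roll(eta, -1, 1) - np.roll(eta, 1, 1)) / (2 * dx)
    u2, v2 = u - dt * g * deta_dx, v - dt * g * deta_dy
    div = ((np.roll(u2 * h, -1, 0) - np.roll(u2 * h, 1, 0))
           + (np.roll(v2 * h, -1, 1) - np.roll(v2 * h, 1, 1))) / (2 * dx)
    return u2, v2, eta - dt * div

for _ in range(200):
    u, v, eta = step(u, v, eta)
```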
Additionally, this study employs the superposition of multiple viscous vortex functions, derived from the simplified Navier-Stokes equations, to simulate ocean turbulence. The functions are presented as follows:
V_x(P_k(t))=-Γ·(y^'-y_0)/(2π‖P_k(t)-P_0‖_2^2)·(1-e^-‖P_k(t)-P_0‖_2^2/δ^2),
V_y(P_k(t))=-Γ·(x^'-x_0)/(2π‖P_k(t)-P_0‖_2^2)·(1-e^-‖P_k(t)-P_0‖_2^2/δ^2),
ϖ(P_k(t))=Γ/(πδ^2)· e^-‖P_k(t)-P_0‖_2^2/δ^2,
where P_k(t) and P_0 represent the current location of the AUV k and the coordinate vector of Lamb vortex center, respectively. V_x(P_k(t)) and V_y(P_k(t)) are the velocities of the ocean turbulence on the X and Y axes perceived by AUV k at position P_k(t) at time t, respectively. Meanwhile, ϖ, δ, and Γ stand for the vorticity, radius, and intensity of the vortex, respectively.
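The vortex functions above translate directly into code. In the sketch below, Γ and δ are illustrative placeholders; multiple vortices are superposed by summing the returned velocities, and the sign conventions follow the equations exactly as printed (the conventional Lamb-Oseen vortex flips the sign of one velocity component).

```python
import numpy as np

def lamb_vortex(P, P0, Gamma=1.0, delta=5.0):
    """Turbulence velocity (V_x, V_y) and vorticity at AUV position P due
    to a single Lamb vortex centred at P0; Gamma and delta are placeholders."""
    dx, dy = P[0] - P0[0], P[1] - P0[1]
    r2 = dx * dx + dy * dy + 1e-12        # avoid division by zero at the core
    factor = (1.0 - np.exp(-r2 / delta**2)) / (2.0 * np.pi * r2)
    vx = -Gamma * dy * factor
    vy = -Gamma * dx * factor
    vort = Gamma / (np.pi * delta**2) * np.exp(-r2 / delta**2)
    return vx, vy, vort
```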
Finally, we can employ the finite difference method to simulate sea waves and the ocean turbulence model, which together constitute the extreme sea conditions in this study.
§ EXPERIMENTS
In this section, we verify the effectiveness of the proposed USV-AUV collaboration framework under extreme sea conditions through comprehensive simulation experiments. Furthermore, we present the experimental results with further analysis and discussion.
§.§ Task Description and Settings
Since open-source underwater tasks are scarce, we chose the multi-AUV data collection task as a representative example to evaluate our framework. This task involves utilizing multiple AUVs to gather data from underwater sensor nodes within the Internet of underwater things (IoUT), with multiple objectives such as maximizing the total data rate, avoiding collisions, and minimizing energy consumption. The parameters used in this paper are detailed in Table 1. For additional details and parameters related to the task, please refer to the previous work <cit.>.
§.§ Experiment Results and Analysis
To evaluate the feasibility of our framework, we first employed two mainstream RL algorithms, DDPG and SAC, to train two AUVs positioned by the USV on the sea surface to collaboratively complete the underwater data collection task under both ideal and extreme sea conditions, respectively. As illustrated in Fig. 3, the training curves progressively converge to expert-level performance, indicating that the AUVs have successfully acquired the expert policy through RL training. Additionally, we performed environmental generalization experiments to compare performance in both ideal and extreme sea conditions. The corresponding results (here ISC, ESC denote ideal and extreme sea conditions, while SDR, EC, ARPS indicate sum data rate, energy consumption, and average reward per timestep, respectively), presented in Table 2, show that despite the presence of ocean waves and turbulence, the performance remains comparable in both scenarios. This demonstrates the framework's high robustness under extreme sea conditions.
Moreover, we utilized the expert policy trained with the DDPG algorithm to guide the multi-AUV system in an underwater data collection task under extreme sea conditions. The trajectories of the AUVs and the USV during a single RL training episode are depicted in Fig. (4a). To further evaluate the advantages of the USV-AUV collaboration framework, we also presented the positioning error of the multi-AUV system. We examined three different scenarios: employing USV path planning based on FIM optimization, fixing the USV at coordinates (0,0), and fixing it at (100,100). As shown in Fig. (4b), the first approach achieves the lowest positioning error, illustrating the superior performance of USBL positioning via USV path planning using FIM optimization, even under extreme sea conditions.
§ CONCLUSION
In this paper, we propose a USV–AUV collaboration framework. The two parts of our framework, including high-precision
multi-AUV location using USV path planning by FIM optimization and RL training
for multi-AUV cooperative tasks, jointly enhance the performance of multi-AUV underwater tasks in extreme sea conditions. Experimental results from the underwater data collection task verify the feasibility and superior performance of the proposed framework, which achieves tight coordination between the USV and AUVs while showcasing excellent robustness in extreme sea conditions. To accelerate research in this field, the simulation code will be released as open source in the future.
1
X. Hou, J. Wang, T. Bai, Y. Deng, Y. Ren, and L. Hanzo, “Environment-aware AUV trajectory design and resource management for multi-tier underwater computing,” IEEE Journal on Selected Areas in Communications, vol. 41, no. 2, pp. 474–490, 2023.
2
H. Xing, Y. Liu, S. Guo, L. Shi, X. Hou, W. Liu, and Y. Zhao, “A multi-sensor fusion self-localization system of a miniature underwater robot in structured and GPS-denied environments,” IEEE Sensors Journal, vol. 21, no. 23, pp. 27 136–27 146, 2021.
3
J. Du, B. Jiang, C. Jiang, Y. Shi, and Z. Han, “Gradient and channel aware dynamic scheduling for over-the-air computation in federated edge learning systems,” IEEE Journal on Selected Areas in Communications, vol. 41, no. 4, pp. 1035–1050, 2023.
4
Z. Zhang, J. Xu, G. Xie, J. Wang, Z. Han, and Y. Ren, “Environment- and energy-aware AUV-assisted data collection for the Internet of Underwater Things,” IEEE Internet of Things Journal, vol. 11, no. 15, pp. 26 406–26 418, 2024.
5
C. Hu, S. Zhu, Y. Liang, Z. Mu, and W. Song, “Visual-pressure fusion for underwater robot localization with online initialization,” IEEE Robotics and Automation Letters, vol. 6, no. 4, pp. 8426–8433, 2021.
6
C. Lin, G. Han, M. Guizani, Y. Bi, J. Du, and L. Shu, “An SDN architecture for AUV-based underwater wireless networks to enable cooperative underwater search,” IEEE Wireless Communications, vol. 27, no. 3, pp. 132–139, 2020.
7
J. Xu, Z. Zhang, J. Wang, Z. Han, and Y. Ren, “Multi-AUV pursuit-evasion game in the Internet of Underwater Things: An efficient training framework via offline reinforcement learning,” IEEE Internet of Things Journal, pp. 1–1, 2024.
8
L. Zhang, C. Tang, P. Chen, and Y. Zhang, “Gaussian parameterized information aided distributed cooperative underwater positioning algorithm,” IEEE Access, vol. 8, pp. 64 634–64 645, 2020.
10
Z. Fang, J. Wang, J. Du, X. Hou, Y. Ren, and Z. Han, “Stochastic optimization-aided energy-efficient information collection in Internet of Underwater Things networks,” IEEE Internet of Things Journal, vol. 9, no. 3, pp. 1775–1789, 2022.
9
Y. Wu, K. H. Low, and C. Lv, “Cooperative path planning for heterogeneous unmanned vehicles in a search-and-track mission aiming at an underwater target,” IEEE Transactions on Vehicular Technology, vol. 69, no. 6, pp. 6782–6787, 2020.
11
A. Bahr, J. J. Leonard, and A. Martinoli, “Dynamic positioning of beacon vehicles for cooperative underwater navigation,” in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012, pp. 3760–3767.
12
A. Vasilijevic, D. Nad, and N. Miskovic, “Autonomous surface vehicles as positioning and communications satellites for the marine operational environment — step toward Internet of Underwater Things,” in 2018 IEEE 8th International Conference on Underwater System Technology: Theory and Applications (USYS), 2018, pp. 1–5.
14
B. Jiang, J. Du, C. Jiang, Z. Han, and M. Debbah, “Underwater searching and multiround data collection via AUV swarms: An energy-efficient AoI-aware MAPPO approach,” IEEE Internet of Things Journal, vol. 11, no. 7, pp. 12 768–12 782, 2024.
15
J. Wang, S. Liu, W. Shi, G. Han, and S. Yan, “A multi-AUV collaborative ocean data collection method based on LG-DQN and data value,” IEEE Internet of Things Journal, vol. 11, no. 5, pp. 9086–9106, 2024.
13
J. Du, C. Jiang, J. Wang, Y. Ren, and M. Debbah, “Machine learning for 6G wireless networks: Carrying forward enhanced bandwidth, massive access, and ultrareliable/low-latency service,” IEEE Vehicular Technology Magazine, vol. 15, no. 4, pp. 122–134, 2020.
16
H. Zhao, J. Yan, X. Luo, and X. Guan, “Ubiquitous tracking for autonomous underwater vehicle with IoUT: A rigid-graph-based solution,” IEEE Internet of Things Journal, vol. 8, no. 18, pp. 14 094–14 109, 2021.
17
K. Zhang, H. Wang, H. Zhang, N. Luo, and J. Ren, “Target tracking of UUV based on maximum correntropy high-order UGHF,” IEEE Transactions on Instrumentation and Measurement, vol. 72, pp. 1–16, 2023.
|
http://arxiv.org/abs/2409.03056v1 | 20240904200844 | Skyrmion soliton motion on periodic substrates by atomistic and particle based simulations | [
"J. C. B. Souza",
"N. P. Vizarim",
"C. J. O. Reichhardt",
"C. Reichhardt",
"P. A. Venegas"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall"
] |
§ INTRODUCTION
Solitons are nonlinear wave perturbations <cit.>
that have been observed in a wide range of different science fields,
including
mathematics <cit.>,
chemistry <cit.>,
magnetism <cit.>,
and biology <cit.>.
Due to their nonlinear spin dynamics,
magnetic systems are particularly well able to stabilize magnetic solitons
or nonlinear magnetic textures,
which can take the form of
magnetic vortices <cit.>,
magnon drops <cit.>,
magnetic skyrmions <cit.>,
hopfions <cit.>,
and bimerons <cit.>.
Solitons or kinks can also be stabilized in assemblies of
particles coupled to a
periodic substrate <cit.>.
When the number of particles equals the number of
potential minima, the system should be free of kinks;
however, if the number of particles
is slightly higher or lower so that the system is off commensuration,
localized kinks appear that depin under applied drive levels which can
be much lower than the drive at which the bulk of the particles depin.
Kink motion at incommensurate fillings on periodic substrates has been
studied for colloidal particles
<cit.>,
superconducting vortices <cit.> and various
friction models <cit.>.
Since skyrmions are also particle-like textures, when they are placed
on a periodic substrate, kinks or solitons could also be stabilized
in the skyrmion lattice.
Recent work by Vizarim et al.<cit.> has shown the
possibility of creating and moving a soliton along
quasi one-dimensional chains
of skyrmions using a particle based model.
After this study, Souza et al.<cit.>
demonstrated with an atomistic model
that soliton motion along skyrmion chains
is stable and that
the soliton exhibited higher velocities than
free skyrmions.
The work on the quasi-one-dimensional systems opened
the possibility of using soliton or kink motion in
magnetic skyrmions as an information transfer method
for new types of soliton-based devices
employing skyrmions. An open question
is whether soliton motion through skyrmion lattices
remains stable
in more realistic fully two-dimensional systems,
whether the soliton behavior can be captured using both particle-based
and atomistic models,
and where the two models agree or disagree.
Magnetic skyrmions are particle-like topologically
protected magnetic textures <cit.>
that exhibit many similarities to overdamped particles:
they minimize their repulsive interactions by forming a
triangular array, can be set in motion
by the application of external drives, and can interact
with material defects in a variety of ways
<cit.>.
The key difference between skyrmions and
other overdamped particles is the presence of a
non-dissipative Magnus force that causes
skyrmions to move in the absence of disorder at
an angle known as the intrinsic skyrmion Hall angle,
θ^int_sk, with respect to the external
driving force
<cit.>.
In order to simulate all of the degrees of freedom of a skyrmion,
it is necessary to
use computationally expensive models, such as
the atomistic model <cit.>, that can capture
behavior such as skyrmion annihilation,
creation and deformation.
To mitigate the computational expense of skyrmion simulations,
Lin et al.<cit.> proposed
a particle-based model for skyrmions that
assumes the skyrmions remain rigid, an approximation that
is valid for low skyrmion densities and low external currents.
Using atomistic simulations and particle based simulations,
we study the dynamical behavior of soliton
motion in magnetic skyrmion lattices on square
and triangular substrates
just away from commensuration that are subjected to an
external driving force.
For the square substrate, both models produce
soliton motion along a 45^∘ angle;
however, the atomistic model exhibits an
additional 30^∘
soliton motion that is absent in the particle based model. At higher
drives, the entire skyrmion lattice depins, and the transitions
between the different soliton and skyrmion flow phases are visible
as changes in the transport curves and average Hall angle of
the kink or skyrmion motion.
For the triangular substrate, we also find regimes of stable soliton
motion, but the models
show substantial differences.
The atomistic model exhibits soliton motion
along a 30^∘ angle for a wide range of external
driving forces, whereas the particle model produces
soliton motion along a 45^∘ angle for a small
range of external driving forces.
In the particle model, the trajectory of the soliton is
much more meandering, resulting in flow around an average
angle of
45^∘, while
in the atomistic model,
the finite size of the skyrmions reduces the amount of
meandering flow that occurs, causing motion along
30^∘ to be more stable.
§ METHODS
We model Néel skyrmions in thin films
with a magnetic field applied perpendicular
to the film at zero temperature, T=0 K, with
periodic boundary conditions along the x and y directions.
We use two distinct models, the atomistic model
and the particle based model. Common
substrate defect arrangements of N_m defects are used in both
models. The square array of defects is modeled as
ϕ_S(x, y)=A/4[cos(2π x/a_0)+cos(2π y/a_0) + 2],
where A is the defect strength and a_0 is the substrate
lattice constant. The values of A and a_0 are different
between each model and are listed below in the subsections describing
the individual models.
The triangular array of defects is modeled as
ϕ_T(x, y)=∑_i=1^3A/2[cos(2π b_i/a_0) + 1], with b_i=xcos(θ_i)-ysin(θ_i)+a_0/2 and
θ_1=π/6, θ_2=π/2, θ_3=5π/6.
As in the square array, A is
the defect strength and a_0 the substrate lattice constant, with
different
values of A and a_0 used for each model.
For both models we choose values of a_0 such that
there are N_m=36 minima in the defect arrangement.
We select the number of skyrmions N_sk to be just above
commensuration with the substrate,
N_sk=N_m+1=37.
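For reference, the two substrate potentials above can be evaluated as in the following sketch; A and a_0 are placeholders to be set per model as described in the subsections below.

```python
import numpy as np

A, a0 = 1.0, 14.0   # placeholders; each model uses its own A and a_0

def phi_square(x, y):
    """Square defect array potential phi_S(x, y)."""
    return A / 4 * (np.cos(2 * np.pi * x / a0) + np.cos(2 * np.pi * y / a0) + 2)

def phi_triangular(x, y):
    """Triangular defect array potential phi_T(x, y)."""
    thetas = (np.pi / 6, np.pi / 2, 5 * np.pi / 6)
    total = 0.0
    for t in thetas:
        b = x * np.cos(t) - y * np.sin(t) + a0 / 2
        total += A / 2 * (np.cos(2 * np.pi * b / a0) + 1)
    return total
```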
In fig. <ref>(a,b), we show a three-dimensional rendering
of the square and triangular defect arrays. Each potential minimum captures a
single skyrmion, and we add one additional skyrmion to the sample in
order to create
a kink or soliton
that can depin at a much lower drive
than the commensurate skyrmions.
§.§ Atomistic Simulations
The atomistic model tracks the state of individual
atomic magnetic moments <cit.>.
The Hamiltonian describing the interactions of
thin films at T=0K with an underlying square
atomic arrangement with lattice parameter a=0.5 nm is
given by<cit.>
ℋ = -∑_i, j∈ N J_ij 𝐦_i·𝐦_j - ∑_i, j∈ N 𝐃_ij·(𝐦_i×𝐦_j) - ∑_i μ𝐇·𝐦_i - ∑_i K(x_i, y_i)(𝐦_i·𝐳̂)^2 .
The first term on the right side is the exchange interaction
between the nearest neighbors contained in the set N,
with an exchange constant of J_ij=J between magnetic moments
i and j.
The second term is the interfacial Dzyaloshinskii–Moriya
interaction, where 𝐃_ij=Dẑ×𝐫̂_ij is the Dzyaloshinskii–Moriya
vector between magnetic moments i and j and 𝐫̂_ij
is the
unit distance vector between sites i and j.
The third term is the Zeeman interaction with an applied external magnetic
field 𝐇.
Here μ=ħγ is the magnitude of the magnetic moment
and γ=1.76×10^11T^-1s^-1 is the electron
gyromagnetic ratio. The last term represents the
perpendicular magnetic anisotropy (PMA) of the sample, where
x_i and y_i are the spatial coordinates of the i-th magnetic moment. We use
K(x_i, y_i)=ϕ_S(x_i, y_i) for a square array of defects
and K(x_i, y_i)=ϕ_T(x_i, y_i) for a triangular array of defects.
In ultrathin films,
long-range dipolar interactions can be neglected <cit.>.
The time evolution of the individual
atomic magnetic moments is given by the LLG
equation <cit.>
∂𝐦_i/∂ t = -γ𝐦_i×𝐇^eff_i + α𝐦_i×∂𝐦_i/∂ t + (p a^3/2e)(𝐣·∇)𝐦_i .
Here
𝐇^eff_i=-1/ħγ∂ℋ/∂𝐦_i
is the effective magnetic field including all interactions from
the Hamiltonian, α is the Gilbert damping, and the last
term is the spin-transfer-torque (STT), where
p is the spin polarization,
e is the electron charge,
and 𝐣=j𝐱̂ is the applied current density.
The STT current assumes that the conduction electron
spins are always parallel to
the magnetic moments 𝐦 <cit.>, and
the driving force from the STT current <cit.>
acts perpendicular to 𝐣.
We fix
α=0.3, J=1 meV, D=0.5J, and μ𝐇=0.6(D^2/J)(-𝐳̂).
The resulting skyrmions
move at an intrinsic skyrmion Hall angle of
θ_sk^int=64^∘ with respect to the driving
force exerted by external currents.
For both
the square and triangular defect arrays
we use A=0.1J and a_0=14 nm.
The sample dimensions are 84 nm × 84 nm for the square
array of defects, and (2/√(3))84 nm × 84 nm for the
triangular array of defects. The difference in
sample size is required to properly apply boundary conditions.
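As a schematic of how the LLG dynamics are advanced in time, the sketch below performs one explicit Euler step using the mathematically equivalent Landau-Lifshitz form, omitting the STT term for brevity and renormalizing |𝐦_i|=1 afterwards; production atomistic codes use more sophisticated integrators.

```python
import numpy as np

def llg_step(m, H_eff, dt, gamma=1.76e11, alpha=0.3):
    """One explicit Euler step of the LLG equation in its equivalent
    Landau-Lifshitz form; the STT term is omitted here for brevity.
    m and H_eff are arrays of shape (N, 3), one row per magnetic moment."""
    mxH = np.cross(m, H_eff)
    dm_dt = -gamma / (1.0 + alpha**2) * (mxH + alpha * np.cross(m, mxH))
    m_new = m + dt * dm_dt
    return m_new / np.linalg.norm(m_new, axis=-1, keepdims=True)  # keep |m| = 1
```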
§.§ Particle Based Simulations
The particle based simulations are governed by the
equation of motion<cit.>
α_d𝐯_i+α_m𝐳̂×𝐯_i=∑_i≠ j𝐅_sk(𝐫_ij)+𝐅_P(𝐫_i)+𝐳̂×𝐅_D ,
where 𝐯_i is the velocity of skyrmion i.
The first term on the left side is the damping term,
α_d, which can be written<cit.>
as α_d=-α𝒟,
where α is the Gilbert damping and 𝒟 is the
dissipative tensor. The second term is the Magnus force
where the Magnus strength α_m can be
written<cit.>
as α_m=-4π Q, where Q is the skyrmion charge. The
ratio α_m/α_d determines the intrinsic skyrmion
Hall angle θ_sk^int=arctan(α_m/α_d).
In order to match
the particle based θ_sk to the atomistic
θ_sk,
we use values of α_m and α_d such
that θ_sk^int=arctan(α_m/α_d)=64^∘.
The first term on the right side of eq. <ref> is the skyrmion-skyrmion
interaction given by
𝐅_sk(𝐫_ij)=-U_sk K_1(r_ij)𝐫̂_ij, where U_sk=1 is the interaction strength and K_1 is the first-order modified Bessel function of the second kind.
The second term is the interaction with the underlying
substrate potential, given by
𝐅_P(𝐫_i)=-∇ϕ_S(𝐫_i)
for the square array of defects and
𝐅_P(𝐫_i)=-∇ϕ_T(𝐫_i)
for the triangular array of defects.
In both defect arrays, the potential strength is A=4 and
the substrate lattice constant is a_0=6.
The last term is the interaction with an external drive,
𝐅_D=F_D𝐱̂, which is in accordance
with the action of an STT current on
magnetic skyrmions <cit.>.
Our simulation box is of size
36 × 36 for the square defect array
and (2/√(3))36×36 for the triangular defect array.
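Because the particle equation of motion above is linear in the velocity, it can be inverted in closed form at each time step. The sketch below does so for a single skyrmion given the net force acting on it, with the pairwise interaction included for completeness; it is a schematic under our own conventions, not the authors' code.

```python
import numpy as np
from scipy.special import k1

def sk_sk_force(r_i, r_j, U_sk=1.0):
    """Pairwise force on skyrmion i from j, F_sk = -U_sk K_1(r) r_hat,
    with the sign convention exactly as printed in the text."""
    d = r_i - r_j
    r = np.linalg.norm(d)
    return -U_sk * k1(r) * d / r

def skyrmion_velocity(F, alpha_d, alpha_m):
    """Closed-form inversion of alpha_d v + alpha_m z x v = F,
    using z x v = (-v_y, v_x)."""
    det = alpha_d**2 + alpha_m**2
    return np.array([(alpha_d * F[0] + alpha_m * F[1]) / det,
                     (alpha_d * F[1] - alpha_m * F[0]) / det])

# A net force along +x is deflected by arctan(alpha_m/alpha_d) in magnitude,
# i.e. the intrinsic skyrmion Hall angle (64 degrees for this ratio);
# the sign of the deflection depends on the sign conventions for alpha_m.
v = skyrmion_velocity(np.array([1.0, 0.0]), alpha_d=1.0,
                      alpha_m=np.tan(np.deg2rad(64)))
print(np.degrees(np.arctan2(v[1], v[0])))
```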
§ SQUARE ARRAY
We first compare the dynamics of skyrmions
interacting with a square array of defects in atomistic and
particle-based simulations.
In fig. <ref>(a), we plot ⟨ v_x⟩ and ⟨ v_y⟩
versus applied current j, and in fig. <ref>(b), we show the
corresponding effective θ_sk=arctan(⟨ v_y⟩/⟨ v_x⟩)
versus j from the atomistic simulation.
We observe four dynamical phases: a pinned phase I with no
skyrmion motion,
phase II_a where a soliton moves along 45^∘,
phase II_b in which a soliton
moves at 30^∘, and phase V in which
all of the skyrmions depin and move without locking to any direction.
The skyrmion motion in phases
II_a and II_b is illustrated in fig. <ref>(a, b).
Figure <ref>(c, d) shows ⟨ v_x⟩, ⟨ v_y⟩,
and θ_sk versus F_D from particle based simulations
of skyrmions
interacting with a square array of defects.
We again observe four dynamical phases, which are a
pinned phase I,
phase II_a with soliton motion along 45^∘,
phase IV_a in which all of the skyrmions
depin and move along
θ_sk=45^∘,
and phase V where all
of the skyrmions have depinned but move without locking to
any direction.
The skyrmion motion in phases II_a and IV_a is illustrated
in fig. <ref>(c, d).
Figure <ref>(a) shows the skyrmion trajectories
during the phase II_a soliton motion along
45^∘ from
the atomistic model.
Here, the extra skyrmion shares an anisotropy minimum with another
skyrmion, and the skyrmion-skyrmion interaction force between the
two lowers the depinning force at which one of the skyrmions can
escape from the minimum and become an interstitial skyrmion.
The interstitial skyrmion moves across the anisotropy landscape
until it reaches another anisotropy
minimum containing a pinned skyrmion.
Through skyrmion-skyrmion interactions,
the interstitial skyrmion pushes the pinned skyrmion
out of the anisotropy minimum and takes up residence in the minimum.
The displaced skyrmion becomes the new
interstitial skyrmion, and this pattern of motion repeats indefinitely.
In fig. <ref>(b) we illustrate the
skyrmion trajectories during the phase II_b soliton
motion along 30^∘ using the atomistic model.
The mechanism
of motion is identical to that observed in fig. <ref>(a),
but now the interstitial skyrmion encounters a pinned
skyrmion along the 30^∘ angle line instead of
the 45^∘ angle line.
Figure <ref>(c) shows the skyrmion trajectories during the
phase II_a soliton motion at 45^∘ obtained from
the particle based model. The
behavior is identical to that found in the atomistic model,
where the skyrmion-skyrmion interactions cause the extra skyrmion
trapped inside a potential minimum to depin at low
F_D.
Since the skyrmion is treated as
a point particle, the trajectory differs in detail from the
atomistic trajectory shown
in fig. <ref>(a); however,
the mechanism of exchange between interstitial and pinned skyrmions
remains the same.
In fig. <ref>(d) we show
the skyrmion trajectories in phase IV_a from the
particle based model
where all of the skyrmions have depinned and
move along 45^∘.
Here the extra skyrmion does not play
a major role, since all
of the skyrmions are
interacting with the potential in an ordered manner and following
a 45^∘ trajectory. The skyrmion lattice is similar
to a moving crystal but contains a localized defect produced by
the extra skyrmion.
We note that previous work on skyrmions
moving over a two-dimensional square periodic
substrate under an increasing drive
showed a directional locking effect in which
the skyrmion motion locked to particular symmetry angles of the
substrate <cit.>.
Continuum models for individual
skyrmions on antidot lattices
also produce similar directional
locking
<cit.>.
The results we describe here are different
in that the motion is not of individual
continuously moving skyrmions
but of kink traveling through a skyrmion lattice,
so the locking is a collective effect instead of a single
particle effect.
Additionally, the applied drive needed
to produce kink motion is substantially lower
that the drive at which an isolated skyrmion would depin from
the substrate potential minimum. The reduction in the
driving
threshold would be particularly useful for applications.
§ TRIANGULAR ARRAY
We next compare the dynamical behavior of skyrmions on
a triangular defect array
in atomistic simulations and particle based simulations.
For the atomistic simulations,
fig. <ref>(a) shows ⟨ v_x⟩ and ⟨ v_y⟩
versus j, while in fig. <ref>(b) we plot the corresponding
θ_sk versus j.
We observe three dynamical phases.
At low drives, we find a pinned phase I.
There is a transitional phase III_a of disordered soliton
motion, where soliton transport occurs but does not follow a
well defined direction.
At higher drives, phase II_b appears in which the soliton
moves along
30^∘. Over the range of j values that we consider, we
never observe a completely depinned state, but we expect that for
larger values of j,
the pinned skyrmions would eventually escape from
the anisotropy minima and move throughout the sample.
The skyrmion trajectories
in phases III_a and II_b are illustrated
in fig. <ref>(a, b).
In fig. <ref>(c), we plot ⟨ V_x⟩ and
⟨ V_y⟩ versus F_D for a particle based simulation
of skyrmions driven over a triangular defect array, and
in fig. <ref>(d) we show the corresponding
θ_sk versus F_D.
Five dynamical phases appear.
At low drives, we find a pinned phase I.
Phase II_a is soliton motion along
45^∘, and it is followed by a transitional phase III_b in which
a small number of skyrmions are present simultaneously and move
chaotically across the system.
In phase IV_b,
all of the skyrmions are moving along 30^∘,
and phase V consists of
all of the skyrmions moving without locking to any direction.
Illustrations of the skyrmion trajectories
for phases II_a and IV_b appear in fig. <ref>(c, d).
Unlike what we found for the square defect array,
here the atomistic model and the particle based model
do not exhibit good agreement.
A pinned phase is present in both cases, but as we increase
the drive, in the atomistic simulations
we observe a wide phase II_b 30^∘ soliton
motion over the
range
4.8×10^10A m^-2≤ j≤ 12×10^10A m^-2.
In comparison, for the particle based simulations
there is a small window of phase II_a
45^∘ soliton motion over the range
0.66≤ F_D≤ 0.88.
The phase II_b 30^∘ angle soliton motion is
absent in the particle based simulations.
This large difference in behavior is likely due to the fact that
in the atomistic simulations, the skyrmion has a
finite size and is able to deform, as is visible by
comparing the phase II_a flow in fig. <ref>(a)
to that in fig. <ref>(c), or
comparing the phase II_b flow in fig. <ref>(b)
to the phase II_a flow in fig. <ref>(c).
For higher values of F_D, the behavior of the atomistic and
particle based simulations diverge significantly.
The particle based simulation
produces phase IV_b flow in which
all of the skyrmions move along 30^∘, followed by
phase V flow in which all of the skyrmions move without locking
to a particular direction,
while the atomistic simulations remain trapped in phase II_b
with soliton motion along 30^∘.
Figure <ref>(a) shows the trajectories of skyrmions
interacting
with a triangular defect array from atomistic
simulations performed at j=4×10^10A m^-2, corresponding to
phase III_a in fig. <ref>(a, b). The soliton mechanism of motion
described previously still occurs here, but the soliton does not
lock to any angle and gradually works its way all around the sample.
When we increase the external current to j=8×10^10A m^-2,
corresponding to phase II_b in fig. <ref>(a, b),
the skyrmions move as illustrated in fig. <ref>(b).
Again, we observe a soliton motion, but unlike what was shown
in fig. <ref>(a),
the motion occurs along a well defined angle of 30^∘.
This motion
appears
over the range 4.8×10^10A m^-2≤ j≤ 12×10^10A m^-2.
In fig. <ref>(c) we illustrate the skyrmion trajectories for motion
over a triangular defect array from
particle based simulations
with F_D=0.8, corresponding to the phase II_a flow
in fig. <ref>(c, d).
For this value of
F_D, a soliton moves across the sample at
45^∘.
We observe the 45^∘ soliton motion only
in the particle based simulations and do not find it in the
atomic simulations.
When we increase F_D to F_D=1.5,
corresponding to phase IV_b in fig. <ref>(c, d),
all of the skyrmions flow along 30^∘,
as shown in fig. <ref>(d).
Similar to what we found in
fig. <ref>(d), the skyrmion lattice travels as a moving
crystal that contains
a localized defect produced by
the extra skyrmion.
The difference in the angle of soliton motion on a triangular
defect array between the atomistic and particle based models
can be explained
by the finite skyrmion size in
the atomistic simulations.
The barrier between anisotropy minima is enhanced when the skyrmion
has a finite size, whereas in the particle based simulations,
the pointlike nature of the skyrmions gives a reduced barrier
between anisotropy minima.
The larger barrier potential created by the finite
skyrmion size can be observed
by comparing the
trajectory of an interstitial skyrmion as it
passes between two anisotropy minima;
the trajectories in the atomistic simulation
exhibit fewer meanders compared to the trajectories in the particle
based simulations.
The increase in the barrier potential
forces the atomistic interstitial
skyrmion to move following the 30^∘ angle imposed by the
triangular lattice,
whereas the interstitial skyrmion in the
particle based model can travel along a wider range of paths
between substrate minima.
To compensate for the overly unconstrained mobility of skyrmions in the particle-based model, the barrier between potential minima can be increased either by increasing the skyrmion-skyrmion interaction strength U_sk or by reducing the lattice constant of the substrate potential.
In principle, it may be possible to identify simulation parameters
for the particle model of the triangular substrate that would match
the behavior of the atomistic simulations and compensate for the
rigidity and vanishing size of the particle-based skyrmion model.
§ SUMMARY
We compared the results of atomistic simulations and particle based
simulations of soliton motion for skyrmion assemblies just past
commensuration on both square and triangular substrates.
For the square array, both models agree well at low and high
drives, but differ for intermediate drives.
The atomistic model produces
a pinned phase, soliton motion along 45^∘,
soliton motion along 30^∘, and unlocked flow of all skyrmions.
The soliton motion proceeds via the replacement
by an interstitial skyrmion of
a pinned skyrmion in an anisotropy minimum, with the depinned skyrmion
becoming the new interstitial skyrmion.
The particle based model produces
a pinned phase, soliton motion along 45^∘,
a phase where all of the skyrmions move
along 45^∘, and unlocked flow of all skyrmions.
The particle based model
does not exhibit the 30^∘ soliton motion
found in the atomistic
simulations, and the trajectories in the particle based model meander
more than the atomistic model trajectories due to the rigidity and
vanishing size of the particle based skyrmions.
For both models, the different dynamic phases are visible
as signatures
in the velocity-force curves, skyrmion Hall angle, and skyrmion trajectories.
The atomistic and particle based models do not agree well on the
motion of skyrmions over
a triangular defect array.
The atomistic model produces
a pinned phase, a transitional phase in which a soliton moves with
no well defined angle,
and a regime of soliton motion at
30^∘ that
spans a wide range of external drive values.
The particle based model exhibits
a pinned phase, soliton motion along 45^∘,
a transitional phase in which all skyrmions participate in disordered
soliton motion,
a phase in which all of the skyrmions move
along 30^∘, and unlocked motion of all of the skyrmions.
Here only the pinned phase is common
between the two models.
Our results provide a better understanding of the
regimes in which the particle model is a good or a poor approximation
for the skyrmion motion.
We argue that it can be possible to mitigate the approximations
made in the particle based model by adjusting the strength of the
interactions between the skyrmions or modifying the lattice constant
of the substrate.
Our results will be beneficial for determining how to
control skyrmion soliton
motion using a combination of anisotropy trapping and
external driving.
We gratefully acknowledge the support of the U.S. Department of
Energy through the LANL/LDRD program for this work.
This work was supported by the US Department of Energy through the Los Alamos National Laboratory. Los
Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security
Administration of the U. S. Department of Energy (Contract No. 892333218NCA000001).
J.C.B.S acknowledges funding from Fundação de Amparo à Pesquisa do Estado de São Paulo - FAPESP (Grant 2023/17545-1).
We would like to thank Dr. Felipe F. Fanchini for providing the computational resources used in this work.
These resources were funded by the Fundação de Amparo à Pesquisa do Estado de São Paulo - FAPESP (Grant: 2021/04655-8).
|
http://arxiv.org/abs/2409.03245v1 | 20240905044736 | UAV (Unmanned Aerial Vehicles): Diverse Applications of UAV Datasets in Segmentation, Classification, Detection, and Tracking | [
"Md. Mahfuzur Rahman",
"Sunzida Siddique",
"Marufa Kamal",
"Rakib Hossain Rifat",
"Kishor Datta Gupta"
] | cs.CV | [
"cs.CV"
] |
§ ABSTRACT
Unmanned Aerial Vehicles (UAVs) have greatly revolutionized the process of gathering and analyzing data in diverse research domains, providing unmatched adaptability and effectiveness. This paper presents a thorough examination of Unmanned Aerial Vehicle (UAV) datasets, emphasizing their wide range of applications and progress. UAV datasets consist of various types of data, such as satellite imagery, images captured by drones, and videos. These datasets can be categorized as either unimodal or multimodal, offering a wide range of detailed and comprehensive information. These datasets play a crucial role in disaster damage assessment, aerial surveillance, object recognition, and tracking. They facilitate the development of sophisticated models for tasks like semantic segmentation, pose estimation, vehicle re-identification, and gesture recognition. By leveraging UAV datasets, researchers can significantly enhance the capabilities of computer vision models, thereby advancing technology and improving our understanding of complex, dynamic environments from an aerial perspective. This review aims to encapsulate the multifaceted utility of UAV datasets, emphasizing their pivotal role in driving innovation and practical applications in multiple domains.
§ INTRODUCTION
Unmanned Aerial Vehicles (UAVs)<cit.>, commonly referred to as drones, have revolutionized the way we collect and analyze data from above, offering unparalleled versatility and efficiency across various research fields. This review paper aims to explore the "Multiple Uses of UAV Datasets" by examining the diverse applications and advancements facilitated by these datasets. UAV datasets encompass a wide array of data types, including satellite imagery, drone-captured images, and videos, as well as images from other aerial vehicles like helicopters. These datasets can be unimodal, focusing on a single type of data, or multimodal, integrating multiple data types to provide deeper, more comprehensive insights.
UAV datasets have proven valuable for assessing disaster damage because they enable the classification of damage from natural disasters using sophisticated semantic segmentation and annotation techniques. By training computer vision models with these datasets, researchers can automate the aerial scene classification of disaster events, significantly enhancing response and recovery efforts. The ability to extract information and detect objects from UAV-captured data is pivotal for tasks such as action recognition, in which human behavior is analyzed from aerial imagery, including recognizing aerial gestures and classifying disaster events.
A critical application of UAV datasets lies in 'Aerial Surveillance'<cit.>, which supports advanced research at the intersection of computer vision, robotics, and surveillance. These datasets are used for event recognition in aerial videos, aiding in the monitoring of urban environments and traffic systems. The use of pre-trained models and transfer learning techniques further amplifies the utility of UAV datasets, allowing for the rapid deployment of sophisticated models for event recognition and tracking.
In the context of urban surveillance, UAV datasets enhance object recognition capabilities by providing comprehensive views from both top-down and side perspectives. This facilitates tasks such as categorization, verification, object detection, and tracking of individuals and vehicles. Moreover, UAV datasets contribute significantly to understanding and managing forest ecosystems by addressing the challenge of segmenting individual trees, which is crucial for sustainable forest management.
The versatility of UAV datasets extends to various domains, including developing speech recognition systems for UAV control using video capture and object tracking in low-light conditions, which is essential for night-time surveillance operations. Innovative UAV designs, such as bionic drones with flapping wings, have also led to specialized video datasets used for single object tracking (SOT)<cit.><cit.>, demonstrating the broad scope and potential of UAV datasets in enhancing real-time object tracking under varying lighting conditions.
Overall, UAV datasets represent a cornerstone for cutting-edge research and practical applications across multiple disciplines. This review will delve into the specific uses and benefits of these datasets, highlighting their role in advancing technology and improving our understanding of complex, dynamic environments from an aerial perspective.
The subsequent sections provide a comprehensive exposition of the contributions made by our study, which can be stated as follows:
* Our study is driven by the increasing importance of UAV datasets in several research domains such as object detection, traffic monitoring, action identification, surveillance in low-light conditions, single object tracking, and forest segmentation utilizing point-cloud/LiDAR data. Through an in-depth analysis of current datasets, their uses, and prospects, this paper intends to provide valuable insights that will assist researchers in harnessing these resources for creative solutions. Furthermore, readers will gain an understanding of existing constraints and prospective opportunities, enhancing their research endeavors.
* We conducted an extensive analysis of a dataset consisting of 15 Unmanned Aerial Vehicles (UAVs), showcasing its diverse applications in research.
* We emphasized the applications and advancements of several novel methods utilizing these datasets based on unmanned aerial vehicles (UAVs).
* Our study also delved into the potential for future research and the feasibility of utilizing these UAV datasets, engaging in in-depth discussions on these topics.
§ LITERATURE REVIEW
An unmanned aircraft, or UAV, functions without a human pilot on board and can be operated remotely by a human controller or autonomously by onboard computers. Drones are a common term used to describe UAVs. Drones are employed for various purposes, including surveillance, aerial photography, agriculture, environmental monitoring, and military operations. However, within the context of UAV datasets, the term encompasses more than just drones: UAV datasets include not only drone image and video datasets but also satellite imagery. Tables <ref> and <ref> show the summary of the literature review performed.
These papers were reviewed to determine the definition and range of applications of UAVs in computer vision.
§.§ RescueNet
Maryam Rahnemoonfar, Tashnim Chowdhury, and Robin Murphy presented the RescueNet<cit.> dataset in their paper, which focuses on post-disaster scene understanding using UAV imagery. The dataset contains high-resolution images with detailed pixel-level annotations for ten classes of objects, including buildings, roads, pools, and trees, which were collected by sUAVs following Hurricane Michael. The authors employed state-of-the-art segmentation models like Attention UNet<cit.>, PSPNet<cit.>, and DeepLabv3<cit.>, achieving superior performance with attention-based and transformer-based methods. The findings demonstrated RescueNet's effectiveness in improving damage assessment and response strategies, with transfer learning outperforming other datasets like FloodNet<cit.>. The dataset was observed to have limited generalization to other domains and to require a time-consuming annotation process, despite its detailed annotations.
§.§ UAV-Human
Tianjiao Li et al. developed the UAV-Human<cit.> dataset, a comprehensive benchmark for improving human behavior understanding with UAVs. The dataset contains 67,428 multi-modal video sequences with 119 subjects for action recognition, 22,476 frames for pose estimation, 41,290 frames for person re-identification with 1,144 identities, and 22,263 frames for attribute recognition, all captured over three months in various urban and rural locations under varying conditions. The data encompasses RGB videos, depth maps, infrared sequences, and skeleton data. The authors used methods such as HigherHRNet<cit.>, AlphaPose<cit.>, and the Guided Transformer I3D framework to recognize actions while addressing fisheye video distortions<cit.><cit.> and leveraging multiple data modalities. The results demonstrated the dataset's effectiveness in improving action recognition, pose estimation, and re-identification tasks, with models showing significant performance improvements. The UAV-Human dataset stands out as a reliable benchmark, encouraging the creation of more effective UAV-based human behavior analysis algorithms.
§.§ AIDER
Christos Kyrkou and Theocharis Theocharides introduced the AIDER<cit.> dataset, which is intended for disaster event classification using UAV aerial images. The dataset contains 2,565 images of Fire/Smoke, Flood, Collapsed Building/Rubble, Traffic Accidents, and Normal cases, which were manually collected from various sources, mainly from UAVs. To increase variability and combat overfitting, images were randomly augmented with rotations, translations, and color shifting. The paper presents ERNet, a lightweight CNN designed for efficient classification on embedded UAV platforms. ERNet, which borrows components from architectures such as VGG16<cit.>, ResNet<cit.>, and MobileNet<cit.>, incorporates early downsampling to reduce computational cost. When tested on both embedded platforms attached to UAVs and desktop CPUs, ERNet achieved high accuracy (approximately 90%) while running three times faster on the embedded platforms, making it well suited for real-time applications with limited memory. The study emphasizes the benefits of combining ERNet with other detection algorithms to improve situational awareness in emergency response.
§.§ AU-AIR
In their paper Ilker Bozcan and Erdal Kayacan present the AU-AIR<cit.> dataset, a comprehensive UAV dataset designed for traffic surveillance. The dataset comprises 32,823 labeled video frames with annotations for eight traffic-related object categories, along with multi-modal data including GPS coordinates, altitude, IMU data<cit.>, and velocity. To establish a baseline for real-time performance in UAV applications, the authors train and evaluate two mobile object detectors on this dataset: YOLOv3-Tiny<cit.> and MobileNetv2-SSDLite<cit.>. The findings highlight the difficulties of object detection in aerial images, emphasizing the importance of datasets tailored to mobile detectors. The study highlights the dataset's potential for furthering research in computer vision, robotics, and aerial surveillance, while also acknowledging limitations and suggesting future improvements for broader applicability.
§.§ ERA
Lichao Mou et al. introduced the ERA<cit.> dataset, a comprehensive collection of 2,864 labeled video snippets for 24 event classes and 1 normal class, designed for event recognition in UAV videos. The videos, sourced from YouTube, are 5 seconds long, 640×640 pixels, and run at 24 fps, ensuring a diverse dataset that includes both high-quality and extreme condition footage. The paper employs various deep learning models, including VGG-16, ResNet-50, DenseNet-201<cit.>, and video classification models like I3D-Inception-v1, to benchmark event recognition. DenseNet-201 achieved the highest performance with an overall accuracy of 62.3% in single-frame classification. The findings highlight the difficulties of recognizing events in a variety of environments and scales, noting that while models can identify specific events such as traffic congestion and smoke, they struggle with conditions such as night and snow scenes, indicating the need for improved attribute recognition and temporal cue exploitation in future research.
§.§ UAVid
Ye Lyu et al. introduced the UAVid<cit.> dataset in their paper, which addresses the need for semantic segmentation of urban scenes from the perspective of UAVs. The UAVid dataset consists of 30 video sequences with 4K high-resolution images, capturing top and side views for improved object recognition and including 8 labeled classes. The paper highlights the challenges of large-scale variation, moving-object recognition, and temporal consistency. The effectiveness of deep learning techniques, including the Multi-Scale-Dilation net, a novel method proposed by the authors, was evaluated, yielding a mean Intersection over Union<cit.> (IoU) score of approximately 50%. Further improvements were observed by employing spatial-temporal regularization methods such as FSO<cit.> and 3D CRF<cit.>. The dataset's applicability extends to traffic monitoring, population density analysis, and urban greenery monitoring, showcasing its potential for diverse urban surveillance applications. The paper also discusses the dataset's class imbalance and suggests future expansions and optimizations to enhance its utility for semantic segmentation and other UAV-based tasks.
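For readers unfamiliar with the metric, the mean IoU reported for UAVid can be computed from predicted and ground-truth label maps as in the following sketch.

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean Intersection over Union for semantic segmentation; pred and gt
    are integer label maps of identical shape."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```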
§.§ VRAI
Peng Wang et al. introduced the VRAI<cit.> dataset, the largest vehicle re-identification (ReID) dataset with over 137,613 images of 13,022 vehicles. This UAV-based dataset includes annotations for unique IDs, color, vehicle type, attributes, and distinguishing features, capturing a wide range of view angles and poses from UAVs flying between 15m and 80m. The study devised an innovative vehicle ReID algorithm that utilizes weight matrices, weighted pooling, and comprehensive annotations to identify distinctive components. This algorithm surpasses both the baseline and the most advanced techniques currently available. The paper utilizes a comprehensive strategy to perform vehicle ReID using aerial images, showcasing its effectiveness through a range of experiments. Ablation study results demonstrate that the novel Multi-task + DP model, which integrates attribute classification and additional triplet loss on weighted features, exhibits superior performance compared to less complex models. The proposed method outperforms ground-based methods such as MGN<cit.>, RNN-HA<cit.>, and RAM<cit.>, because it can easily handle different view angles in UAV images. Weighted feature aggregation improves performance, as evidenced by the enhanced mean average precision (mAP) and cumulative match characteristic (CMC) metrics. Human performance evaluation highlights the algorithm's strength in fine-grained recognition, though humans still excel in detailed tasks. The study suggests further research to improve flexibility, scalability, and real-world application of the algorithm.
§.§ FOR-Instance
For semantic and instance segmentation of individual trees, Stefano Puliti et al. presented the FOR-Instance<cit.> dataset in their paper "FOR-Instance: a UAV laser scanning benchmark dataset for semantic and instance segmentation of individual trees." This dataset addresses the shortage of publicly accessible, ML-ready annotated forest data and of standardized benchmarking infrastructure for point cloud segmentation<cit.> tasks. The primary goal is to use data from unmanned aerial vehicle (UAV) laser scanning to precisely identify and separate individual trees. The dataset includes extensive annotations used for training and evaluation, and it is composed of five carefully chosen collections from different types of forests worldwide. For deep learning, the dataset is split into separate training and validation sets. In image segmentation research, rasterized canopy height models are utilized, along with either unprocessed point clouds or two-dimensional projections. The FOR-Instance dataset was found to be useful for studying and testing advanced segmentation methods, highlighting the significance of understanding forest ecosystems and formulating sustainable management techniques. The standardization the dataset brings to 3D forest scene segmentation research helps address current methodological limitations, such as overfitting and lack of comparability.
§.§ VERI-Wild
Yihang Lou et al. presented the VERI-Wild<cit.> dataset, the largest vehicle ReID dataset to date, in their paper. Over 400,000 photos of 40,000 vehicle IDs are included in the dataset, which was collected over the course of a month in an urban district using 174 CCTV cameras. The dataset poses a formidable challenge for ReID algorithms due to its inclusion of diverse conditions such as varying backgrounds, lighting, obstructions, perspectives, weather, and vehicle types. The authors introduced FDA-Net, a novel technique for vehicle ReID, to enhance the model's ability to distinguish between different vehicles. FDA-Net combines a feature distance adversary network with a hard negative generator and embedding discriminator. After being tested on the VERI-Wild dataset and other established datasets, FDA-Net surpassed various standard methods, achieving higher accuracies in Rank-1 and Rank-5. This demonstrates the effectiveness of FDA-Net in vehicle ReID tasks. The method's ability to generate hard negatives significantly improved model performance, highlighting its potential for advancing vehicle ReID research in real-world scenarios.
§.§ UAV-Assistant
G. Albanis and N. Zioulis et al. introduced the UAV-Assistant<cit.> (UAVA) dataset in their paper. The dataset was created using a data synthesis pipeline that generates realistic multimodal data, including exocentric and egocentric views from UAVs. The dataset can be used to train a model that estimates the UAV's 3D pose by combining a direct regression objective with a novel smooth silhouette loss, and it employs differentiable rendering techniques to help the model learn from both real and synthetic data. The study highlights the critical role of tuning the kernel size of the smoothing filter to optimize model performance. The proposed smooth silhouette loss surpasses conventional silhouette loss functions by reducing discrepancies and improving the accuracy of 3D pose estimation. This approach specifically tackles the lack of available data for estimating the three-dimensional position and orientation of UAVs in non-hostile environments, in contrast to existing datasets that primarily target remote sensing or drones with malicious intent. The paper underscores the need for further research on rendering techniques, parameter optimization, and real-world validation to enhance the model's generalizability and robustness.
§.§ KITE
The KITE<cit.> dataset, created to improve speech recognition systems for UAV control, was presented by Dan Oneata and Horia Cucu in their paper. The KITE eval dataset is a comprehensive collection comprising 2,880 spoken commands with corresponding audio and images; it is specifically designed for UAV operations and covers a range of commands related to movement, camera usage, and specific scenarios. The authors employed time delay neural networks<cit.> (as implemented in Kaldi<cit.>) and recurrent neural networks for language modeling, initializing the models with out-of-domain datasets and subsequently fine-tuning them for UAV tasks. The study emphasizes the efficacy of customizing language models for UAV-specific instructions, showcasing substantial improvements in speech recognition precision through domain adaptation. Future directions include grounding spoken commands in images for enhanced context understanding and improving the acoustic model's robustness to outdoor noise.
§.§ UAV-Gesture
A. Perera et al. introduced the UAV-Gesture<cit.> dataset, which addresses the lack of research on gesture-based UAV control in outdoor settings. This dataset aims to fill the existing research gap, as most studies in this field are focused on indoor environments. The dataset consists of 119 high-definition video clips, totaling 37,151 frames, captured in an outdoor setting using a 3DR Solo UAV and a GoPro Hero 4 Black camera. The dataset comprises annotations of 13 body joints and gesture classes for all frames, encompassing gestures appropriate for UAV navigation and command. The dataset was captured with variations in phase, orientation, and camera movement to augment realism. The authors employed an extended version of the VATIC<cit.> tool for annotation and utilized a Pose-based Convolutional Neural Network<cit.> (P-CNN) for gesture recognition. This approach resulted in a baseline accuracy of 91.9%. This dataset facilitates extensive research in gesture recognition, action recognition, human pose recognition, and UAV control, showcasing its efficacy and potential for real-world applications.
§.§ UAVDark135
In their research, Bowen Li et al. presented the UAVDark135<cit.> dataset and the ADTrack algorithm. Their work aimed to tackle the challenge of achieving reliable object tracking from unmanned aerial vehicles (UAVs) under different lighting conditions. UAVDark135 is the inaugural benchmark specifically developed for tracking objects during nighttime. It consists of more than 125,000 manually annotated frames, addressing a deficiency in current benchmarks. The paper details the ADTrack algorithm, a discriminative correlation filter-based tracker with illumination-adaptive and anti-dark capabilities, utilizing image illuminance information and an image enhancer for real-time, all-day tracking. ADTrack performs well in both bright and dark environments, as evidenced by extensive testing on benchmarks such as UAV123@10fps<cit.>, DTB70<cit.>, and UAVDark135, achieving over 30 FPS on a single CPU. While effective, the paper recommends broader comparisons with other state-of-the-art trackers and future research on image enhancement, multi-sensor fusion, and UAV hardware optimization.
§.§ DarkTrack2021
Junjie Ye et al. presented the DarkTrack2021<cit.> dataset to tackle the difficulty of tracking unmanned aerial vehicles (UAVs) in low-light situations. The dataset consists of 110 annotated sequences containing more than 100,000 frames, providing a varied evaluation platform for nighttime UAV tracking. The researchers also created an efficient low-light enhancer, the Spatial-Channel Transformer (SCT), which combines a spatial-channel Transformer with a robust non-linear curve projection model to enhance low-light images. SCT effectively combines global and local information, improving image quality by reducing noise and boosting illumination in nighttime scenes. Evaluations on the public UAVDark135 and the new DarkTrack2021 benchmarks demonstrated that SCT outperforms existing methods for nighttime UAV tracking, and real-world tests confirmed the practicality of the approach. The DarkTrack2021 dataset and SCT code are openly available on GitHub for further research and experimentation.
§.§ BioDrone
Xin Zhao et al. presented the BioDrone<cit.> dataset. BioDrone is a pioneering visual benchmark for Single Object Tracking<cit.> (SOT) that utilizes bionic drones. It specifically tackles the difficulties associated with tracking small targets that undergo significant changes in appearance, which are common in flapping-wing UAVs. The dataset consists of 600 videos containing 304,209 frames that have been manually labeled. Additionally, there are automatically generated labels for ten challenge attributes at the frame level. The study presents a new baseline method, UAV-KT, optimized from KeepTrack<cit.>, and evaluates 20 SOT models, ranging from traditional approaches like KCF<cit.> to sophisticated models combining CNNs and SNNs. The results of comprehensive experiments demonstrate that UAV-KT outperforms other methods in handling challenging vision tasks with resilience. The paper emphasizes BioDrone's potential for advancing SOT algorithms and encourages future research to address remaining challenges, such as camera shake and dynamic visual environments.
§ METHODOLOGY
The term UAV (Unmanned Aerial Vehicle) covers a diverse range of applications, which calls for a systematic investigation of how UAV datasets are used. We aimed to understand how these datasets can be employed in different research and project scenarios. To this end, we conducted an extensive search for UAV datasets, initially focusing on the keyword "satellite or drone image datasets". This initial search led us to the term "UAV datasets". After recognizing the potential of UAV datasets, we investigated the field further, identifying their diverse applications in object detection, tracking, and event detection, as well as semantic segmentation and single object tracking.
To gather relevant UAV datasets, we conducted systematic searches on the Internet, employing a range of keywords and search terms related to UAVs and their applications. We specifically looked for datasets that demonstrated the adaptability of UAVs, choosing those that researchers had proposed and used in other research contexts. This approach ensured that the datasets we included were novel and provided diverse examples of UAV applications.
We identified and collected 15 UAV image datasets for inclusion in our study. Our selection criteria focused on datasets that showcased a variety of use cases, including traffic systems (car identification, person identification, and surveillance systems), damage classification from disasters, and other object detection and segmentation tasks. Each dataset was thoroughly reviewed and analyzed to understand its characteristics, intended use, and underlying methodologies.
Our analysis involved a detailed examination of the datasets, resulting in the comprehensive report included in this paper. This report outlines the behavior, agenda, and applications of each dataset, providing insights into their respective fields of use. By presenting these findings, we aim to highlight the versatility and potential of UAV datasets in advancing various research domains. Figure <ref> depicts the sequential process of our work.
§.§ Search Terms
Most of the datasets surveyed in this paper were obtained from the website <https://paperswithcode.com/>. We discovered this website while searching for UAV datasets with a variety of search terms.
Example search strings:
* ("unmanned aerial vehicle" OR UAV OR drone OR Satellite) AND ("dataset" OR "image dataset" OR "dataset papers")
* (UAV OR "unmanned aerial vehicle") AND ("disaster dataset" OR "traffic surveillance")
These search strings and keywords facilitated a broad yet focused search, enabling us to gather a diverse set of UAV datasets that demonstrate their wide-ranging applications and research potential.
§ DATA DIVERSITY OF UAV
The advent of Unmanned Aerial Vehicles (UAVs) has opened new frontiers in data collection and analysis, transforming numerous fields with their versatile applications. The datasets generated by UAVs are diverse, encompassing various data types and serving multiple purposes. This section provides an overview of the various uses of UAV datasets, examines their diversity, and explores the methods applied to utilize these datasets in different studies.
§.§ Overview of UAV Dataset Uses
UAV datasets are pivotal in numerous domains, including disaster management, surveillance, agriculture, environmental monitoring, and human behavior analysis. The unique aerial perspectives provided by UAVs enable the collection of high-resolution imagery and videos, which can be used for mapping, monitoring, and analyzing different environments and activities.
§.§.§ Disaster Management
UAV datasets are often used to assess the extent of damage caused by hurricanes, earthquakes, and floods. High-resolution images and videos captured by UAVs allow for precise mapping of affected areas and the identification of damaged infrastructure.
§.§.§ Surveillance
In urban and rural settings, UAV datasets support advanced surveillance activities. They facilitate the monitoring of traffic, detection of illegal activities, and overall urban planning by providing real-time, high-resolution aerial views.
§.§.§ Agriculture
UAV datasets help in monitoring crop health, assessing irrigation needs, and detecting pest infestations. Multispectral and hyperspectral imaging from UAVs enable detailed analysis of vegetation indices and soil properties.
§.§.§ Environmental Monitoring
UAVs are used to monitor forest health, wildlife, and water bodies. They provide data for studying ecological changes, tracking animal movements, and assessing the impacts of climate change.
§.§.§ Human Behavior Analysis
UAV datasets contribute to analyzing human activities and behaviors in public spaces. They are used for action recognition, pose estimation, and crowd monitoring, offering valuable insights for security and urban planning.
§.§ Variability of UAV databases
The diversity of UAV datasets lies in their varied data types, capture conditions, and application contexts. This diversity ensures that UAVs can address a wide range of tasks, each requiring specific data characteristics.
§.§.§ Data Types
UAV datasets include RGB images, infrared images, depth maps, and multispectral and hyperspectral images<cit.>. To capture complex scenarios for human behavior analysis, the UAV-Human dataset, for example, combines RGB videos, depth maps, infrared sequences, and skeleton data.
§.§.§ Capture Conditions
UAV datasets are gathered under a variety of conditions, including different times of day, weather, lighting (low light or varied illumination), and flight altitudes. This variety ensures that models trained on these datasets are robust and perform well across a range of settings.
§.§.§ Application Contexts
UAV datasets are tailored for specific applications. For example, visual data, object annotations, and flight data are combined to address the specific problems that arise when monitoring traffic from the air. Similarly, high-resolution images captured after a disaster enable accurate assessment of the damage.
§.§ Methods Applied to the UAV Dataset
Various methods are applied to UAV datasets to extract valuable insights and solve specific problems. These methods include machine learning, computer vision techniques, and advanced data processing algorithms. Tables <ref> and <ref> give an overview of the methods used and an analysis of their results.
§.§.§ Machine Learning and Deep Learning
Deep learning models, such as convolutional neural networks (CNNs)<cit.>, are widely used for tasks like object detection, segmentation, and classification; for example (see the sketch after this list):
* The RescueNet dataset employs models like PSPNet, DeepLabv3+, and Attention UNet for semantic segmentation to assess disaster damage.
* The UAVid Dataset presents deep learning baseline methods like Multi-Scale-Dilation net. The ERA dataset establishes a benchmark for event recognition in aerial videos by utilizing pre-existing deep learning models like the VGG models (VGG-16, VGG19)<cit.>, Inception-v3<cit.>, the ResNet models (ResNet-50, ResNet-101, and ResNet-152)<cit.>, MobileNet, the DenseNet models (DenseNet-121, DenseNet-169, DenseNet-201)<cit.>, and NASNet-L<cit.>.
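As a concrete illustration of this workflow, the snippet below runs a pre-trained DeepLabv3 model from torchvision on a single aerial frame. The checkpoint, file name, and label set are placeholders; the papers above train their own architectures on UAV-specific classes rather than reusing this off-the-shelf model.

import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

img = Image.open("uav_frame.jpg").convert("RGB")  # placeholder aerial image

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

model = deeplabv3_resnet50(weights="DEFAULT").eval()
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))["out"]  # (1, C, H, W)
labels = logits.argmax(dim=1).squeeze(0)  # per-pixel class indices
print(labels.shape, labels.unique())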
Within deep learning, ensemble and combined-loss methods also play a crucial role: they not only help assess model performance but also boost accuracy while preserving the model's stability. For example (a sketch of a combined objective follows the list):
* In the VRAI dataset, the authors utilized techniques such as Triplet Loss, Contrastive Loss, ID Classification Loss, and Triplet + ID Loss, and introduced multi-task and multi-task + discriminative-parts variants. These combined methods outperformed the state-of-the-art methods they compared against.
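To ground these loss terms, here is a minimal PyTorch sketch of a combined Triplet + ID objective of the kind used for re-identification. The backbone, embedding size, and margin are illustrative assumptions, not the VRAI authors' exact configuration.

import torch
import torch.nn as nn
from torchvision.models import resnet50

class ReIDNet(nn.Module):
    """Shared embedding backbone with an extra ID-classification branch."""
    def __init__(self, num_ids, emb_dim=512):
        super().__init__()
        self.backbone = resnet50(weights="DEFAULT")
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, emb_dim)
        self.id_head = nn.Linear(emb_dim, num_ids)

    def forward(self, x):
        emb = self.backbone(x)
        return emb, self.id_head(emb)

triplet = nn.TripletMarginLoss(margin=0.3)  # margin value is illustrative
ce = nn.CrossEntropyLoss()

def reid_loss(model, anchor, positive, negative, anchor_ids):
    # "Triplet + ID": a metric-learning term plus a classification term.
    ea, logits = model(anchor)
    ep, _ = model(positive)
    en, _ = model(negative)
    return triplet(ea, ep, en) + ce(logits, anchor_ids)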
§.§.§ Transfer Learning
Transfer learning is used to leverage pre-trained models on UAV datasets, allowing for quicker and more efficient training. For example (a fine-tuning sketch follows the list):
* Pre-trained YOLOv3-Tiny and MobileNetv2-SSDLite models are used for real-time object detection on the AU-AIR<cit.> dataset.
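The usual pattern is sketched below: build a torchvision SSDLite detector with a COCO-style API and an ImageNet-pretrained backbone, give it a fresh head sized for a small set of UAV classes, and fine-tune. The class set and hyperparameters are placeholders, and this uses torchvision's builder rather than the exact models cited above.

import torch
from torchvision.models.detection import ssdlite320_mobilenet_v3_large

# Pretrained backbone, new head for (background + 3) placeholder UAV
# classes, e.g. car, bus, pedestrian.
model = ssdlite320_mobilenet_v3_large(
    weights=None, weights_backbone="DEFAULT", num_classes=4
)
optimizer = torch.optim.SGD(model.parameters(), lr=5e-3, momentum=0.9)
model.train()

# images: list of (3, H, W) tensors; targets: list of dicts with "boxes"
# (N, 4, xyxy pixels) and "labels" (N,), following torchvision's convention.
images = [torch.rand(3, 320, 320)]
targets = [{"boxes": torch.tensor([[30.0, 40.0, 120.0, 160.0]]),
            "labels": torch.tensor([1])}]

loss_dict = model(images, targets)  # dict of loss components in train mode
loss = sum(loss_dict.values())
loss.backward()
optimizer.step()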
§.§.§ Event Recognition
Unmanned Aerial Vehicles (UAVs) have proven highly effective for event recognition, and this has become a popular application domain. For example:
* The ERA dataset has been subjected to various methods for event recognition in aerial videos, including DenseNet-201 and Inception-v3. These methods have demonstrated notable accuracy in identifying dynamic events from UAV footage.
* The BioDrone dataset assesses single object tracking (SOT) models and investigates new optimization approaches for the state-of-the-art KeepTrack method, targeting the robust-vision challenges presented by flapping-wing unmanned aerial vehicles<cit.>.
§.§.§ Multimodal Analysis
Combining data from multiple sensors enhances the analysis capabilities of UAV datasets. The multimodal approach of the UAV-Human dataset, which combines RGB, infrared, and depth data, makes a thorough analysis of human behavior possible.
§.§.§ Novel Algorithms
New algorithms are created to tackle particular problems in the analysis of data from unmanned aerial vehicles. For example:
* The UAV-Gesture<cit.> dataset employs advanced gesture recognition algorithms to enable UAV navigation and control based on human gestures.
* UAVDark135<cit.> makes use of ADTrack, a discriminative correlation filter-based tracker with anti-dark capabilities that adapts to varying lighting conditions.
* To address the issue of fisheye video distortions, the authors of the UAV-Human<cit.> dataset suggest a fisheye-based action recognition method that uses flat RGB videos as guidance.
* To classify disaster events from an unmanned aerial vehicle (UAV), the authors of the AIDER<cit.> dataset have created a lightweight convolutional neural network (CNN) architecture that they have named ERNet.
* VERI-Wild<cit.> introduces FDA-Net, a novel method for vehicle re-identification. It includes an embedding discriminator and a feature distance adversarial network to enhance the model's capacity to differentiate between vehicles.
§.§.§ Managing Diverse Conditions
Various environmental conditions, such as different lighting, weather, and occlusions, present challenges that dedicated methodologies address. For example, DarkTrack2021 used the low-light enhancer SCT to maintain performance in low-light conditions (a simple illustrative enhancer is sketched below).
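As a much simpler stand-in for a learned enhancer such as SCT, the snippet below applies a gamma-style curve to brighten a dark frame before it is passed to a tracker. It only illustrates the pre-processing role an enhancer plays; it is not the SCT method.

import numpy as np

def gamma_enhance(frame, gamma=0.45):
    """frame: uint8 image (H, W, 3). A gamma < 1 lifts dark pixels more
    than bright ones, a crude analogue of a learned curve-projection model."""
    x = frame.astype(np.float32) / 255.0
    return np.clip((x ** gamma) * 255.0, 0, 255).astype(np.uint8)

# A tracker would then consume the enhanced frame:
dark = (np.random.rand(240, 320, 3) * 40).astype(np.uint8)  # synthetic dark frame
bright = gamma_enhance(dark)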
The diversity of UAV datasets is a cornerstone of their utility, enabling a wide array of applications across different fields. From disaster management to human behavior analysis, the rich variety of data types, capture conditions, and application contexts ensures that UAV datasets can meet the specific needs of each task. The application of advanced methods, including deep learning, transfer learning, and multimodal analysis, further enhances the value derived from these datasets, pushing the boundaries of what UAVs can achieve in research and practical applications.
§ THE POTENTIAL OF COMPUTER VISION RESEARCH IN UAV DATASETS
Unmanned Aerial Vehicles (UAVs) have greatly expanded the field of computer vision research. UAV datasets offer unique and flexible data that supports a range of computer vision tasks, from action recognition to object detection. This section explores how UAV datasets are advancing computer vision research across these tasks, as illustrated in Figure <ref>, which highlights the diverse applications and the development of new methods centered around these datasets.
§.§ Leveraging UAV Datasets for Computer Vision Applications
UAV datasets support many computer vision applications, including human behavior analysis, emergency response, nighttime tracking, and surveillance. Below, we outline some of these areas, illustrated with the datasets discussed in this paper.
§.§.§ Human Behavior Understanding and Gesture Recognition
The UAV-Human platform is essential for utilizing UAVs to study human behavior, including a range of conditions and perspectives for pose estimation and action recognition. This dataset contains multi-modal information, including skeleton, RGB, infrared, and night vision modalities.
Essential for UAV control and gesture identification, UAV-Gesture contains 119 high-definition video clips covering 13 command-and-navigation gestures, annotated with body joints and gesture classes. Because the dataset was captured outdoors, with variations in phase, orientation, and body shape, it is well suited to practical UAV control applications.
§.§.§ Emergency Response and Disaster Management
RescueNet provides detailed pixel-level annotations and high-resolution images for 10 classes, including buildings, roads, pools, and trees. It is designed for post-disaster damage assessment using UAV imagery. It supports semantic segmentation using state-of-the-art models, enhancing natural disaster response and recovery strategies.
AIDER focuses on classifying disaster events, utilizing images of traffic accidents, building collapses, fires, and floods to support real-time disaster management applications by training convolutional neural networks (CNNs).
§.§.§ Traffic Surveillance and Vehicle Re-Identification
In traffic surveillance, AU-AIR prioritizes real-time performance and offers annotations for a variety of object categories, including cars, buses, and pedestrians. It bridges the gap between computer vision and robotics by offering multi-modal sensor data for advanced research in data fusion applications.
VRAI is the largest UAV-based vehicle re-identification dataset, containing over 137,613 images of 13,022 vehicles with annotations for unique IDs, color, vehicle type, attributes, and discriminative parts. It supports vehicle ReID tasks with diverse scenarios and advanced algorithms.
VERI-Wild, which contains over 400,000 photos of 40,000 vehicles taken by 174 CCTV cameras in various urban settings, is essential for research on vehicle re-identification. It uses techniques like FDA-Net to improve ReID accuracy by addressing variations in backgrounds, illumination, occlusion, and viewpoints.
§.§.§ Event Recognition and Video Understanding
For training models in event recognition in UAV videos, ERA contains 2,864 labeled video snippets for 24 event classes and 1 normal class that were gathered from YouTube. This dataset captures dynamic events in various conditions, supporting temporal event localization and video retrieval tasks.
§.§.§ Nighttime Tracking and Low-Light Conditions
Including 110 annotated sequences with over 100,000 frames, DarkTrack2021 is crucial for improving UAV tracking at night. By employing spatial-channel transformers (SCT) and non-linear curve projection models, it improves the quality of low-light images and offers a thorough assessment framework.
The UAVDark135 dataset and the ADTrack algorithm are designed for all-day aerial tracking. ADTrack performs well in low light and adjusts to various lighting conditions thanks to its discriminative correlation filter foundation. More than 125,000 frames, specially annotated for low-light tracking scenarios, are included in the UAVDark135 dataset.
§.§.§ Object Tracking and Robust Vision
With 600 videos and 304,209 manually labeled frames, BioDrone is a benchmark for single object tracking with bionic drones. It captures challenges such as camera shake and drastic appearance changes, supporting robust vision analyses and evaluations of various single object tracking algorithms.
§.§.§ Urban Scene Segmentation and Forestry Analysis
UAVid provides annotations for eight classes and 30 high-resolution video sequences in 4K resolution to address segmentation challenges in urban scenes. It uses models such as Multi-Scale-Dilation net to support tasks like population density analysis and traffic monitoring.
FOR-instance provides UAV-based laser scanning data for tree instance segmentation and is intended for use in point cloud segmentation in forestry. It facilitates benchmarking and method development by supporting both instance and semantic segmentation.
§.§.§ Multimodal Data Synthesis and UAV Control
UAV-Assistant facilitates monocular pose estimation by introducing a multimodal dataset featuring exocentric and egocentric views. It enhances 3D pose estimation tasks with a novel smooth silhouette loss function and differentiable rendering techniques.
KITE incorporates spoken commands, audio, and images to enhance UAV control systems. It includes commands recorded by 16 speakers, supporting movement, camera-related, and scenario-specific commands with multi-modal approaches.
Together, these datasets improve a wide range of computer vision applications, including robust vision in difficult conditions, real-time traffic surveillance, emergency response, and human behavior analysis.
§.§ Development of Novel Methods Using UAV Datasets
UAV datasets have spurred the development of innovative methods in computer vision. As an example, the Guided Transformer I3D framework, which addresses distortions through unbounded transformations guided by flat RGB videos, was developed using the UAV-Human dataset. This framework enhances action recognition performance in fisheye videos. This approach is a prime example of how UAV datasets drive the creation of specialized algorithms to address particular difficulties brought about by aerial viewpoints.
The DarkTrack2021 benchmark introduces a Spatial-Channel Transformer (SCT) for enhancing low-light images in nighttime UAV tracking. Meanwhile, Bowen Li and team present the UAVDark135 dataset and the ADTrack algorithm for all-day aerial object tracking. ADTrack, equipped with adaptive illumination and anti-dark capabilities, outperforms other trackers in both well-lit and dark conditions. It processes over 30 frames per second on a single CPU, ensuring efficient tracking under various lighting conditions. The study emphasizes how crucial image illuminance data is and suggests a useful image enhancer to improve tracking performance in all-day situations.
For emergency response applications, the AIDER dataset has facilitated the development of ERNet, a lightweight CNN architecture optimized for embedded platforms. ERNet's architecture, which incorporates downsampling at an early stage and efficient convolutional layers, allows for real-time classification of aerial images on low-power devices. This showcases the practical use of UAV datasets in disaster management.
The VERI-Wild dataset introduces a novel approach called FDA-Net for vehicle re-identification, which uses a feature distance adversarial network to generate hard negative examples in the feature space. The VRAI authors, in turn, developed a specialized vehicle ReID algorithm that leverages detailed annotation information to explicitly identify the discriminative parts of each vehicle instance.
Ultimately, UAV datasets are essential in the field of computer vision research, providing distinct data that is invaluable for a diverse array of applications. They allow for the development of novel methods tailored to the specific challenges and opportunities presented by UAV technology, accelerating progress in areas such as human behavior analysis, emergency response, and nighttime tracking.
§ CONSTRAINTS OF UAVS
While Unmanned Aerial Vehicles (UAVs) have significantly advanced data collection and analysis in numerous fields, they are not without limitations, particularly concerning the datasets they generate. This section delves into the primary constraints associated with UAV datasets, emphasizing their impact on the field and suggesting areas for improvement.
§.§ Data Quality and Consistency
One of the most pressing limitations of UAV datasets is inconsistency in data quality. Factors such as weather, time of day, and UAV stability can all affect the quality of the data UAVs collect. For example, datasets collected in poor weather or at night may suffer from reduced visibility and increased noise, complicating subsequent analysis and model training. Even with advancements like low-light image enhancers and specialized algorithms for nighttime tracking, these solutions often require further refinement to match the reliability of daytime data.
§.§ Limited Scope and Diversity
UAV datasets often lack diversity in terms of geographic locations, environmental conditions, and the variety of captured objects. Many existing datasets, such as AU-AIR and ERA, focus heavily on specific scenarios like urban traffic surveillance or disaster response, which limits their generalizability to other contexts. Additionally, datasets such as UAV-Human and UAVDark135 tend to feature limited subject diversity and controlled environments, which may not accurately represent real-world conditions. This lack of diversity can lead to models that perform well in specific conditions but struggle in untested environments.
§.§ Annotation Challenges
The process of annotating UAV datasets is often time-consuming and labor-intensive. High-resolution images and videos captured by UAVs require detailed, pixel-level annotations, which are essential for tasks like semantic segmentation and object detection. This is clearly seen in datasets such as RescueNet and FOR-Instance, where the annotation process is recognized as a major bottleneck. The intensive labor required for comprehensive annotation limits the availability of large, well-labeled datasets, which are crucial for training robust machine learning models.
§.§ Computational and Storage Demands
The high resolution and large volume of data generated by UAVs pose significant computational and storage challenges. Processing and analyzing large-scale UAV datasets demands substantial computational resources and advanced hardware, which may not be readily available to all researchers. For example, the dense, high-resolution images in datasets like UAVid and BioDrone require extensive processing power for effective utilization. Additionally, storing such vast amounts of data can be impractical for some institutions, hindering widespread access and collaboration.
§.§ Integration with Other Data Sources
Another limitation is the integration of UAV datasets with other data sources. While multimodal datasets that combine UAV data with other sensor inputs (such as satellite imagery, GPS data, and environmental sensors) provide richer insights, they also introduce complexity in data alignment and fusion. The AU-AIR dataset, which includes visual data along with GPS coordinates and IMU data, exemplifies the potential and challenges of such integration. Ensuring the synchronized and accurate fusion of data from multiple sources remains a technical hurdle that needs addressing.
§.§ Real-Time Data Processing
The ability to process and analyze UAV data in real-time is critical for applications like disaster response and surveillance. However, achieving real-time processing with high accuracy is challenging due to the aforementioned computational demands. Models such as those evaluated in the DarkTrack2021 and UAVDark135 datasets show promise but often require optimization to balance speed and accuracy effectively. Real-time processing also necessitates robust algorithms capable of handling dynamic environments and changing conditions without significant delays.
§.§ Ethical and Legal Considerations
Finally, the use of UAVs and their datasets is subject to various ethical and legal considerations. Issues such as privacy, data security, and regulatory compliance must be addressed to ensure responsible and lawful use of UAV technology. These considerations can limit the scope of data collection and usage, particularly in populated areas or sensitive environments, thereby constraining the availability and applicability of UAV datasets.
Despite the transformative potential of UAV datasets across various disciplines, their limitations must be acknowledged and addressed to maximize their utility. Improving data quality, enhancing dataset diversity, streamlining annotation processes, and overcoming computational and storage challenges are essential steps. Additionally, integrating UAV data with other sources, advancing real-time processing capabilities, and adhering to ethical and legal standards will ensure that UAV datasets can be effectively leveraged for future research and applications. By tackling these limitations, the field can fully harness the power of UAV technology to drive innovation and deepen our understanding of complex, dynamic environments from an aerial perspective.
§ PROSPECTS FOR FUTURE UAV RESEARCH
Future studies on UAV datasets need to focus on a few crucial areas to improve their usefulness and cross-domain applicability as the field grows. The following suggestions highlight the crucial paths for creating UAV datasets and maximizing their potential for future innovations.
§.§ Enhancing Dataset Diversity and Representativeness
Further investigations ought to concentrate on generating more varied and representative UAV datasets. This involves capturing data in a wider range of environments, weather conditions, and geographic locations to ensure models trained on these datasets are robust and generalizable. To obtain comprehensive data for tasks like environmental monitoring, urban planning, and disaster response, datasets can be expanded to include a variety of urban, rural, and natural settings.
§.§ Incorporating Multimodal Data Integration
Integrating multiple data modalities, such as thermal, infrared, LiDAR<cit.>, and hyperspectral<cit.> imagery, can significantly enrich UAV datasets. In the future, these data types should be combined to create multimodal datasets that provide a more comprehensive view of the scenes that were recorded. This integration can improve the accuracy of applications such as vegetation analysis, search and rescue operations, and wildlife monitoring.
§.§ Advancing Real-Time Data Processing and Transmission
For applications like emergency response and traffic monitoring that demand quick analysis and decision-making, developing techniques for real-time data processing and transmission is essential. Future research should focus on optimizing data compression, transmission protocols, and edge computing techniques to enable swift and efficient data handling directly on UAVs.
§.§ Improving Annotation Quality and Efficiency
High-quality annotations are vital for the effectiveness of UAV datasets in training machine learning models. Future studies should investigate automated and semi-automated annotation tools that leverage AI to reduce manual labor and improve annotation accuracy. Additionally, crowdsourcing and collaborative platforms can be utilized to gather diverse annotations, further enhancing dataset quality.
§.§ Addressing Ethical and Privacy Concerns
As UAVs become more prevalent, addressing ethical and privacy issues becomes increasingly important. Guidelines and frameworks for the ethical use of UAV data should be established by future research, especially for applications involving surveillance and monitoring. It is important to focus on creating methods that protect privacy and collect data in a way that respects regulations and earns the trust of the public.
§.§ Expanding Application-Specific Datasets
The creation of customized datasets for specific uses can effectively boost new ideas in certain areas. For instance, datasets focused on agricultural monitoring, wildlife tracking, or infrastructure inspection can provide domain-specific insights and improve the precision of related models. To address the specific needs of various industries, future research should give priority to developing such targeted datasets.
§.§ Enhancing Interoperability and Standardization
Standardizing data formats and annotation protocols across UAV datasets can make it easier for researchers and developers to use and make the datasets more interoperable. Future efforts should aim to establish common standards and benchmarks, enabling the seamless integration of datasets from various sources and promoting collaborative research efforts.
§.§ Utilizing Advanced Machine Learning Techniques
The application of cutting-edge machine learning techniques, such as deep learning and reinforcement learning, to UAV datasets holds immense potential for advancing UAV capabilities. Future research should explore innovative algorithms and models that can leverage the rich data provided by UAVs to achieve breakthroughs in areas like autonomous navigation, object detection, and environmental monitoring.
§.§ Conducting Longitudinal Studies
Longitudinal studies that collect UAV data over extended periods can yield valuable insights into how environments change over time. Future research should emphasize continuous data collection efforts to monitor changes in ecosystems, urban developments, and disaster-prone areas, enabling more informed and proactive decision-making.
§.§ Fostering Collaborative Research and Open Data Initiatives
Encouraging collaboration among researchers, institutions, and industries can accelerate advancements in UAV datasets. Open data initiatives that make UAV datasets public should be supported by future research. These initiatives will encourage innovation and allow a wider range of researchers to contribute to and use these resources.
By addressing these future research directions, the field of UAV datasets can continue to evolve, offering increasingly sophisticated tools and insights that drive progress across multiple domains. UAV datasets are still being improved and added to, which is very important for getting the most out of UAV technology and making room for new discoveries and uses.
§ RESULTS AND DISCUSSION OF REVIEWED PAPERS
The datasets discussed in this section represent the application of the papers reviewed in this survey. Our analysis revealed that KITE, RescueNet, and BioDrone are relatively new and have not been thoroughly investigated in the literature. While one of the datasets we reviewed, ERA, is not very recent, it still lacks sufficient study to fully reveal its potential. The datasets included in our review were selected based on the number of citations their associated papers have received, emphasizing those with higher citation counts. We examined several papers that make compelling use of the datasets we evaluated, carefully reviewing their experiments and analyses of results. These researchers used the reviewed datasets as benchmarks and applied various methods to them. The best results for the methods applied to the reviewed datasets are reported in this section and in Tables <ref>, <ref>, and <ref>.
§.§ AU-AIR
In their study, Jiahui et al.<cit.> selected AU-AIR as a benchmark dataset for their proposed real-time object detection model, RSSD-TA-LATM-GID, specifically designed for small-scale object detection. Their model outperformed YOLOv4<cit.> and YOLOv3<cit.>, while the MobileNetv2-SSDLite approach they also evaluated yielded the lowest mean average precision (mAP) score.
Walambe et al.<cit.> employed baseline models on the AU-AIR dataset as one of their evaluative benchmarks. The study aimed to demonstrate the effectiveness of different techniques, including ensembles, in detecting objects of varying scales. The baseline technique yielded the highest performance, with a mean average precision (mAP) score of 6.63%, achieved by applying color augmentation to the dataset. The ensemble methods YOLO+RetinaNet and RetinaNet+SSD scored 3.69% and 4.03%, respectively. Saeed et al.<cit.> modified the architecture of the CenterNet model by using different Convolutional Neural Networks (CNNs) as backbones, such as ResNet18, Hourglass-104, ResNet101, and Res2Net101; the findings for these backbones are presented in Table <ref>.
Gupta and Verma in their paper <cit.> utilized the AU-AIR data as a reference point, employing a range of advanced models to achieve precise and automated detection and classification of road traffic. The YOLOv4 model achieved the highest mean average precision (mAP) score of 25.94% on the AU-AIR dataset. The Faster R-CNN and YOLOv3 models achieved the second and third highest maximum average precision (mAP) scores, with values of 13.77% and 13.33% respectively.
§.§ FOR-instance
Bountos et al. made extensive use of the FOR-Instance dataset in their study <cit.> introducing the FoMo-Net approach. The dataset was used to analyze point cloud representations obtained from LiDAR sensors to better understand tree geometry. Existing baseline techniques such as PointNet, PointNet++, and Point Transformer were employed for these objectives on the aerial modality, and the corresponding findings are presented in Table <ref>. In a separate paper, Zhang et al.<cit.> used the FOR-instance dataset to train their proposed HFC algorithm and compare its performance with other established approaches. The authors applied several techniques and ensemble approaches (Xing2023, HFC+Xing2023, HFC+Mean Shift, HFC) to the forest types (CULS, NIBIO, NIBIO2, SCION, RMIT, TUWIEN) represented in the FOR-instance dataset. Among all the methods, HFC demonstrated superior performance; its best results on the various forest types are presented in Table <ref>.
§.§ UAV-Assistant
Albanis et al. used the UAV-Assistant dataset as a benchmark in their research <cit.>. They conducted a comparative analysis of 6DOF object pose estimation with BPnP<cit.> and HigherHRNet<cit.> using several criteria. The analysis revealed that loss functions play a crucial role in pose estimation: the l_p loss function outperformed the l_h loss function, particularly on the M2ED drone, yielding improved accuracy metrics. HigherHRNet outperformed HRNet<cit.> on smaller objects such as the Tello drone, but not on the M2ED drone, indicating its potential advantage for smaller object classes. Their analysis of qualitative heatmaps revealed that the l_p loss function localized keypoints more accurately than the Gaussian-distributed l_h model. Table <ref> displays the accuracy metrics (ACC2 and ACC5) obtained in this research. For BPnP, we include the accuracy for both the M2ED and Tello drones, as they achieved the highest accuracy outcomes; for HRNet and HigherHRNet, the best accuracy was achieved on M2ED.
§.§ AIDER
The AIDER dataset has been utilized as a benchmark both by its authors, in developing their method EmergencyNet, and by Alrayes et al. In the EmergencyNet paper <cit.>, various pre-trained models were applied to the AIDER dataset, with the best F1 accuracy achieved by VGG16 (96.4%) and ResNet50 (96.1%). However, the memory consumption of VGG16 and ResNet50 was quite high, at 59.39MB and 96.4MB respectively, with ResNet50 using nearly 24 million parameters and VGG16 14.8 million. EmergencyNet, in contrast, achieved a 95.7% F1 score with only 0.368MB of memory. Alrayes et al. benchmarked their AISCC-DE2MS model on AIDER and found that their algorithm outperformed the genetic, cat-swarm, and artificial bee colony algorithms, using MSE and PSNR for evaluation. The methods were compared on five photos, and the best result among them is shown in Table <ref>.
§.§ DarkTrack2021
Changhong Fu and his team utilized the DarkTrack2021 benchmark as a foundation for developing SAM-DA, a framework powered by the Segment Anything Model (SAM). Their research <cit.> focused on effectively addressing illumination variation and low ambient intensity. They compared their model with various methods, in particular the baseline tracker UDAT<cit.>, and their novel approach outperformed UDAT with substantial improvements of 7.1% under illumination variation and 7.8% under low ambient intensity. The authors evaluated 15 state-of-the-art trackers and found that SAM-DA demonstrated the most promising results. In another study <cit.>, Changhong Fu delved into Siamese object tracking, highlighting the significance of UAVs in visual object tracking, and again leveraged the DarkTrack2021 dataset as a benchmark to assess model performance in low-illumination conditions, with detailed results and the applied models presented in Table <ref>.
§.§ UAV-Human
Azmat et al.<cit.> address the challenges of human action recognition (HAR) from UAV-captured data in their work on the UAV-Human dataset. They evaluated their HAR system on 67,428 video sequences of 119 subjects in various contexts from the UAV-Human dataset. The approach achieved a mean accuracy of 48.60% across eight action classes, indicating that backdrops, occlusions, and camera motion hinder human movement recognition in this dataset. Lin et al.<cit.> studied text bag filtering techniques for model training, emphasizing data quality. Their ablation study indicated that the text bag filtering ratio influences CLIP matching accuracy and zero-shot transfer performance, and that filtering training data improved model generalization, especially in unsupervised learning. Huang et al.<cit.> evaluated their 4s-MS&TA-HGCN-FC skeleton-based action recognition model on the UAV-Human dataset. The model achieved 45.72% accuracy on the CSv1 benchmark and 71.84% on the CSv2 benchmark, surpassing previous state-of-the-art techniques, and was found to handle the viewpoint changes, motion blur, and resolution changes of UAV-captured data.
§.§ UAVDark135
Zhu et al.<cit.> and Ye et al.<cit.> used the UAVDark135 dataset to evaluate their strategies for improving low-light tracking performance. The Darkness Clue-Prompted Tracking (DCPT) approach by Zhu et al. showed considerable gains, reaching a 57.51% success rate on UAVDark135, a 1.95% improvement over the base tracker that demonstrates the effectiveness of incorporating darkness clues. DCPT's gated feature aggregation approach further increased the success score by 2.67%, making it a reliable nighttime UAV tracking system. Ye et al.'s DarkLighter (DL) approach also improved tracking performance on UAVDark135: DL improved the AUC of the SiamAPN<cit.><cit.> tracker by over 29% and its precision by 21%. It also worked well across tracking backbones, enhancing precision and success rates under illumination variation, fast motion, and low resolution. DL surpassed modern low-light enhancers like LIME by 1.68% in success rate and 1.45% in precision.
§.§ VRAI
VRAI was utilized to establish a vehicle re-identification baseline. Syeda Nyma Ferdous, Xin Li, and Siwei Lyu <cit.> tested their uncertainty-aware multitask learning framework on this dataset and achieved 84.47% Rank-1 accuracy and 82.86% mAP. This model's capacity to handle aerial image size and position fluctuations was greatly improved by multiscale feature representation and a Pyramid Vision Transformer (PVT) architecture.
Shuoyi Chen, Mang Ye, and Bo Du<cit.> also focused on vehicle ReID using VRAI. Their RotTrans, a rotation-invariant vision transformer, surpassed current state-of-the-art approaches by 3.5% in Rank-1 accuracy and 6.2% in mean average precision (mAP), addressing UAV-based vehicle ReID challenges, such as rotation variation in aerial views, that typical pedestrian ReID methods struggle with.
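The Rank-1 and mAP figures quoted in these ReID studies come from the standard CMC/mAP evaluation protocol. The sketch below shows a simplified version of that computation from a query-gallery distance matrix; real protocols additionally filter out same-camera matches and handle ties.

import numpy as np

def evaluate_reid(dist, query_ids, gallery_ids):
    """dist: (num_query, num_gallery) distances; ids: integer identity labels.
    Returns (Rank-1 accuracy, mean Average Precision)."""
    rank1, aps = 0.0, []
    for i, qid in enumerate(query_ids):
        order = np.argsort(dist[i])            # nearest gallery items first
        matches = gallery_ids[order] == qid    # relevance of each ranked item
        if not matches.any():
            continue
        rank1 += float(matches[0])
        hits = np.cumsum(matches)              # relevant items seen so far
        precision = hits[matches] / (np.flatnonzero(matches) + 1)
        aps.append(precision.mean())           # average precision per query
    return rank1 / len(aps), float(np.mean(aps))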
§.§ UAV-Gesture
Usman Azmat et al.<cit.> and Papaioannidis et al.<cit.> used the UAV-Gesture dataset to evaluate their approaches to human action and gesture recognition. The UAV-Gesture collection contains 119 high-definition RGB videos covering 13 distinct gestures used to control UAVs, and its diversity of viewpoints and similarity between movements make it well suited to testing recognition systems. The method of Azmat et al. achieved 0.95 action recognition accuracy on UAV-Gesture, with mean precision, recall, and F1-score of 0.96, 0.95, and 0.94. Several analyses supported by confusion matrices showed the system's ability to distinguish gestures. Papaioannidis et al. found that their gesture recognition method outperformed DD-Net<cit.> and P-CNN<cit.> by 3.5% in accuracy. The authors stressed the importance of using 2D skeletal data extracted from videos to boost recognition accuracy, and the method's real-time performance makes it suitable for embedded AI hardware in dynamic UAV scenarios.
§.§ UAVid
The UAVid dataset has been extensively utilized as a benchmark by several researchers in the development of innovative methods for semantic segmentation in urban environments. Wang et al.<cit.> introduced the Bilateral Awareness Network (BANet) and applied it to the UAVid dataset, achieving a notable mean Intersection-over-Union (mIoU) score of 64.6%. BANet's ability to accurately segment various classes within high-resolution urban scenes was demonstrated through both quantitative metrics and qualitative analysis, outperforming other state-of-the-art models like the MSD benchmark.
Similarly, Rui Li et al.<cit.> proposed the Attention Aggregation Feature Pyramid Network (A²-FPN) and reported significant improvements on the UAVid dataset. A²-FPN achieved the highest mIoU across five out of eight classes, surpassing BANet by 1% in overall performance. The model's effectiveness was particularly evident in its ability to correctly identify moving vehicles, a challenging task for many segmentation models.
Libo Wang et al.<cit.> introduced the UNetFormer, which further pushed the boundaries of semantic segmentation on the UAVid dataset. Achieving an impressive mIoU of 67.8%, the UNetFormer outperformed several advanced networks, including ABCNet<cit.> and hybrid Transformer-based models like BANet and BoTNet<cit.>. The UNetFormer demonstrated a strong ability to handle complex segmentation tasks, particularly in accurately identifying small objects like humans.
Lastly, Michael Ying Yang et al.<cit.> applied the Context Aggregation Network(CAN) to the UAVid dataset, achieving a mIoU score of 63.5% while maintaining a high processing speed of 15 frames per second (FPS). This model was noted for its ability to maintain consistency in both local and global scene semantics, making it a competitive choice for real-time applications in urban environments.
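For reference, the mIoU metric reported by all of these UAVid studies is the intersection-over-union computed per class from a confusion matrix and averaged over classes. A minimal implementation is sketched below; benchmark code typically also handles a void or unlabeled class, which is omitted here.

import numpy as np

def mean_iou(pred, gt, num_classes):
    """pred, gt: integer label maps of identical shape, values in [0, num_classes)."""
    idx = num_classes * gt.ravel().astype(np.int64) + pred.ravel()
    conf = np.bincount(idx, minlength=num_classes ** 2)
    conf = conf.reshape(num_classes, num_classes)  # rows: truth, cols: prediction
    ious = []
    for c in range(num_classes):
        tp = conf[c, c]
        union = conf[c, :].sum() + conf[:, c].sum() - tp
        if union > 0:
            ious.append(tp / union)
    return float(np.mean(ious))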
§.§ VERI-Wild
The VERI-Wild dataset has been extensively utilized as a benchmark by several researchers in the development of innovative methods for vehicle re-identification (ReID) in real-world scenarios. Meng et al.<cit.> introduced the Parsing-based View-aware Embedding Network (PVEN) and applied it to the VERI-Wild dataset, achieving significant improvements in mean Average Precision (mAP) across small, medium, and large test datasets, with increases of 47.4%, 47.2%, and 46.9%, respectively. PVEN’s ability to perform view-aware feature alignment allowed it to consistently outperform state-of-the-art models, particularly in Cumulative Match Characteristic (CMC) metrics, where it showed a 32.7% improvement over FDA-Net at rank 1.
Similarly, Lingxiao He et al.<cit.> evaluated the FastReID toolbox on the VERI-Wild dataset, highlighting its effectiveness in accurately identifying vehicles across various conditions. FastReID achieved state-of-the-art performance, particularly in Rank-1 accuracy (R1-Accuracy) and mAP, showcasing its robustness in handling the complexities of vehicle ReID tasks in surveillance and traffic monitoring environments.
Fei Shen et al.<cit.> applied the GiT method to the VERI-Wild dataset, securing top performance across all testing subsets, including Test3000 (T3000), Test5000 (T5000), and Test1000 (T1000). The GiT method outperformed the second-place method, PCRNet, by 0.41% in Rank-1 identification rate and 0.45% in mAP on the Test1000 subset. The study emphasized the importance of leveraging both global and local features, as GiT demonstrated superior generalization across different datasets and conditions. In a separate study, Fei Shen et al.<cit.> developed the Hybrid Pyramidal Graph Network (HPGN) approach, which achieved the highest Rank-1 identification rate among the evaluated methods on the VERI-Wild dataset, further contributing to the advancing field of vehicle ReID. The findings highlighted the resilience of HPGN, especially in difficult circumstances such as fluctuating day and night conditions, where alternative approaches exhibited a decrease in effectiveness.
Lastly, Khorramshahi et al.<cit.> presented a residual generation model that improved mAP by 2.0% and CMC1 by 1.0% compared to baseline models. The model's reliance on residual information, indicated by a high alpha value (α = 0.94), proved crucial for extracting robust features from the dataset. This self-supervised method demonstrated its efficacy on the VERI-Wild dataset, further proving its adaptability and usefulness in vehicle ReID tasks.
§ CONCLUSION
In this survey paper, we examined the current state of UAV datasets, highlighting their various applications, inherent challenges, and future directions. UAV datasets are essential in areas such as disaster management, surveillance, agriculture, environmental monitoring, and human behavior analysis. Advanced machine learning techniques have improved UAV capabilities, enabling more precise data collection and analysis.
Despite their potential, UAV datasets face several challenges, including data quality, consistency, and the need for standardized annotation protocols. Ethical and privacy concerns necessitate strong frameworks to ensure responsible use.
Future research should increase dataset diversity, integrate multimodal data, and improve real-time data processing. Improving annotation quality and promoting collaborative research and open data initiatives will increase dataset utility.
To summarize, UAV datasets are at a critical stage of development, with significant opportunities for technological advancements. Addressing current challenges and focusing on future research directions will result in new discoveries, keeping UAV technology innovative and practical.
b16
Syed Agha Hassnain Mohsan, Muhammad Asghar Khan, Fazal Noor, Insaf Ullah, and Mohammed H Alsharif.
Towards the unmanned aerial vehicles (uavs): A comprehensive review.
Drones, 6(6):147, 2022.
b18
Kien Nguyen, Clinton Fookes, Sridha Sridharan, Yingli Tian, Feng Liu, Xiaoming Liu, and Arun Ross.
The state of aerial surveillance: A survey.
arXiv preprint arXiv:2201.03080, 2022.
b21
Lianghua Huang, Xin Zhao, and Kaiqi Huang.
Got-10k: A large high-diversity benchmark for generic object tracking in the wild.
IEEE transactions on pattern analysis and machine intelligence, 43(5):1562–1577, 2019.
b22
Shiyu Hu, Xin Zhao, Lianghua Huang, and Kaiqi Huang.
Global instance tracking: Locating target more like humans.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(1):576–592, 2022.
b1
Maryam Rahnemoonfar, Tashnim Chowdhury, and Robin Murphy.
Rescuenet: a high resolution uav semantic segmentation dataset for natural disaster damage assessment.
Scientific data, 10(1):913, 2023.
b19
Ozan Oktay, Jo Schlemper, Loic Le Folgoc, Matthew Lee, Mattias Heinrich, Kazunari Misawa, Kensaku Mori, Steven McDonagh, Nils Y Hammerla, Bernhard Kainz, et al.
Attention u-net: Learning where to look for the pancreas.
arXiv preprint arXiv:1804.03999, 2018.
b20
Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, and Jiaya Jia.
Pyramid scene parsing network.
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2881–2890, 2017.
b23
Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam.
Encoder-decoder with atrous separable convolution for semantic image segmentation.
In Proceedings of the European conference on computer vision (ECCV), pages 801–818, 2018.
b17
Maryam Rahnemoonfar, Tashnim Chowdhury, Argho Sarkar, Debvrat Varshney, Masoud Yari, and Robin Roberson Murphy.
Floodnet: A high resolution aerial imagery dataset for post flood scene understanding.
IEEE Access, 9:89644–89654, 2021.
b2
Tianjiao Li, Jun Liu, Wei Zhang, Yun Ni, Wenqian Wang, and Zhiheng Li.
Uav-human: A large benchmark for human behavior understanding with unmanned aerial vehicles.
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 16266–16275, 2021.
b24
Bowen Cheng, Bin Xiao, Jingdong Wang, Honghui Shi, Thomas S Huang, and Lei Zhang.
Higherhrnet: Scale-aware representation learning for bottom-up human pose estimation.
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5386–5395, 2020.
b25
Hao-Shu Fang, Shuqin Xie, Yu-Wing Tai, and Cewu Lu.
Rmpe: Regional multi-person pose estimation.
In Proceedings of the IEEE international conference on computer vision, pages 2334–2343, 2017.
b26
Veerachart Srisamosorn, Noriaki Kuwahara, Atsushi Yamashita, Taiki Ogata, Shouhei Shirafuji, and Jun Ota.
Human position and head direction tracking in fisheye camera using randomized ferns and fisheye histograms of oriented gradients.
The Visual Computer, 36(7):1443–1456, 2020.
b27
Konstantinos K Delibasis, Vassilis P Plagianakos, and Ilias Maglogiannis.
Pose recognition in indoor environments using a fisheye camera and a parametric human model.
In 2014 International Conference on Computer Vision Theory and Applications (VISAPP), volume 2, pages 470–477. IEEE, 2014.
b6
Christos Kyrkou and Theocharis Theocharides.
Deep-learning-based aerial image classification for emergency response applications using unmanned aerial vehicles.
In CVPR workshops, pages 517–525, 2019.
b28
Karen Simonyan and Andrew Zisserman.
Very deep convolutional networks for large-scale image recognition.
arXiv preprint arXiv:1409.1556, 2014.
b29
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.
Deep residual learning for image recognition.
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
b30
§ APPENDIX
The following images were captured either from the papers that introduced each dataset or from the public dataset repositories referenced in those papers.
§.§ AIDER
§.§ BioDrone
§.§ ERA
§.§ FOR-instance
§.§ UAVDark135
§.§ UAV-Human
§.§ UAVid
§.§ DarkTrack2021
§.§ VRAI
§.§ VERI-Wild
§.§ RescueNet
§.§ UAV-Assistant
§.§ AU-AIR
§.§ UAV-Gesture
§.§ Kite
|
http://arxiv.org/abs/2409.02862v1 | 20240904164156 | Four-dimensional phase space tomography from one-dimensional measurements in a high-power hadron ring | [
"Austin Hoover"
] | physics.acc-ph | [
"physics.acc-ph"
] |
[email protected]
Oak Ridge National Laboratory, Oak Ridge, Tennessee 37830, USA
§ ABSTRACT
In this paper, we use one-dimensional measurements to infer the four-dimensional phase space density of an accumulated 1 GeV proton beam in the Spallation Neutron Source (SNS) accelerator. The reconstruction was performed using MENT, an exact maximum-entropy tomography algorithm, and thus represents the most reasonable inference from the data. The reconstructed distribution reproduces the measured profiles with the same dynamic range as the measurement devices, and simulations indicate that the problem is well-constrained. Similar measurements could serve as benchmarks for simulations of intense, coupled beam dynamics in the SNS or other hadron rings.
Four-dimensional phase space tomography from one-dimensional measurements in a high-power hadron ring
Austin Hoover
September 9, 2024
=====================================================================================================
§ INTRODUCTION
The evolution of an intense hadron beam in an accelerator depends strongly on its initial distribution in six-dimensional phase space. There is typically uncertainty attached to the distribution because conventional diagnostics measure one or two dimensions at a time. Direct high-dimensional phase space imaging could address this problem in hadron linacs <cit.>. In hadron rings, however, the beam energy and intensity preclude direct measurements. High-dimensional information could nonetheless benefit certain applications in hadron rings. For example, proposed methods such as eigenpainting <cit.> and nonlinear integrable optics <cit.> generate intense beams with strong interplane correlations in coupled focusing systems. Four-dimensional information could also be used to reliably predict the two-dimensional density on high-power targets in operational accelerators.
An alternative to direct phase space imaging is phase space tomography, where the phase space density is inferred from low-dimensional views. Recent experiments have demonstrated four-dimensional tomography from two-dimensional measurements in electron accelerators <cit.>, as well as in H- linacs using laser-wire diagnostics <cit.>. Two-dimensional measurements are not typically available in hadron rings, where the primary diagnostics are wire scanners, which record the beam density on a one-dimensional axis in the transverse plane.[The two-dimensional beam image on a high-power target can be viewed by coating the target with a luminescent material. A long optical fiber system is required to view the images because of the high radiation levels near the target <cit.>. We did not attempt to infer the four-dimensional density from such images. Strict requirements on the beam shape on the target make it difficult to perform the necessary quadrupole scans. Additionally, the images are noisy and degrade over the target lifetime.] It is not a priori obvious whether a realistic number of one-dimensional views can constrain a four-dimensional distribution, nor whether the required views can be obtained in a real machine.
In this paper, we use wire-scanner measurements to reconstruct the four-dimensional phase space density of an accumulated 1 GeV proton beam in the Spallation Neutron Source (SNS) accelerator. We follow the work of Minerbo, Sander, and Jameson <cit.>, who applied four-dimensional tomography to one-dimensional data in a low-energy hadron linac nearly 45 years ago. The reconstruction utilizes cross-plane information from diagonal wires mounted alongside horizontal and vertical wires on each device. We reconstruct the distribution in two steps: first, we fit the (overdetermined) covariance matrix of second-order moments to the measured rms beam sizes; second, we incorporate the covariance matrix in a prior distribution and maximize the distribution's entropy relative to this prior, subject to the measurement constraints. The reconstructed distribution reproduces the measured profiles with the same dynamic range as the measurement devices, and simulations indicate that the problem is well-constrained. Similar measurements could serve as benchmarks for simulations of intense, coupled beam dynamics in the SNS or other hadron rings.
§ EXPERIMENT
Our experiment was conducted at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL), shown in Fig. <ref>. The SNS bombards a liquid mercury target with intense proton pulses at 60 Hz repetition rate <cit.>. Each pulse is the superposition of 1000 “minipulses” injected from a 400-meter linac into a 250-meter storage ring at 1.3 GeV kinetic energy. The final pulse contains approximately 1.4 × 10^14 protons, corresponding to 1.7 MW beam power. The accumulated beam is extracted and transported from the ring to the target in a 150-meter Ring-Target Beam Transport (RTBT).
The diagnostics in the ring are limited to beam position monitors (BPMs) and beam loss monitors (BLMs). There are additional diagnostics in the RTBT; we focus on four wire scanners just before the target.
Each wire scanner consists of three 100-μm tungsten wires mounted on a fork at forty-five-degree angles relative to each other. The secondary electron emission from each wire gives the beam density on the horizontal (x), vertical (y), and diagonal (u) axis. Each wire scanner operates at 1 Hz repetition rate; thus, each point in the measured profile corresponds to a different beam pulse. A single measurement takes approximately five minutes, including the return to the initial wire position; however, the four wire scanners run in parallel, so each scan generates twelve profiles.
Fig. <ref> shows the locations of the wire scanners and nearby quadrupole magnets. Two power supplies control the first eight quadrupoles, while the remaining quadrupoles before the target have independent power supplies. We aim to reconstruct the four-dimensional phase space distribution at a point before the first varied quadrupole (QH18).
Given a finite number of measurements, it is not entirely clear how to find the optimal set of views—those that place the tightest constraints on the unknown four-dimensional density. Nor is it entirely clear how many views are required to reconstruct the phase space density with sufficient accuracy. These questions deserve further study, especially when the focusing system has coupled or nonlinear elements. Here, we draw on previous experiments in electron accelerators that attempted to reconstruct the four-dimensional phase space distribution from two-dimensional projections onto the x-y plane in an uncoupled focusing system. Our focusing system is also uncoupled, and each wire scanner provides three one-dimensional projections that constrain the density in the x-y plane. Thus, we hypothesize that observations in these experiments are relevant to us.
Hock and Wolski <cit.> discovered a near closed-form solution for the four-dimensional phase space density from a set of two-dimensional measurements. The solution assumes independent rotations in the x-x' and y-y' planes by the phase advances μ_x and μ_y. In other words, in a normalized frame, the transfer matrix takes the form
M_i =
[ cosμ_x_i sinμ_x_i 0 0; -sinμ_x_i cosμ_x_i 0 0; 0 0 cosμ_y_i sinμ_y_i; 0 0 -sinμ_y_i cosμ_y_i ].
Sampling the entire μ_x-μ_y plane leads to an exact solution for f(x, x', y, y'). A lengthy nested scan utilizing this principle has been demonstrated <cit.>; however, accurate reconstructions appear to be possible without covering the entire range of phase advances. Examples include holding one phase advance fixed while the other varies <cit.> or varying both phases in a single quadrupole scan <cit.>. These studies suggest that μ_x, μ_y, and μ_x - μ_y should cover a range of values between 0 and π.
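To make the structure of this transfer matrix concrete, a minimal Python sketch is given below; the function name and interface are illustrative rather than taken from the cited works.

```python
import numpy as np

def normalized_transfer(mu_x, mu_y):
    """Block-diagonal rotation in normalized (x, x', y, y') phase space."""
    def rot(mu):
        c, s = np.cos(mu), np.sin(mu)
        return np.array([[c, s], [-s, c]])
    M = np.zeros((4, 4))
    M[:2, :2] = rot(mu_x)   # independent rotation in the x-x' plane
    M[2:, 2:] = rot(mu_y)   # independent rotation in the y-y' plane
    return M
```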
The constraints in the SNS limit our control of the phase advances at each wire scanner. In <cit.>, we found that the nominal optics produce little variation in μ_x - μ_y, leading to an ill-conditioned system when fitting the covariance matrix. We found a second set of optics that led to a better-conditioned problem; in the second set of optics, μ_x and μ_y at the last wire scanner (WS24) were shifted by 45 degrees in opposite directions relative to their nominal values. In the present study, we added a third set of optics with different phase advances at WS24. The phase advances at each wire scanner over the three sets of optics are plotted in Fig. <ref>. We found that this set of optics leads to accurate four-dimensional phase space density reconstructions when tested with known initial distributions.
We measured a beam generated by a nonstandard injection procedure intended to generate strong cross-plane correlations in the beam. (The standard SNS painting method naturally washes out cross-plane correlations during injection.) We collected beam profiles for three sets of optics in approximately fifteen minutes. Each of the 36 profiles was centered and thresholded to remove background noise. We modeled the accelerator lattice (which contains only drifts and quadrupoles) as a linear transformation of the phase space coordinates. We computed the transfer matrices from the OpenXAL <cit.> online model of the SNS accelerator.
§ RECONSTRUCTION RESULTS
As detailed in Appendix <ref>, we performed the reconstruction in two steps. First, we fit the overdetermined covariance matrix to the measured rms beam sizes using Linear Least Squares (LLSQ). Second, we used this covariance matrix to define a Gaussian prior distribution and used the MENT algorithm <cit.> to maximize the entropy relative to this prior. The result is the most conservative inference from the data.
Fig. <ref> shows the LLSQ fit of the covariance matrix to the measured rms beam size on each wire. The fit reproduces the rms beam sizes on all wires in all scan steps. The tightly fit ellipses over a range of phase advances in the x-x' and y-y' planes indicate a well-modeled lattice and small measurement errors. Standard LLSQ error propagation gives uncertainties of a few percent for the cross-plane covariances.
With a Gaussian prior based on the measured covariance matrix, the MENT algorithm converged to the linear-scale profiles in one iteration and the log-scale profiles in another two iterations, terminating at a mean absolute error of 10^-4 per profile. Fig. <ref> displays the measured and simulated beam profiles. We would like to highlight the agreement of the profiles down to the 10^-3 level; with improved dynamic range in the wire scanner profiles, MENT may be able to study halo formation processes in two- or four-dimensional phase space. (We do not know if the measured asymmetric tails are real particles or if they are due to background or cross-talk between wires.)
Fig. <ref> shows the marginal projections of the reconstructed four-dimensional phase space distribution.
The nonstandard painting method used in this study ideally generates a uniform density beam with strong linear cross-plane correlations; space charge and other effects blur these correlations and generate a nonuniform density. The remaining linear correlations would be significant if this painting scheme were used for neutron production. In Fig. <ref>, we see that the average beam density is larger than expected from an uncorrelated beam with the same x-x' and y-y' distribution; the accelerator optics would need to be optimized to meet the target specifications. Further examination of the distribution is beyond the scope of this paper.
The reconstruction quality depends on the accuracy of the forward model, including the measurement process. Our model represents the beamline as a 4 × 4 transfer matrix:
M =
[ M_xx M_xy; M_yx M_yy ],
where M_xx transforms the x-x' coordinates, M_yy transforms the y-y' coordinates, and M_xy and M_yx couple the horizontal and vertical planes. We assume M_xy = M_yx = 0 because there are no coupled elements in the RTBT. The following are potential sources of error. (i) The true quadrupole coefficients may not be equal to the readback values. We assume these values are accurate because measured Twiss parameters typically agree with design values <cit.>; in future studies, orbit-difference measurements could use BPM data to confirm the linear model accuracy. (ii) Tilted quadrupoles would generate off-block-diagonal terms in the transfer matrix. We expect skew quadrupole components to be negligible, as they can be measured using BPM data <cit.> and would result in a tilted image on the target during neutron production. (iii) An incorrect beam energy would cause a systematic error in the quadrupole focusing strengths. We ignore this source of error because the beam energy is known to within 0.1% from time of flight measurements in the ring <cit.>. (iv) The quadrupole coefficients depend on the particle energy (each particle sees a slightly different transverse focusing force), but our model assumes the beam is monoenergetic. Strong transverse-longitudinal correlations in the bunch could amplify the error. The energy spread in the SNS is only a few MeV on top of a 1 GeV synchronous particle energy, so we expect that our model is sufficient. (v) The model does not include space charge. The tune shift over one turn in the ring is approximately 3%, too small to make a difference in the RTBT wire scanner region <cit.>. (vi) The wire scanner measurements assume the distribution has no pulse-to-pulse variation. Back-to-back measurements show almost no change in the profiles. An electron scanner <cit.> could also be used to validate this assumption in a future experiment.
It is important to note that tomography is an inverse problem with no unique solution, even with a perfect forward model and noiseless data. MENT returns the most reasonable inference from the data and prior but does not guarantee that the data are informative. (MENT returns the most reasonable solution even in the absence of data.) No existing high-dimensional reconstruction algorithms report uncertainty, so we rely heavily on reconstructions from fake data sets generated by simulated beams that reflect realistic conditions at the reconstruction point. To this end, we generated a test beam using a PyORBIT <cit.> simulation with parameters qualitatively similar to our experiment. (The simulation was reported in <cit.>.) We rescaled the distribution to match our measured covariance matrix and transported it to the wire scanners using the same transfer matrices as in the experiment. Fig. <ref> shows the test results. The reconstructed two-dimensional marginal distributions agree with the true two-dimensional distributions in both the linear correlations (as expected) and nonlinear correlations between planes. This test gives us confidence that MENT generates a reliable inference from the data but also highlights the need for robust uncertainty quantification techniques in real experiments.
§ CONCLUSION
We have demonstrated four-dimensional phase space tomography using one-dimensional measurements in a high-power hadron ring. We performed the reconstruction using the MENT algorithm with a Gaussian prior defined by the best-fit covariance matrix. The reconstructed distribution reproduces the measured profiles with the same dynamic range as the measurement devices. Test reconstructions with a known ground truth indicate a reasonably small reconstruction uncertainty.
An immediate use for SNS operations is to predict the two-dimensional beam density on the mercury spallation target <cit.> or planned Second Target Station <cit.>. A straightforward extension in the SNS is to measure the four-dimensional density as a function of time by extracting the beam on different turns during injection. Such measurements could serve as benchmarks for computer simulations of intense, coupled beam dynamics in hadron rings. Finally, an electron scanner <cit.> could provide turn-by-turn profiles in the ring; it is unclear if one could use this data to infer the two- or four-dimensional phase space density.
§ ACKNOWLEDGMENTS
I would like to thank Wim Blockland (ORNL) for troubleshooting the RTBT wire scanners, the SNS accelerator operators for enabling these studies, and Andrei Shishlo (ORNL) for carefully reading this manuscript.
This manuscript has been authored by UT Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).
§ RECONSTRUCTION METHOD
We aim to infer the probability density function f(v) from a set of one-dimensional projections, where v = (v_1, … , v_N) is the phase space coordinate vector. (In four-dimensional phase space, v = (x, x', y, y'), where x and y are the positions and x' and y' are the momenta.) We assume the ith projection is measured after a linear transformation
w_i = M_i v,
where M_i is an N × N matrix representing the accelerator transport between the reconstruction point and the measurement device. We measure the one-dimensional projected density g_i(w_ij), where w_ij is the jth dimension of w_i. Thus, the distribution must satisfy the following constraints:
g_i(w_ij) - ∫ f(v(w_i)) ∏_k ≠ j dw_ik = 0
Entropy maximization (MaxEnt) provides a logically consistent way to incorporate prior information in the reconstruction and identify a unique solution from the many that satisfy Eq. (<ref>). MaxEnt updates a prior distribution f_*(v) to a posterior by maximizing the relative entropy
S[f(v), f_*(v)] = - ∫ f(v) log( f(v)/f_*(v) ) dv,
subject to Eq. (<ref>). The method of Lagrange multipliers leads to the following form of the posterior:
f(v) = f_*(v) ∏_iexp ( λ_i(w_ij) ),
where λ_i(w_ij) are unknown Lagrange multiplier functions. The MENT algorithm uses a nonlinear Gauss-Seidel relaxation method to find λ_i such that the distribution in Eq. (<ref>) satisfies the constraints in Eq. (<ref>). We use a sampling-based method to implement the Gauss-Seidel iterations; this will be reported in a separate paper.
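To illustrate the flavor of the Gauss-Seidel iteration, the sketch below implements a simplified grid-based MENT update for a two-dimensional phase space with one-dimensional projections taken after pure rotations. This is only a toy version of the scheme (the implementation used in this work is sampling-based), and all names are hypothetical.

```python
import numpy as np

def ment_2d(profiles, angles, grid, iters=5):
    """Toy grid-based MENT: multiplicative Gauss-Seidel updates of f.

    profiles: measured 1D densities binned on 'grid' (same normalization
    as the discretized density); angles: rotation angle of each view.
    """
    X, Xp = np.meshgrid(grid, grid, indexing="ij")
    pts = np.stack([X.ravel(), Xp.ravel()])           # (2, n^2) phase space points
    f = np.exp(-0.5 * (pts**2).sum(axis=0))           # Gaussian prior on the grid
    for _ in range(iters):
        for g_meas, mu in zip(profiles, angles):      # Gauss-Seidel sweep over views
            g_meas = np.asarray(g_meas, dtype=float)
            w = np.cos(mu) * pts[0] + np.sin(mu) * pts[1]   # measurement-axis coordinate
            idx = np.clip(np.digitize(w, grid) - 1, 0, len(grid) - 1)
            g_sim = np.bincount(idx, weights=f, minlength=len(grid))
            ratio = np.divide(g_meas, g_sim, out=np.ones_like(g_sim), where=g_sim > 0)
            f *= ratio[idx]                           # update of the exp(lambda_i) factor
    return f.reshape(len(grid), len(grid))
```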
An advantage of the MENT approach is its incorporation of external information through f_*(v). We note that the N × N covariance matrix Σ = ⟨vv^T⟩ is overdetermined by the measured beam profiles. Thus, we assume a Gaussian prior of the form
f_*(v) = [ 1/( (2π)^N |Σ̂| ) ]^1/2 exp( -1/2 v^T Σ̂^-1 v ),
where Σ̂ is the best-fit covariance matrix determined by the following procedure. Under the symplectic linear transformation M_i, Σ transforms as
Σ→𝐌_i Σ𝐌_i^T.
Eq. (<ref>) linearly relates the measured rms beam size η_i = ⟨ w_ij^2 ⟩ to the lower-triangular elements σ of the initial covariance matrix. Stacking the measurements into a single vector η = (η_1, η_2, …) gives the following system of equations:
Aσ = η.
We generate a solution σ̂ (and therefore Σ̂) to Eq. (<ref>) using ordinary linear least squares (LLSQ) <cit.>:
σ̂ = (A^TA)^-1A^T η.
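As a concrete illustration of this fit, the sketch below assembles A row by row from the transfer matrices and measurement axes and solves for σ̂ with NumPy; the helper names, the projection-vector convention for the diagonal wire, and the use of lstsq are assumptions of this sketch rather than details of the experiment's code.

```python
import numpy as np

LT = [(p, q) for p in range(4) for q in range(p + 1)]   # lower-triangle ordering of sigma

def rms_row(M, axis):
    """Row of A relating sigma to <w^2> measured along 'axis' after matrix M.

    Example axes: x wire [1, 0, 0, 0]; y wire [0, 0, 1, 0];
    diagonal wire [1, 0, 1, 0] / sqrt(2) (convention assumed here).
    """
    r = np.asarray(axis) @ M
    return np.array([r[p] * r[q] * (1.0 if p == q else 2.0) for p, q in LT])

def fit_covariance(transfer_matrices, axes, eta):
    """Least-squares covariance fit from measured mean-square beam sizes eta."""
    A = np.array([rms_row(M, ax) for M, ax in zip(transfer_matrices, axes)])
    sigma, *_ = np.linalg.lstsq(A, np.asarray(eta, dtype=float), rcond=None)
    S = np.zeros((4, 4))
    for s, (p, q) in zip(sigma, LT):
        S[p, q] = S[q, p] = s
    return S
```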
With the LLSQ fit in hand, we may run the MENT algorithm to reconstruct the beam distribution in normalized phase space coordinates,
z = T^-1v,
such that ⟨zz^T ⟩ = I. When reconstructing in normalized two-dimensional phase space, the projection angles in the x-x' plane are equivalent to the betatron phase advances; this is advantageous because the phase advances between measurement locations are often approximately equal <cit.>. In practice, reconstructing in normalized space involves only appending the unnormalizing matrix T to each transfer matrix:
M_i →M_i T.
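One concrete choice for T, assumed here for illustration, is a Cholesky factor of the fitted covariance matrix, since Σ̂ = TT^T implies ⟨zz^T⟩ = I for z = T^-1v:

```python
import numpy as np

def unnormalizing_matrix(Sigma):
    """Return T with Sigma = T @ T.T, so z = inv(T) @ v has unit covariance."""
    return np.linalg.cholesky(Sigma)

# usage: M_normalized = [M @ unnormalizing_matrix(Sigma_hat) for M in transfer_matrices]
```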
|
http://arxiv.org/abs/2409.03507v1 | 20240905132010 | A Physics-Informed Machine Learning Approach for Solving Distributed Order Fractional Differential Equations | [
"Alireza Afzal Aghaei"
] | cs.LG | [
"cs.LG",
"cs.NA",
"math.NA"
] |
[email protected]
[cor1]Corresponding author
[fn1]Independent researcher, Isfahan, Iran
§ ABSTRACT
This paper introduces a novel methodology for solving distributed-order fractional differential equations using a physics-informed machine learning framework. The core of this approach involves extending the support vector regression (SVR) algorithm to approximate the unknown solutions of the governing equations during the training phase. By embedding the distributed-order functional equation into the SVR framework, we incorporate physical laws directly into the learning process. To further enhance computational efficiency, Gegenbauer orthogonal polynomials are employed as the kernel function, capitalizing on their fractional differentiation properties to streamline the problem formulation. Finally, the resulting optimization problem of SVR is addressed either as a quadratic programming problem or as a positive definite system in its dual form. The effectiveness of the proposed approach is validated through a series of numerical experiments on Caputo-based distributed-order fractional differential equations, encompassing both ordinary and partial derivatives.
Distributed Order Differential Equations · Fractional Calculus · Least-Squares Support Vector Regression · Gegenbauer Polynomials
§ INTRODUCTION
Machine learning, a critical branch of artificial intelligence (AI), is transforming modern research and industry by enabling data-driven decision-making and predictive analytics. Central to machine learning is regression analysis, a fundamental tool that models the relationship between variables <cit.>. This technique is essential not only for making predictions but also for uncovering underlying patterns within data. Through methods ranging from simple linear regression to more sophisticated approaches like regularization and kernel-based techniques, regression provides the foundation for both straightforward and complex data modeling tasks.
Physics-informed machine learning (PIML) is an emerging field that integrates physical laws and principles into machine learning frameworks to enhance model accuracy and reliability, particularly in scientific and engineering applications <cit.>. In PIML, regression plays a key role in approximating the solutions to both forward and inverse problems. Forward problems involve predicting system behavior based on known inputs, while inverse problems aim to infer unknown inputs from observed outputs <cit.>. Regression techniques are employed to approximate the solution space, ensuring that the model adheres to known physical laws, such as the conservation of energy or mass. By embedding these constraints, PIML models can achieve higher accuracy and robustness, making them especially valuable in scenarios where data is sparse or noisy, and where traditional numerical methods might struggle. This approach is particularly useful in fields like fluid dynamics, material science, and climate modeling, where the interplay between data-driven insights and physical principles is crucial for making reliable predictions <cit.>.
In tackling physics-informed machine learning tasks, various machine learning regression techniques, such as Extreme Learning Machines (ELM), Support Vector Regression (SVR), and neural networks, are employed to model complex systems <cit.>. ELMs are known for their speed and efficiency in training, as they utilize a random selection of hidden nodes and require no iterative tuning. Neural networks, with their ability to learn complex, non-linear relationships, are powerful but often require significant computational resources and large datasets to achieve high accuracy. However, among these techniques, SVR stands out due to its unique property of maximizing the margin of the data, which leads to better generalization and higher accuracy in solving problems, particularly in PIML tasks. SVR's ability to maintain high precision even with small datasets makes it highly effective for regression tasks in PIML, where data might be limited or noisy <cit.>.
SVR has demonstrated exceptional accuracy in solving regression tasks, prompting the development of various extensions aimed at enhancing its performance and applicability. Notable among these are Twin Support Vector Machines, Fuzzy Support Vector Machines, and Least Squares Support Vector Regression (LSSVR). The LSSVR algorithm, in particular, replaces the conventional loss function with a squared loss, simplifying the mathematical formulation and significantly speeding up the training process. This efficiency makes LSSVR especially suitable for large-scale and complex problems where computational resources are a concern. Recently, LSSVR has gained considerable attention for its ability to solve forward physics-informed mathematical problems, including ordinary and partial differential equations, integral equations, fractional differential and integro-differential equations, delay problems, and differential algebraic equations. The timeline in <ref> provides a comprehensive overview of the evolution and application of the LSSVR method across these domains, illustrating its growing prominence in the field.
In recent years, as reflected in the table, the focus of research has increasingly shifted toward fractional problems, highlighting the growing importance of fractional calculus—a branch of calculus that extends the concept of derivatives and integrals to non-integer orders. Fractional differential operators, such as the Caputo, Riemann-Liouville, and Grünwald-Letnikov derivatives, are at the forefront of this research due to their ability to model memory and hereditary properties inherent in many physical systems <cit.>. Among these, the Caputo derivative is particularly notable for its application in initial value problems, as it allows for the inclusion of traditional boundary conditions, making it more suitable for real-world applications <cit.>.
A review of the literature reveals that Caputo-based fractional differential equations (FDEs) have been extensively studied for their use in modeling viscoelastic materials, anomalous diffusion, and complex dynamic systems <cit.>. Variable order and distributed order fractional differential equations are two important generalizations of fractional differential equations that have gained increasing attention in recent years <cit.>. Variable order fractional differential equations (VOFDEs) involve fractional derivatives or integrals whose orders are functions of time, space, or other variables, rather than constants. Meanwhile, distributed order fractional differential equations (DOFDEs) involve an integration of fractional derivatives over a range of orders. These generalizations allow for more flexible modeling of complex systems with memory effects that vary over time or involve multiple scales. VOFDEs can capture phenomena where the memory effect changes with time or other variables, while DOFDEs can model systems with a continuous spectrum of time scales or memory effects <cit.>.
As the application of FDEs has expanded, solving these complex problems has garnered significant attention. Given that analytical solutions are often intractable, researchers have developed various efficient numerical methods to address these challenges. These include spectral methods, meshless methods, and finite difference schemes, each tailored to efficiently and accurately solve advanced FDEs <cit.>. Meshless methods, for instance, utilize radial basis functions to approximate solutions without the need for a predefined mesh, making them particularly flexible for complex geometries <cit.>. Spectral methods, on the other hand, rely on orthogonal polynomials such as Chebyshev polynomials, Legendre polynomials, and Jacobi polynomials, which are well-suited for problems with smooth solutions and often lead to more accurate results due to their exponential convergence properties <cit.>. In the context of machine learning, these orthogonal polynomials serve as basis functions, analogous to the concept of a feature map, which is used to transform input data into a higher-dimensional space. This transformation is integral to the function of kernel methods in machine learning, particularly in SVR. The kernel function, central to SVR, computes the inner product of these basis functions (or feature maps) in the transformed space, enabling the formulation of the regression problem in a way that captures complex, nonlinear relationships. This approach not only facilitates the handling of high-dimensional data but also enhances the accuracy and generalization capability of the model, making kernel functions a powerful tool in both numerical analysis of FDEs and machine learning applications <cit.>.
In this paper, we propose the use of Gegenbauer polynomials as the kernel function within the physics-informed machine learning form of the LSSVR framework for solving forward forms of distributed-order fractional differential equations. Gegenbauer polynomials generalize Legendre and Chebyshev polynomials and possess unique properties that make them well-suited for approximating solutions to FDEs. By leveraging the fractional differentiation properties of these polynomials, our method simplifies the problem formulation and improves computational efficiency, offering a robust and accurate solution to the complexities associated with distributed-order fractional differential equations. Specifically, our contribution is as follows:
* Deriving an LS-SVR method for distributed-order fractional differential equations.
* Employing Gegenbauer polynomials as the kernel function in LSSVR.
* Simulating one-dimensional DOFDEs using the proposed approach.
* Solving DOFDEs with partial derivatives using the developed framework.
* Performing hyperparameter tuning and sensitivity analysis on the Gegenbauer parameter during the numerical solution process.
The remainder of the article is structured as follows: Section 2 covers the prerequisites related to this study. In Section 3, we derive the LS-SVR approach for DOFDEs. Section 4 presents examples, numerical results, comparison tables, and figures related to these results. Finally, in Section 5, we discuss the significant impact of the proposed method on achieving high accuracy in solving these problems.
A timeline of the LS-SVR method for solving various types of functional equations, such as Fractional, Partial Differential Equations (PDE), Ordinary Differential Equations (ODE), Systems of Ordinary Differential Equations (Sys. ODE), Systems of Integral Equations (Sys. IE), Integral Equations (IE), Volterra Integral Equations (VIE), Volterra-Fredholm Integral Equations (VFIE), Fredholm Integral Equations (FIE), Inverse Partial Differential Equations (Inv. PDE), Stochastic Differential Equations (SDE), Volterra Integro-Differential Equations (VIDE), Fractional Integro-Differential Equations (FIDE), Delay Differential Equations (DDE), Differential-Algebraic Equations (DAE), Fractional Differential-Algebraic Equations (Frac. DAE), Fractional Ordinary Differential Equations (Frac ODE), and Systems of Fractional Differential Equations (Sys. FDE).

Year Problem type Domain Kernel
2010 PDE Finite Standard Polynomial
2011 FIE Finite RBF
2012 ODE Finite RBF
2012 ODE Finite RBF
2012 VIE Finite RBF
2012 DAE Finite RBF
2013 ODE Finite RBF
2013 DDE Finite RBF
2014 Sys. ODE Finite RBF
2015 PDE Finite RBF
2016 PDE Finite PDE
2017 PDE Finite RBF
2018 PDE Finite Wavelet
2018 PDE Finite RBF
2018 PDE Finite RBF
2019 ODE/PDE Finite RBF
2019 PDE Finite Finite-Elements/RBF
2019 PDE Finite Wavelet
2019 ODE Finite RBF
2019 Inv. PDE Finite RBF
2020 ODE Finite RBF
2020 ODE Semi-Infinite Rational Gegenbauer
2021 FIE Finite Legendre
2021 Frac. VIDE Semi-Infinite Fractional Rational Legendre
2021 ODE Infinite Hermite
2021 VFIE Finite Legendre
2021 PDE Bounded RBF
2021 Frac. PDE Semi-infinite Laguerre
2022 Inv. PDE Finite Chebyshev
2022 ODE Semi-infinite Hermite
2022 ODE Finite Legendre
2022 SDE Finite Wavelet
2022 Frac. PDE Finite Bernstein
2022 VIE Finite Legendre
2023 DOFDE Finite Legendre
2023 PDE Finite Legendre
2023 ODE Semi-infinite Laguerre
2023 DDE Finite Legendre
2023 ODE Semi-infinite Fractional Rational Jacobi
2023 Sys. ODE Finite RBF
2023 Sys. IE Finite Legendre
2023 ODE Semi-infinite Rational Legendre
2023 ODE Finite Gegenbauer
2023 ODE Finite Genocchi wavelet
2023 Sys. FDE Finite Wavelet
2024 FIDE Finite Standard polynomial
2024 Frac. PDE Finite RBF
2024 VFIE Finite Standard Polynomial
2024 ODE Semi-Infinite Fractional Rational Chebyshev
2024 PDE Finite Bernstein
2024 Inv. PDE Finite Bernstein
2024 Frac ODE Finite Wavelet
2024 Frac. PDE Finite Legendre
2024 Frac. DAE Finite Legendre
§ BACKGROUND
In this section, we provide the necessary mathematical background for the subsequent sections, where we present our approach.
§.§ Gegenbauer polynomials
Gegenbauer polynomials, also known as ultraspherical polynomials, are a class of orthogonal polynomials that generalize the Legendre and Chebyshev polynomials. They are widely used in mathematical physics, particularly in the study of spherical harmonics and solutions to the Laplace equation in higher dimensions <cit.>. Gegenbauer polynomials have significant applications in mathematical physics, especially in problems involving spherical symmetry <cit.>. For example, they appear in the expansion of Green's function of the Laplace equation in spherical coordinates, as well as in the solution of the Helmholtz equation. The Gegenbauer polynomials C_n^(λ)(t) are expressed in terms of the hypergeometric function as:
C_n^(λ)(t) = (2λ)_n/n! _2F_1(-n, 2λ + n; λ + 1/2; 1-t/2),
where (a)_n denotes the Pochhammer symbol, representing the rising factorial. They can also be generated using a three-term recurrence relation:
(n+1) C_n+1^(λ)(t) = 2t(n+λ) C_n^(λ)(t) - (n+2λ-1) C_n-1^(λ)(t),
C_0^(λ)(t) = 1, C_1^(λ)(t) = 2λ t,
or an explicit formula:
C_n^(λ )(t)=∑ _k=0^⌊ n/2⌋(-1)^kΓ (n-k+λ )/Γ (λ )k!(n-2k)!(2t)^n-2k.
Using this formula, the first few Gegenbauer polynomials can be obtained as:
C_0^(λ)(t) = 1,
C_1^(λ)(t) = 2λ t,
C_2^(λ)(t) = (2 λ t^2+2 t^2-1) λ ,
C_3^(λ)(t) = 1/3 4 t λ(λ t^2+2 t^2-3/2) (λ +1),
C_4^(λ)(t) = 1/3 2 λ(3/4+(λ^2+5 λ +6) t^4+(-3 λ -6) t^2) (λ +1).
Gegenbauer polynomials are orthogonal with respect to the weight function (1-t^2)^λ-1/2 on the interval [-1, 1] for λ > -1/2. Specifically, they satisfy the following orthogonality relation:
∫_-1^1 (1-t^2)^λ-1/2 C_m^(λ)(t) C_n^(λ)(t) dt =
0 if m ≠ n,
π 2^1-2λΓ(n+2λ)/n!(n+λ)Γ(λ)^2 if m = n,
where Γ(λ) is the Gamma function given by Γ(z) = ∫_0^∞ t^z-1 e^-t dt. This orthogonality can be shifted to any finite domain [a,b] by applying an affine transformation of the form μ(t) = 2t-a-b/b-a to the input of the Gegenbauer polynomials, i.e. G^(λ)(t) = C^(λ)(μ(t)).
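As a quick numerical check of the recurrence and the affine shift, the following Python sketch evaluates the basis and compares it against SciPy's eval_gegenbauer; the helper names are illustrative.

```python
import numpy as np
from scipy.special import eval_gegenbauer

def gegenbauer_basis(t, d, lam):
    """Evaluate C_0^(lam)(t), ..., C_{d-1}^(lam)(t) via the three-term recurrence."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    G = np.zeros((d, t.size))
    G[0] = 1.0
    if d > 1:
        G[1] = 2.0 * lam * t
    for n in range(1, d - 1):
        G[n + 1] = (2.0 * t * (n + lam) * G[n] - (n + 2.0 * lam - 1.0) * G[n - 1]) / (n + 1)
    return G

def shifted_basis(t, d, lam, a=0.0, b=1.0):
    """Shifted basis G^(lam) on [a, b] using mu(t) = (2t - a - b)/(b - a)."""
    return gegenbauer_basis((2.0 * np.asarray(t) - a - b) / (b - a), d, lam)

t = np.linspace(-1.0, 1.0, 7)
assert np.allclose(gegenbauer_basis(t, 5, 0.7)[4], eval_gegenbauer(4, 0.7, t))
```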
The derivatives of Gegenbauer polynomials with respect to t are given by:
d/dt C_n^(λ)(t) = 2λ C_n-1^(λ+1)(t).
This relation can be used to derive further properties of the polynomials, particularly in applications involving ordinal, partial, and fractional differential equations. Additionally, the derivatives with respect to the parameter λ are also of interest:
∂/∂λ C_n^(λ)(t) = ∑_k=0^n-1 C_k^(λ)(t) 1/λ+k.
§.§ Fractional Calculus
Fractional calculus is a generalization of classical calculus to non-integer orders of differentiation and integration. It extends the concept of derivatives and integrals to arbitrary (real or complex) orders, providing powerful tools for modeling processes with memory and hereditary properties. In the following, we recall some well-known definitions and formulas that will be used in the methodology.
The Riemann-Liouville integral is a fundamental concept in fractional calculus, defined for a function u(t) and a real number α > 0 as follows:
I_t^α u(t) = 1/Γ(α)∫_0^t (t - τ)^α - 1 u(τ) dτ,
where Γ(α) denotes the Gamma function. Similar to the integral operator, the Riemann-Liouville fractional derivative for α > 0 is defined as:
^RLD_t^α u(t) = 1/Γ(n - α)d^n/dt^n∫_0^t (t - τ)^n - α - 1 u(τ) dτ,
where n = ⌈α⌉ is the smallest integer greater than or equal to α. This derivative generalizes the classical derivative to non-integer orders <cit.>. However, this definition introduces complexities when modeling the initial values of differential equations. Therefore, we use the Caputo derivative definition, which facilitates a more straightforward interpretation of initial value problems. This derivative is defined as:
^C D_t^α u(t) = d^α/dt^αu(t) = 1/Γ(n - α)∫_0^t (t - τ)^n - α - 1d^n/dτ^n u(τ) dτ,
where n = ⌈α⌉. The Caputo fractional derivative has several important properties:
* Linearity: For functions u(t) and v(t), and constants a and b,
^C D_t^α[ a u(t) + b v(t) ] = a ^C D_t^α u(t) + b ^C D_t^α v(t).
* Initial Conditions: The Caputo derivative of a constant is zero:
^C D_t^α c = 0 for c ∈ℝ.
* Derivative of Polynomials: For n ≥⌈α⌉, the Caputo derivative of the monomial u(t) = t^n is given by:
^C D_t^α t^n = Γ(n+1)/Γ(n-α+1) t^n-α,
while ^C D_t^α t^n = 0 for nonnegative integers n < ⌈α⌉ (a short code sketch follows this list).
* Special Cases: When α is an integer, the Caputo derivative reduces to the classical derivative:
^C D_t^n u(t) = d^n u(t)/dt^n.
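A minimal sketch of the monomial property, which is the computational workhorse of the method developed in the next section (the helper name is illustrative):

```python
from math import gamma, ceil

def caputo_monomial(n, alpha, t):
    """Caputo derivative of order alpha > 0 of u(t) = t^n for integer n >= 0."""
    if n < ceil(alpha):                 # constants and low-degree terms vanish
        return 0.0
    return gamma(n + 1) / gamma(n - alpha + 1) * t ** (n - alpha)

# by linearity, D^alpha of sum_n c_n t^n is sum_n c_n * caputo_monomial(n, alpha, t)
```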
§ THE PROPOSED APPROACH
In this section, we consider the following distributed-order fractional differential equation:
ψ[t, u(t), ^CD^η u(t)] = ρ(t) + ∫_a^bϕ(θ) d^θ/dt^θ u(t) dθ,
where ψ(·), ϕ(·), and ρ(·) are known functions, a, b ∈ℝ are the bounds of the integration, and u(t) is the unknown function. To approximate the solution to this problem, we consider a linear combination of some unknown weights and Gegenbauer polynomials:
û(t) = ∑_i=0^d-1𝐰_i G_i^(λ)(t),
where d is the number of basis functions, 𝐰_i are the unknown weights, and G_i^(λ)(t) are the shifted Gegenbauer polynomials. In the case of a partial DOFDE with two independent variables, this expansion takes the form:
û(x,t) = ∑_i=0^d_x-1∑_j=0^d_t-1𝐰_i,j G_i^(λ)(x) G_j^(λ)(t).
However, by vectorizing the weight matrix 𝐰∈ℝ^d_x× d_t and arranging the shifted Gegenbauer polynomials accordingly, one can rewrite the two-dimensional approximation in the form of Equation (<ref>). In either case, we formulate the following optimization problem using the Least-Squares Support Vector Regression framework <cit.>:
min_w,e 1/2𝐰^T 𝐰 + γ/2𝐞^T 𝐞
subject to ψ[t_i, û(t_i), d^η/dt^ηû(t_i)] - ρ(t_i)
- ∫_a^bϕ(θ) d^θ/dt^θû(t_i) dθ = 𝐞_i, i = 1, …, N,
where N is the number of training points, γ is a regularization parameter, and 𝐞_i represents the residual error terms.
In numerical simulations, evaluating the integral analytically can be challenging. To mitigate this issue, we first approximate the integral using a Gauss-Legendre quadrature of order Q, which converts the integral into a finite summation:
∫_a^bϕ(θ) d^θ/dt^θû(t_i) dθ≈b-a/2∑_j=0^Qω_j ϕ(θ̂_j) d^θ̂_j/dt^θ̂_jû(t_i),
where the nodes θ̂_j are given by:
θ̂_j = b-a/2θ_j + a+b/2,
in which θ_j are the roots of the Legendre polynomial P_Q (the λ = 1/2 member of the Gegenbauer family) and ω_j are the Gauss-Legendre weights corresponding to the nodes θ_j, given by:
ω_j =2/[(1-θ_j^2)[P'_Q(θ_j)]^2].
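The quadrature step is standard; the following sketch (ours) obtains the shifted nodes and weights from numpy and applies them to a generic integrand.

import numpy as np

def distributed_order_quadrature(func, a, b, Q):
    # Gauss-Legendre nodes/weights on [-1, 1], shifted affinely to [a, b].
    theta, w = np.polynomial.legendre.leggauss(Q)
    theta_hat = 0.5 * (b - a) * theta + 0.5 * (a + b)
    return 0.5 * (b - a) * np.sum(w * func(theta_hat))

# Example: int_0^1 6*theta*(1-theta) dtheta = 1 (the weight function of the
# second test problem considered later in this paper)
print(distributed_order_quadrature(lambda th: 6 * th * (1 - th), 0.0, 1.0, 10))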
Employing this technique helps us reduce the computational complexity of evaluating an analytical integral to a finite summation. Moreover, this method allows us to utilize the Caputo fractional derivative property of polynomials, as given in Equation (<ref>), along with the linearity of the approximation:
d^θ̂_j/dt^θ̂_jû(t) = ∑_i=0^d-1 w_i d^θ̂_j/dt^θ̂_j C_i^(λ)(t)
=∑_i=0^d-1 w_i d^θ̂_j/dt^θ̂_j[∑ _k=0^⌊ i/2⌋(-1)^kΓ (i-k+λ )/Γ (λ )k!(i-2k)!(2t)^i-2k]
= ∑_i=0^d-1 w_i ∑ _k=0^⌊ i/2⌋(-1)^kΓ (i-k+λ )/Γ (λ )k!(i-2k)!d^θ̂_j/dt^θ̂_j[(2t)^i-2k]
= ∑_i=0^d-1 w_i ∑ _k=0^⌊ i/2⌋(-1)^k 2^i-2kΓ (i-k+λ )/Γ (λ )k!(i-2k)!Γ(i-2k+1)/Γ(i-2k-θ̂_j+1) t^i-2k-θ̂_j,
which facilitates fast computation of the derivatives of the unknown function; a short sketch of this evaluation is given below.
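The following Python sketch (ours) evaluates this term-by-term expression for a single Gegenbauer basis function, assuming a fractional order 0 < θ̂_j < 1.

from math import gamma

def caputo_gegenbauer(i, lam, theta, t):
    # Caputo derivative of order theta of C_i^(lam)(t), term by term from
    # the explicit monomial expansion; the constant term (p = 0) drops out.
    total = 0.0
    for k in range(i // 2 + 1):
        p = i - 2 * k
        if p == 0:
            continue
        coeff = ((-1) ** k * 2 ** p * gamma(i - k + lam)
                 / (gamma(lam) * gamma(k + 1) * gamma(p + 1)))
        total += coeff * gamma(p + 1) / gamma(p - theta + 1) * t ** (p - theta)
    return total

print(caputo_gegenbauer(3, 0.5, 0.6, 0.8))

Combining all these, the optimization problem can be reformulated as: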
min_w 1/2w^T w + γ/2e^T e,
subject to:
ψ[t_i, û(t_i), d^η/dt^ηû(t_i)] - ρ(t_i) - b-a/2∑_j=0^Qω_j ϕ(θ̂_j) d^θ̂_j/dt^θ̂_jû(t_i) = e_i, i = 1, …, N,
for N training points. This quadratic optimization problem can be reformulated in the dual space. To achieve this, we first construct the Lagrangian function:
𝔏(w, e, β) = 1/2w^T w + γ/2e^T e
+ ∑_i=1^Nβ_i (ψ[t_i, û(t_i), d^η/dt^ηû(t_i)] - ρ(t_i) - b-a/2∑_j=0^Qω_j ϕ(θ̂_j) d^θ̂_j/dt^θ̂_jû(t_i) - e_i),
where β_i are the Lagrange multipliers associated with the constraints. Next, we derive the Karush-Kuhn-Tucker (K.K.T.) conditions, which yield:
[Z Z^T + 1/γI] β = ρ,
where ρ_i = ρ(t_i) and the elements of the matrix Z are defined as:
Z_i,j = ψ[t_i, û(t_i), d^η/dt^ηû(t_i)] - ρ(t_i) - b-a/2∑_k=0^Qω_k ϕ(θ̂_k) d^θ̂_k/dt^θ̂_k G_j^(λ)(t_i),
with the quadrature points θ̂_k and weights ω_k determined by the Gauss-Legendre quadrature method. By solving this positive-definite system of equations, the Lagrange multipliers β are fixed in the dual space, and the approximation can then be expressed as û = β^T Z 𝐆(t), which gives the solution in terms of the Gegenbauer kernel function:
û(t) = ∑_i=1^N β_i ℒK(t, t_i),
where ℒ is the given problem in operator form and K(t,t_i) is defined as
K(t,t_i) = ∑_j=0^d-1 G^(λ)_j(t) G^(λ)_j(t_i).
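To make the assembly of the dual system concrete, the following minimal sketch (ours) treats a linear stand-in operator ℒ = d/dt + 1 with a Chebyshev stand-in basis; for the DOFDEs of this work, ℒ would instead contain the Gauss-Legendre sum over Caputo derivatives described above.

import numpy as np
from numpy.polynomial.chebyshev import Chebyshev  # stand-in for the shifted Gegenbauer basis

d, N, gam = 8, 8, 1e12
t = np.linspace(0.1, 1.0, N)                      # training points

def L_on_basis(j, x):
    b = Chebyshev.basis(j)
    return b.deriv()(x) + b(x)                    # (d/dt + 1) applied to basis j at x

rho = 2.0 * t + t**2                              # rhs such that u(t) = t^2 solves L u = rho
Z = np.array([[L_on_basis(j, ti) for j in range(d)] for ti in t])  # N x d collocation matrix
beta = np.linalg.solve(Z @ Z.T + np.eye(N) / gam, rho)             # dual variables
w = Z.T @ beta                                    # primal weights
u_hat = lambda x: sum(w[j] * Chebyshev.basis(j)(x) for j in range(d))
print(u_hat(0.5))                                 # ~0.25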
§ NUMERICAL EXAMPLES
In this section, we simulate some DOFDEs using the proposed approach. The problems are chosen to ensure their analytical solutions cover various function spaces, including polynomials, fractional functions, and one example with no known exact solution. All experiments are implemented using Maple Mathematical Software and run on a personal computer with an Intel Core i3-10100F processor and 16 GB of RAM.
§.§ Ordinary DOFDEs
In this section, we examine two DOFDEs in one-dimensional space. For all of the problems in this section, we use a Gauss-Legendre quadrature of order Q=10.
The following distributed differential equation problem, with the analytical solution u(t) = t^2 and initial conditions u(0) = u'(0) = 0, is discussed in <cit.>. The problem is given by:
∫_0.2^1.5Γ(3-θ) ^CD^θ u(t) dθ = 2 (t^1.8 - t^0.5)/ln t,
where ^CD^θ denotes the Caputo fractional derivative of u(t) with respect to t, and Γ represents the Gamma function. We solve this problem using the proposed LSSVR approach in the domain t ∈ [0.2,1.5] with d=4 and N=4. To determine suitable values for the hyperparameters, we conducted an analysis following the approach outlined in <cit.>. A random search algorithm was employed to optimize λ, aiming to minimize the residual error. The results of this sensitivity analysis are presented in Figure <ref>, which indicates that λ has minimal influence on the solution. Therefore, we select a simple value, such as 0 or 1/2, for ease of formulation. The simulation results with this hyperparameter choice are displayed in Figure <ref>, demonstrating excellent accuracy, particularly because the exact solution is a polynomial, consistent with the basis functions used.
For the second example, we consider the following DOFDE <cit.>:
∫_0^1 6θ (1-θ) D^θ u(t) dθ + 1/10u(t) = 0,
with the initial condition u(0) = 1. This problem does not have an exact solution in the time-domain space. Therefore, after simulating this problem with γ = 10^12 and t∈[0,1], we report the simulated results for different numbers of basis functions in Table <ref>. The residual function of the obtained solution, along with the learned approximation with d = N = 20, is shown in Figure <ref>. The simulated results show good agreement with previous works <cit.>.
§.§ Partial DOFDEs
In this section, we consider DOFDEs whose unknown solutions depend on two independent variables. For all of the problems in this section, we use a Gauss-Legendre quadrature of order Q=7.
Consider a distributed fractional partial differential equation in the form:
∫_0^1Γ(3-θ) ∂^θ u/∂ t^θ(x, t) dθ = ∂^2 u/∂ x^2(x, t) + 2 t^2 + 2 t x (t-1)(2-x)/ln t,
with initial and boundary conditions:
u(x, 0) = 0, 0 < x < 2,
u(0, t) = u(2, t) = 0, 0 < t ≤ 1.
This problem has the analytical solution u(x, t) = t^2 x (2-x) <cit.>. We simulate this problem using d_x = d_t = 3 basis functions and 9 training points in the problem domain, which are obtained from the roots of the basis functions. The simulation result for this problem is depicted in Figure <ref>.
For the final experiment, we consider the following partial DOFDE <cit.>:
∫_0^1Γ(3.5-θ) ∂^θ u/∂ t^θ(x, t) dθ = ∂^2 u/∂ x^2(x, t) + u(x,t)^2 + 15 √(π)(t-1) t^3/2/[8 ln(t)] x (x-1) - 2t^5/2 - t^5x^2(x-1)^2,
with the exact solution u(x,t) = t^2 √(t) x (x - 1), which yields the initial and boundary conditions:
u(x, 0) = 0, 0 < x < 1,
u(0, t) = u(1, t) = 0, 0 < t ≤ 1.
Using the proposed approach to solve this problem, we employed d_x=3, d_t=15 with N=45 training points. The approximated solution is depicted in Figure <ref>. Table <ref> also reports the approximated solution at specific points in the problem domain.
§ CONCLUSION
In this study, we developed a physics-informed machine learning approach for numerically solving distributed-order fractional differential equations. We specifically tailored the Least Squares Support Vector Regression algorithm to capture the intricate dynamics of these problems. By employing Gegenbauer polynomials as the kernel function, the LSSVR was optimized to deliver precise predictions of the unknown solutions. We also demonstrated the effectiveness of the Gaussian quadrature for approximating the integral components of the equations and leveraged the properties of the Caputo derivative to enhance computational efficiency.
Our numerical experiments included two ordinary and two partial DOFDEs, where the proposed framework successfully approximated their solutions. For problems with known exact solutions, our method showed high accuracy, while for problems without exact solutions, we provided the simulated results obtained from LSSVR which are in good agreement with previous works. Future research could explore the use of alternative kernel functions or generalize the approach with fractional basis functions. Additionally, integrating advanced hyperparameter optimization techniques to fine-tune model parameters presents a promising direction for further study.
|
http://arxiv.org/abs/2409.02210v1 | 20240903182355 | Scalar radiation zeros at the LHC | [
"Christoph Englert",
"Andrei Lazanu",
"Peter Millington"
] | hep-ph | [
"hep-ph",
"hep-ex"
] |
Scalar radiation zeros at the LHC
Christoph Englert, Andrei Lazanu, Peter Millington
September 2024
======================================================================================================================
§ INTRODUCTION
New physics searches at the Large Hadron Collider (LHC) are well underway, but have so far not revealed any concrete signs for physics beyond the Standard Model (BSM). Interpreted from a perspective of large mass gaps between the Standard Model (SM) spectrum and states of its ultraviolet (UV) completion, effective field theory (EFT) methods have seen increasing applications to the phenomenology of hadron collisions, in particular when interpreting results from the LHC. In parallel, the LHC programme needs to safeguard itself from the implicit assumptions that inform EFT approaches. Searches for concrete UV models remain a pillar of the particle-physics programme. Steering away from a priori renormalisable model correlations, signature-driven new physics proposals provide a relevant alternative avenue to pivot and add value to the LHC (and future collider) programme. Most efforts along these lines, so far, have drawn on dark matter, long-lived particles and emerging signature scenarios (see e.g., Ref. <cit.>).
Some of these scenarios are inspired by interactions rooted in cosmological observations <cit.> such as the accelerated expansion of the Universe and the unexplained nature of dark energy.
There, apart from the standard cosmological constant scenario, relying on fine-tuning, significant research has focused on models of scalar fields coupled to gravity. These include Horndeski <cit.>, beyond-Horndeski <cit.> and Degenerate Higher Order Scalar-Tensor (DHOST) <cit.> theories, which produce second order equations of motion for the scalar field, avoiding the introduction of ghost instabilities. These scalars are expected to couple to the matter fields of the Standard Model. These couplings can lead to potential signatures in particle colliders that might be detected by current and future colliders <cit.>.
In this work, we explore a specific and novel avenue, inspired by these scalar-tensor theories of gravity. Concretely, we identify a subclass of scalar field theory interactions that predominantly manifest themselves through off-shell contributions to physical scattering amplitudes. Phenomenologically, the associated signatures give rise to a distinct production and decay pattern of the new scalars. Most notably, at leading order, any 1→ 2 and 2→ 2 amplitudes involving the scalar and SM matter are identically zero. This is somewhat reminiscent of the well-known SM radiation zeros in, e.g., gauge-boson pair production (for a review, see Ref. <cit.>), but generalises this phenomenon from the phase space to multi-particle multiplicities by moving away from internal gauge symmetries to specific source terms of the SM Lorentz symmetry currents. The production of such states is then driven by off-shell contributions of Standard Model scattering amplitudes dressed with additional scalar interactions. Through crossing symmetry, the dominant decay proceeds via a four-body decay, in stark contrast to any other signature-driven analysis that is currently pursued at the LHC. As production and decay straddle differences in parton luminosity at the LHC, the a priori sensitivity range of displaced-vertex and missing-energy searches is considerably widened.
This work is organised as follows: In Sec. <ref>, we introduce the relevant interactions for a detailed discussion of on-shell zeros in Sec. <ref> using an instructive toy example. (We also comment on aspects of higher-order corrections.) We demonstrate how these restrictions are relaxed through off-shell contributions in 2→ 3 amplitudes, thereby opening up production and detection possibilities at the LHC. The latter are investigated in Sec. <ref>, which is devoted to recasting existing and representative searches. We conclude in Sec. <ref>.
§ OFF-SHELL MATTER COUPLINGS
To suppress low-order processes involving singlet scalars in extensions of the SM or Einstein gravity, we consider a class of models in which the scalar ϕ couples only to the divergence of the SM energy-momentum tensor T_ SM^μν. The matter couplings then take the generic form
ℒ⊃(∂_μ∂_νT_ SM^μν)f(ϕ,∂ϕ,…) ,
where f(ϕ,∂ϕ,…) is a function of the scalar field and its derivatives. If the Standard Model energy-momentum tensor is conserved on-shell, ∂_μT^μν=0, the scalar ϕ can then couple only to off-shell Standard Model degrees of freedom. As we will see in Sec. <ref>, this precludes t-channel exchanges of the scalar ϕ between Standard Model fermions, avoiding stringent constraints on fifth forces <cit.>.
The second divergence of the energy-momentum tensor ∂_μ∂_νT_ SM^μν is a dimension-6 operator, and the lowest-order matter coupling possible is therefore
ℒ⊃ -C/M^3T_ SM^μν∂_μ∂_νϕ ,
where C is a dimensionless constant and M is a mass scale. Equation (<ref>) is commonly referred to as longitudinal coupling <cit.>, to be contrasted with the conformal A(ϕ)[T_ SM]_μ^μ and the disformal coupling B(ϕ)∂_μϕ∂_νϕ T^μν_ SM <cit.>.
We note that, in the presence of a spacetime-varying background field φ, disformal couplings can also generate an operator
ℒ⊃ -C'/2M^3T^μν_SMv_{μ∂_ν}ϕ ,
where v_μ=∂_μφ/M and the curly braces indicate symmetrization of the Lorentz indices, i.e., a^{μb^ν} = a^μ b^ν + a^ν b^μ. In Friedmann–Lemaître–Robertson–Walker spacetime, one might expect v_μ=δ_μ^0v_0, such that the coupling takes the form
ℒ⊃ -C'/2M^3v_0T^{0μ}_SM∂_μϕ .
This Lorentz-violating coupling will be heavily suppressed for a background field that is varying on cosmological timescales. Signatures of Lorentz symmetry violation are investigated at the LHC, see, e.g., the recent analysis targeting a modulation of experimental measurements as a function of sidereal time <cit.>. These signatures are qualitatively different from “standard” LHC searches fundamentally rooted in Lorentz covariance. Equations (<ref>) and (<ref>) also relate to different aspects of the underlying theory. In this work, we will therefore focus on the interactions of Eq. (<ref>), leaving a discussion of Lorentz-violating signatures for future work.
In what follows, we assume that the scalar ϕ is canonically normalised, with mass m_ϕ, and vanishing self-interactions. Notice that, in the massless limit m_ϕ→0, the Lagrangian of the scalar and its matter couplings are shift-symmetric.
§ TREE-LEVEL CORRECTIONS TO MASSIVE QED
It is clear from the vanishing of the divergence of the on-shell energy-momentum tensor that the lowest-order tree-level scatterings involving the coupling of ϕ to on-shell external states will be zero. Nevertheless, it is illustrative to consider a concrete example. To this end, we focus on the QED Lagrangian for a massive photon
ℒ = ψ̅( i/2D - m )ψ - 1/4 F_μν F^μν + 1/2 m_A^2 A_μ A^μ - 1/2ξ (∂_μ A^μ)^2 .
Here, ψ is a Dirac fermion of mass m, and A^μ is a Proca field of mass m_A, with field-strength tensor F_μν=∂_μA_ν-∂_νA_μ. The gauge covariant derivative has been written in the form
D_μ=∂_μ-2ie A_μ , f ∂_μ g = f (∂_μ g) - (∂_μ f) g ,
such that the fermion kinetic term is antisymmetrised. The corresponding energy-momentum tensor is <cit.>
T^μν = i/4ψ̅γ^{μD^ν}ψ + F^μσ F^ν_σ + m_A^2 A^μ A^ν - 1/ξ (∂· A) ∂^{μ A^ν} - η^μνℒ .
The couplings (<ref>) and (<ref>) lead to the following vertices, wherein all momenta are defined pointing into the vertex (e.g., p_1+p_2+k=0 in the first expression):
[Feynman diagram: scalar-fermion-fermion vertex]
= iC/M^3(p_1+p_2)·[p_1(p_2+m)-p_2(p_1-m)],
[Feynman diagram: Lorentz-violating scalar-fermion-fermion vertex]
= -C'/4M^3[v(p_1^2-p_2^2) - v·(p_1+3p_2)(p_1-m) + v·(p_2+3p_1)(p_2+m)],
[Feynman diagram: scalar-fermion-fermion-photon vertex]
= ieC/M^3(kk_μ-γ_μk^2),
[Feynman diagram: Lorentz-violating scalar-fermion-fermion-photon vertex]
= -eC'/2M^3(vk_μ+kv_μ-2γ_μv· k),
[Feynman diagram: scalar-photon-photon vertex]
= iC/M^3{[2(q_1· k)(q_2· k) - k^2(q_1· q_2+m_A^2)]η^α_1α_2 + (q_1· q_2+m_A^2)k^α_1k^α_2 - 2(q_1· k)q_2^α_1k^α_2 - 2(q_2· k)k^α_1q_1^α_2 - ξ^-1[q_1^α_1q_2^α_2k^2 - 2(q_1· k)k^α_1q_2^α_2 - 2(q_2· k)q_1^α_1k^α_2]}
= iC/M^3{[(q_1^2+q_2^2-m_A^2)(q_1· q_2) - 2q_1^2q_2^2]η^α_1α_2 + 2(q_1· q_2)q_1^α_1q_2^α_2 - q_1^2q_2^α_1(q_1+2q_2)^α_2 - q_2^2(q_2+2q_1)^α_1q_1^α_2 + 2m_A^2(q_1+q_2)^α_1(q_1+q_2)^α_2 + ξ^-1[q_1^2(q_1+2q_2)^α_1q_2^α_2 + q_2^2q_1^α_1(q_2+2q_1)^α_2 + 2(q_1· q_2)(q_1^α_1q_1^α_2 + q_2^α_1q_2^α_2)]},
[Feynman diagram: Lorentz-violating scalar-photon-photon vertex]
= -C'/2M^3{[2(q_1· v)(q_2· k) + 2(q_1· k)(q_2· v) - 2(v· k)(q_1· q_2+m_A^2)]η^α_1α_2 - 2(q_1· k)q_2^α_1v^α_2 - 2(q_1· v)q_2^α_1k^α_2 - 2(q_2· v)k^α_1q_1^α_2 - 2(q_2· k)v^α_1q_1^α_2 + (q_1· q_2+m_A^2)(v^α_1k^α_2 + k^α_1v^α_2) - 2ξ^-1[q_1^α_1q_2^α_2(v· k) - (q_1· v)k^α_1q_2^α_2 - (q_1· k)v^α_1q_2^α_2 - (q_2· v)q_1^α_1k^α_2 - (q_2· k)q_1^α_1v^α_2]}.
We have made use of energy-momentum conservation to eliminate the momentum of the ϕ field in all but Eqs. (<ref>), (<ref>) and (<ref>). Crosses indicate insertion of the constant background vector in the Lorentz-violating vertices. It is readily confirmed that Eqs. (<ref>), (<ref>) and (<ref>) reduce to Eqs. (<ref>), (<ref>) and (<ref>), respectively, in the limit v_μ→ -ik_μ, as we would expect.
On-shell: The fermion vertices (<ref>) and (<ref>) vanish identically when the fermion four-momenta p_1 and p_2 are on-shell, i.e., p_1^2=p_2^2=m^2, after multiplying from the right by the four-spinor u(𝐩_1,s_1) and the left by the four-spinor v̅(𝐩_2,s_2), and making use of the Dirac equations
(p-m)u(𝐩,s)=0 ,
v̅(𝐩,s)(p+m)=0 .
This immediately precludes tree-level t-channel exchanges of the scalar ϕ (see Fig. <ref>) that could give rise to long-range fifth forces.
We notice that the four-point fermion-fermion-vector-scalar vertex does not vanish on-shell. However, this vertex is order eC, and we should expect that this cancels against other contributions at order eC on-shell. An example is shown in Figure <ref>. The four contributions to the matrix element are:
iℳ_(i) =v̅(𝐩_2,s_2)ieC/M^3{q(p_1+p_2)_μ-2[p_1· p_2+q·(p_1+p_2)+m^2]γ_μ}ϵ^μ*(q)u(𝐩_1,s_1) ,
iℳ_(ii) =v̅(𝐩_2,s_2)ieC/M^3(p_1· p_2+q· p_2+m^2)γ_μϵ^μ*(q)u(𝐩_1,s_1) ,
iℳ_(iii) =v̅(𝐩_2,s_2)ieC/M^3(p_1· p_2+q· p_1+m^2)γ_μϵ^μ*(q)u(𝐩_1,s_1) ,
iℳ_(iv) =v̅(𝐩_2,s_2)ieC/M^3[q·(p_1+p_2)γ_μ-q(p_1+p_2)_μ]ϵ^μ*(q)u(𝐩_1,s_1) ,
and we can readily confirm that these sum to zero. This illustrates the delicate cancellations that can occur order by order when there is an underlying symmetry, in this case the conservation of the energy-momentum tensor.
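The bookkeeping behind this cancellation can be automated; the snippet below (ours, assuming sympy is available) adds up the coefficients of γ_μ and of q(p_1+p_2)_μ read off from the four amplitudes above.

import sympy as sp

p1p2, qp1, qp2, m2 = sp.symbols('p1p2 qp1 qp2 m2')

gamma_mu = (-2 * (p1p2 + qp1 + qp2 + m2)   # contribution (i)
            + (p1p2 + qp2 + m2)            # contribution (ii)
            + (p1p2 + qp1 + m2)            # contribution (iii)
            + (qp1 + qp2))                 # contribution (iv)
qslash_coeff = 1 - 1                       # +1 from (i), -1 from (iv)
print(sp.simplify(gamma_mu), qslash_coeff) # -> 0 0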
Off-shell: Continuing with another example from QED, we consider the order e^2C contributions to the t-channel and u-channel electron-electron scatterings, dressed by a single emission of the scalar, as shown in Fig. <ref>. In all amplitudes contributing to this 2→3 process, the singlet scalar ϕ couples either to at least one off-shell state [Fig. <ref> (v)–(xiv)], or through the four-point fermion-fermion-vector-scalar coupling [Fig. <ref> (i)–(iv)], which does not vanish (individually) for on-shell states.
Unlike the 2→ 2 process above, this sum of amplitudes is non-vanishing, and we see that vertices involving the singlet scalar have the interesting property that they only dress processes involving off-shell SM particles, as is expected from the otherwise vanishing divergence of the energy-momentum tensor on-shell. As we see here, this is of particular relevance to higher multiplicity events, but also loop-level processes.
§ CONSTRAINTS FROM LHC MONO-JET ANALYSES
Having reviewed the phenomenological properties of the considered scenario qualitatively above, we now turn to a quantitative discussion within the context of the ongoing LHC physics programme. The absence of direct two- and three-body decays results in a distinctive and uncommon phenomenology of the scalar state. Its four-body decay directly probes the virtuality of the SM matter involved in the decay. Compared to other scenarios of long-lived states (such as, e.g., R-hadrons), the decay is characterised by a comparably larger phase-space suppression. Therefore, a wider mass range opens up in which the state is stable on collider length scales, as we will discuss below. In this region, it becomes a relevant target of mono-signature analyses. These signatures have been studied by the LHC multi-purpose experiments ATLAS <cit.> and CMS <cit.>. Out of all mono-signature channels, mono-jet final states are probed with the highest statistical abundance, and we will focus on these in the following. In these searches, both experiments pursue similar strategies of tagging energetic jets recoiling against "nothing" and giving rise to a large missing transverse momentum. As QCD radiation is abundant in hadron colliders at large momentum transfers, there is typically large additional jet activity present in such an event. In order to reduce the contamination from jet energy uncertainty, a typical criterion that is invoked in mono-jet analyses is a significant separation of the recoil system from any other jet in the event.
In the following, we use the basic selection criteria of Ref. <cit.> as a proxy of mono-jet analyses. More specifically, we will employ the inclusive search region of Ref. <cit.>, dubbed `IM0', which is determined by at least one hard jet with
p_T,j > 150 GeV
in |η_j|<4.5. (Jets are clustered with the anti-kT algorithm with resolution R=0.4 and transverse momentum p_T≥ 30 GeV.) The IM0 region is characterised by a recoil transverse momentum
p_T,rec > 200 GeV .
Furthermore, the recoil system needs to be sufficiently removed in the azimuthal angle-pseudorapidity plane by Δ R > 0.6 (0.4) for p_T,rec < 250 GeV (p_T,rec≥ 250 GeV).
We employ <cit.> interfaced with a UFO model file <cit.>, generated with FeynRules <cit.> and FeynMG <cit.>. Owing to the specific phenomenology associated with C/M^3, the first non-trivial hard matrix element to provide a non-zero production cross section is pp →ϕ j j,[Note that other mono-signature searches, such as mono-photon or mono-Z production would necessarily rely on additional jet activity to generate virtuality-driven cross sections. Albeit experimentally cleaner compared to mono-jet searches, these would need to proceed with significant additional jet activity that is typically vetoed in associated searches <cit.> to remove SM backgrounds.] which has been extensively cross checked analytically, numerically, and through the interface to the FeynArts <cit.> suite. Events are converted to fully hadronised final states using Pythia8 <cit.>. Reflecting the cut flow of Ref. <cit.> on the fully hadronised final states, we can set limits on the IM0 fiducial region; the cross section for this region and a reference value of C/M^3 are shown in Fig. <ref>. Using the observed, expected and ±1σ regions of Ref. <cit.>, we can translate this cross section into a constraint on C/M^3 as a function of the ϕ mass when treating this particle as stable (see Fig. <ref>).
As we have already remarked earlier, the non-zero production of ϕ is also linked to a finite lifetime through four-body decays. We can therefore use the results of Fig. <ref> to identify the mass region for which the ϕ scalar is stable on LHC length scales. The decay width of a particle Γ can be linked to the decay length in the LHC lab frame via
d= β/√(1-β^2) 1/Γ .
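For orientation, the width-to-length conversion can be scripted as below (our own back-of-the-envelope helper, using ħc ≈ 1.973×10^-16 GeV·m; the numbers are illustrative and not taken from the analysis chain).

HBARC_GEV_M = 1.973269804e-16  # hbar*c in GeV*m

def decay_length_m(width_gev, beta):
    boost = beta / (1.0 - beta**2) ** 0.5   # beta times the Lorentz factor
    return boost * HBARC_GEV_M / width_gev

# e.g. a width of 1e-16 GeV at beta = 0.99 gives d of O(10 m),
# i.e. at the boundary between tracker-scale decays and detector escape
print(decay_length_m(1e-16, 0.99))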
If d is comparable to the size of the ATLAS tracker, i.e., d≤ 1 m, the scalar decays before reaching the detector, rendering mono-signature (or displaced-vertex searches) insensitive. On the other hand, if d is large compared to the size of the ATLAS detector itself d≳ 15 m, the particle will escape detection and the mono-jet signature discussed above will be an appropriate selection. Including the exclusion values of C/M^3 from Fig. <ref>, we can analyse the decay length for the respective values of m_ϕ and C/M^3 to identify this region. The result is shown in Fig. <ref>. For mass scales m_ϕ≲10 GeV, the scalar is sufficiently stable to escape detection. In this region, as m_ϕ≪ p_T,rec, the LHC exclusion determined by the inclusive production of ϕ is flat as a function of m_ϕ. The LHC sensitivity can therefore be estimated as
C/M^3 < 4× 10^-4 GeV^-3 for m_ϕ≲ 10 GeV .
In the mass range that indicates a decay length shorter than the tracker, the selection criteria of IM0 (and any missing-energy selection) no longer apply, and the signal stands in competition with a huge (and relatively poorly modelled) QCD multi-jet background. In the intermediate regime, where the decay length interpolates between the different ATLAS detector regions at a reasonable cross section, displaced-vertex <cit.> and emergent signatures <cit.> become another avenue for detection. However, sensitivity can only be gained in a very narrow parameter window. For instance, sticking to the hard jet criterion of IM0, the requirement of a production cross section of 1 fb with a decay length within the ATLAS experiment equates to a very small mass window of 24 GeV≲ m_ϕ≲ 30 GeV. Whilst being very narrow, the decay pattern into multi-pronged decay structures compared to the shower profile of emergent jets <cit.> could indeed provide an avenue to discriminate between different BSM scenarios if such an exotic discovery is made in the future.
§ CONCLUSIONS
We have analysed a particular derivative coupling of a singlet scalar field to the energy-momentum tensor of the SM degrees of freedom. By virtue of the vanishing of the divergence of the SM energy-momentum tensor on-shell, the singlet scalar field couples only to off-shell states. As a result, standard tree-level fifth forces are absent, and low-order or low-multiplicity tree-level processes are unaffected. Moreover, this coupling probes the virtuality of the process, leading to a unique phenomenology. This has been illustrated here in the context of mono-jet analyses, where the additional phase-space suppression for the leading four-body decay of the singlet scalar enlarges the parameter space over which the scalar is stable on the scale of the existing multipurpose LHC experiments. By identifying the mass window for which the singlet scalar is stable on these scales, we identify the limits of the sensitivity of mono-signature or displaced-vertex searches, which otherwise allow one to constrain the strength of the singlet scalar coupling to the SM energy-momentum tensor over this mass window.
We have also identified a class of comparable Lorentz-violating couplings, which also involve only off-shell states. These can arise from disformal couplings to the SM energy-momentum tensor in the case of slowly evolving background fields, as are common in cosmological scenarios. We leave the study of these operators, and the implications of off-shell-only couplings for loop-level processes to future work.
All Feynman diagrams presented in this work have been produced using FeynArts <cit.> and FeynEdit <cit.>. The authors thank Nicolas Chanon, Scott Melville and Sergio Sevillano Muñoz for helpful discussions. This work was supported by the Science and Technology Facilities Council (STFC) [Grant No. ST/X00077X/1] and a United Kingdom Research and Innovation (UKRI) Future Leaders Fellowship [Grant No. MR/V021974/2]. C.E. is supported by the STFC [Grant No. ST/X000605/1], the Leverhulme Trust [Research Project Grant RPG-2021-031], and the Durham Institute for Particle Physics Phenomenology (IPPP) scheme.
§ DATA ACCESS STATEMENT
The analysis presented in this work made use of the following publicly available codes: <cit.>, <cit.>, <cit.>, <cit.>.
|
http://arxiv.org/abs/2409.02870v1 | 20240904165417 | Constraints for twist-two alien operators in QCD | [
"G. Falcioni",
"F. Herzog",
"S. Moch",
"S. Van Thurenhout"
] | hep-ph | [
"hep-ph",
"hep-th"
] |
ZU-TH 43/24 August 2024
DESY-24-108
Constraints for twist-two alien operators in QCD
G. Falcioni^ a,b,
F. Herzog^ c,
S. Moch^ d and
S. Van Thurenhout^ e
^a Dipartimento di Fisica, Università di Torino, Via Pietro Giuria 1, 10125 Torino, Italy
^b Physik-Institut, Universität Zürich, Winterthurerstrasse 190, 8057 Zürich, Switzerland
^c Higgs Centre for Theoretical Physics, School of Physics and Astronomy,
The University of Edinburgh, Edinburgh EH9 3FD, Scotland, UK
^d II. Institute for Theoretical Physics, Hamburg University
Luruper Chaussee 149, D-22761 Hamburg, Germany
^e HUN-REN Wigner Research Centre for Physics, Konkoly-Thege Miklós u. 29-33, 1121
Budapest, Hungary
Abstract
Parton evolution equations in QCD are controlled by the anomalous dimensions of gauge-invariant twist-two spin-N quark and gluon operators.
Under renormalization, these mix with gauge-variant operators of the same quantum numbers, referred to as alien operators.
Our work addresses the systematic study of these alien operators at arbitrary spin N, using generalized BRST symmetry relations to derive their couplings and Feynman rules at all values of N.
We observe how the all-N structure of the generalized (anti-)BRST constraints relates the couplings of alien operators with n+1 gluons to those with n gluons.
Realizing a bootstrap, we present all one-loop results necessary for performing the operator renormalization up to four loops in QCD.
§ INTRODUCTION
The study of twist-two operators of spin-N for quarks and gluons in quantum chromodynamics (QCD) and their renormalization dates to the origins of QCD as the gauge theory of the strong interaction <cit.>.
The renormalization of off-shell operator matrix elements (OMEs) in QCD, i.e. Green's functions with off-shell external momenta and insertions of these quark and gluon operators, gives access to their anomalous dimensions.
These coincide with the Mellin transforms of the standard QCD splitting functions, that govern the scale evolution of the parton distribution functions.
It is well-known that the twist-two operators of spin-N mix under renormalization
with a set of gauge-variant operators of the same quantum numbers, which
involve equation-of-motion (EOM) and ghost operators.
The latter, often referred to in summary as alien operators, can be constructed systematically, by employing a generalized gauge symmetry of the QCD Lagrangian in covariant gauge with the addition of the physical quark and gluon operators <cit.>.
The generalized gauge symmetry can be promoted to a generalized BRST (gBRST) symmetry <cit.>.
This provides an algebraic approach for the derivation of a complete set of operators to be considered in the renormalization of the off-shell OMEs at a given loop order in perturbative QCD in an expansion in the strong coupling g_s, α_s=g_s^2/(4π). The complete set of operators required up to four loops has been listed in <cit.>.
Each alien operator features a coupling constant that can be interpreted as the renormalization constant that generates mixing of the gauge-invariant operators into each alien. In order to renormalize the physical OMEs, these coupling constants must be computed order-by-order in perturbation theory. The required couplings to renormalize the two-loop OMEs were computed in <cit.> in closed form for all values of N. A method to determine the alien counterterms, i.e. the Feynman rules obtained by summing all the alien operators with their associated couplings, was presented in <cit.> together with results up to the three-loop level for a covariant gauge and all values of N. From this, the n_f^2 contributions to the pure-singlet splitting functions were obtained at four loops <cit.>. Beyond three loops, ref. <cit.> determined a set of all-order constraints on the couplings, induced by gBRST and generalized anti-BRST symmetries <cit.>. In <cit.>, these constraints were solved at arbitrary loop order for fixed N≤20, leaving the systematic study of the alien operators at arbitrary spin N as an open problem. In this paper, we follow a different strategy. Namely, we will solve the constraints on the alien couplings to leading order in g_s but for all values of N. The main results of our study are:
* The all-N structure of the couplings is fixed in terms of a small set of constants. The latter can be determined by explicitly computing the couplings for some fixed values of N.
* The structure of the couplings of alien operators with n+1 gluons is related to the ones with n gluons, allowing for a bootstrap in the determination of complicated higher-order couplings in terms of simpler lower-order ones.
The outline of the article is as follows.
In Sec. <ref> we set the stage, review the generalized gauge symmetry and provide a brief summary of the set of relevant alien operators. In Sec. <ref> we study the identities that exist among the couplings of the alien operators and show how they can be used to restrict the all-N structure of the couplings. The results of this analysis are then used in Sec. <ref> to derive the Feynman rules of the alien operators, suitable for the renormalization of OMEs at all N up to four loops in QCD. Finally, in Sec. <ref>, we summarize our findings and provide an outlook on further developments.
§ SETTING THE STAGE
In this section, we review the construction of the alien operators and summarize our conventions. The complete gauge-fixed QCD action is written as
S=
∫d^Dx (ℒ_0+ℒ_GF+G) .
Here ℒ_0 represents the classical part of the QCD Lagrangian
ℒ_0 = -1/4 F^μν_a F_μν^a + ∑_f=1^n_fψ̅^f(iD-m_f)ψ^f ,
with the field strength defined as
F_μν^a = ∂_μ A_ν^a - ∂_ν A_μ^a + g_s f^abc A_μ^bA_ν^c .
f^abc are the standard QCD structure constants. The covariant derivative in Eq. (<ref>) is D= γ^μ(∂_μ-ig_s T^aA^a_μ) with T^a the generator of the gauge group in the fundamental representation. The gauge-fixing and ghost terms are
ℒ_GF+G=-1/2ξ(∂^μ A^a_μ)^2-c̅^a ∂^μ D_μ^ab c^b
with ξ the covariant gauge parameter and c^a and c^a the anti-ghost and ghost fields, respectively. The covariant derivative in the adjoint representation is D_μ^ac=∂_μδ^ac+g_s f^abcA_μ^b. The QCD Lagrangian can be extended to also include spin-N gauge-invariant operators of twist two, which we define as
_ g^(N)(x) = 1/2Tr[F_ν(x) D^N-2 F^ν(x)] ,
_ q^(N)(x) = 1/2Tr[ψ̅(x)Δ D^N-1ψ(x)] .
Here Δ_μ is a lightlike vector and we introduced the notation
F^μ;a=Δ_ν F^μν;a, A^a=Δ_μ A^μ;a, D=Δ_μ D^μ, ∂=Δ_μ∂^μ .
Under renormalization the operators in Eq. (<ref>) mix with operators proportional to the (classical) EOM and with BRST-exact operators <cit.>. Following <cit.>, we begin by presenting the EOM aliens in the form
_EOM^(N)=(D· F^a +g_sψ̅T^aΔψ) 𝒢^a(A^a,∂ A^a,∂^2 A^a,... )
with D· F^a = D_ν F^ν;a and 𝒢^a a generic local function of the gauge field and its derivatives. It is convenient to expand 𝒢^a in a series of contributions with an increasing number of gauge fields. This leads to
_^(N) = _^(N),I+_^(N),II+_^(N),III+_^(N),IV+ …
with
_EOM^(N),I = η(N) (D· F^a + g_s ψ̅Δ T^a ψ) (∂^N-2A^a ),
_EOM^(N),II = g_s (D· F^a + g_s ψ̅Δ T^a ψ) ∑_i+j=N-3C_ij^abc(∂^iA^b)(∂^jA^c),
_EOM^(N),III = g_s^2 (D· F^a + g_s ψ̅Δ T^a ψ) ∑_i+j+k=N-4C_ijk^abcd(∂^iA^b)(∂^jA^c)(∂^kA^d),
_EOM^(N),IV = g_s^3 (D· F^a + g_s ψ̅Δ T^a ψ) ∑_i+j+k+l=N-5C_ijkl^abcde(∂^iA^b)(∂^jA^c)(∂^kA^d)(∂^lA^e).
The coefficients C^a_1… a_n_i_1… i_n-1 appearing in Eqs. (<ref>)-(<ref>) can be written in terms of a set of independent colour tensors, each of them multiplying an associated coupling constant, as follows
C_ij^abc = f^abcκ_ij,
C_ijk^abcd=(f f)^abcdκ_ijk^(1)+d_4^abcdκ_ijk^(2)+d̃_4ff^abcdκ_ijk^(3),
C_ijkl^abcde = (f f f)^abcdeκ_ijkl^(1)+d_4f^abcdeκ_ijkl^(2)
with
(f f)^abcd = f^abef^cde,
(f f f)^abcde = f^abmf^mcnf^nde,
d_4^abcd = 1/4![Tr(T_A^aT_A^bT_A^cT_A^d)+symmetric permutations],
d_4ff^abcd = d_4^abmnf^mcef^edn,
d̃_4ff^abcd = d_4ff^abcd-1/3C_A d_4^abcd,
d_4f^abcde = d_4^abcmf^mde.
Here (T_A)^b_ac = i f^abc are the generators of the adjoint representation of the colour group. We now extend the classical Lagrangian ℒ_0 in Eq. (<ref>) to include the gauge-invariant operators of twist two as well as the EOM aliens
ℒ_GGI = ℒ_0+w_ i _ i^(N)+_EOM^(N),
where w_ i is a coupling for the operator _ i with i=g,q, playing the same role as the coefficients η(N), κ_ij, … defined in Eqs. (<ref>)-(<ref>). The Lagrangian ℒ_GGI is invariant under the generalized gauge transformation <cit.>A^a_μ→ A^a_μ + δ_ω A^a_μ + δ_ω^Δ A^a_μ, where
δ_ω A^a_μ = D^ab_μω^b(x),
δ_ω^Δ A^a_μ = -Δ_μ[η(N) ∂^N-1ω^a + g_s∑_i+j=N-3C̃^aa_1a_2_ij (∂^iA^a_1) (∂^j+1ω^a_2)
+g_s^2∑_i+j+k=N-4C̃^aa_1a_2a_3_ijk (∂^iA^a_1) (∂^jA^a_2) (∂^k+1ω^a_3)
+g_s^3 ∑_i+j+k+l=N-5C̃^aa_1a_2a_3a_4_ijkl (∂^iA^a_1) (∂^jA^a_2) (∂^kA^a_3) (∂^l+1ω^a_4) + O(g_s^4)]
and
C̃_ij^abc = f^abcη_ij,
C̃_ijk^abcd=(f f)^abcdη_ijk^(1)+d_4^abcdη_ijk^(2)+d̃_4ff^abcdη_ijk^(3),
C̃_ijkl^abcde = (f f f)^abcdeη_ijkl^(1)+d_4f^abcdeη_ijkl^(2a)+d_4f^aebcdη_ijkl^(2b).
The generalized gauge symmetry implies that the couplings η^(k)_n_1… n_j are related to κ^(k)_n_1… n_j in Eqs. (<ref>)-(<ref>)
η_ij =2κ_ij+η(N)\binom{i+j+1}{i},
η^(1)_ijk =2κ_i(j+k+1)\binom{j+k+1}{j}+2[κ_ijk^(1)+κ_kji^(1)],
η^(2)_ijk =3κ^(2)_ijk,
η^(3)_ijk =2[κ^(3)_ijk-κ^(3)_kji],
η^(1)_ijkl =2[κ_ij(l+k+1)^(1)+κ_(l+k+1)ji^(1)]\binom{l+k+1}{k}+2[κ_ijkl^(1)+κ_ilkj^(1)+κ_likj^(1)+κ_lkij^(1)],
η^(2a)_ijkl =3κ_ij(k+l+1)^(2)\binom{k+l+1}{k}+2κ_ijkl^(2),
η^(2b)_ijkl =2κ^(2)_lijk.
The new gauge transformations in Eq. (<ref>) are promoted to a nilpotent generalized BRST (gBRST) operator, by replacing the transformation parameter ω^a with the ghost field c^a<cit.>. In turn the ghost alien operator is generated by the action of such gBRST operator on a suitable ancestor operator <cit.>, giving
_c^(N) = _c^(N),I+_c^(N),II+_c^(N),III+_c^(N),IV+ …
with
_c^(N),I = -η(N) (∂c̅^a)(∂^N-1c^a),
_c^(N),II = -g_s ∑_i+j=N-3C̃_ij^abc(∂c̅^a)(∂^iA^b)(∂^j+1c^c),
_c^(N),III = -g_s^2∑_i+j+k=N-4C̃_ijk^astu(∂c̅^a)(∂^iA^s)(∂^jA^t)(∂^k+1c^u),
_c^(N),IV = -g_s^3∑_i+j+k+l=N-5C̃_ijkl^abcde(∂c̅^a)(∂^iA^b)(∂^jA^c)(∂^kA^d)(∂^l+1c^e) .
Renormalization The complete Lagrangian, including the twist-two physical and alien operators, can be written as
ℒ = ℒ_0 + ℒ_GF+G + w_ i _ i + _EOM^(N) + _c^(N) =ℒ_0(A^a_μ,g_s) + ℒ_GF+G(A^a_μ,c^a,c̅^a,g_s,ξ) + ∑_ k 𝒞_ k _ k,
where 𝒞_ k labels all the distinct couplings of the operators, e.g. 𝒞_ k=(w_ i,η(N), κ_0 1, κ_1 2…). The ultraviolet (UV) singularities associated with the QCD Lagrangian are absorbed by introducing the bare fields and parameters
A^a;bare_μ(x) = √(Z_3) A^a_μ(x), c^a;bare(x)=√(Z_c) c^a(x), c̅^a;bare(x)=√(Z_c) c̅^a(x),
g_s^bare=μ^ϵ Z_g g_s, ξ^bare=Z_3 ξ.
We renormalize the singularities originating from the insertion of the composite operators using
_ i^ren(x) = Z_ ij _ j^bare(x),
where _ j^bare indicates the operators in Eqs. (<ref>), (<ref>) and (<ref>) written in terms of the bare fields. Note that throughout this work we use dimensional regularization with D=4-2ϵ, combined with the \overline{MS} renormalization scheme. Z_ ij is the renormalization matrix of the operators, which makes the OMEs featuring an insertion of _ i^ren finite. The renormalized Lagrangian becomes
ℒ = ℒ_0(A^a;bare_μ,g_s^bare) + ℒ_GF+G(A^a;bare_μ,c^a;bare,c̅^a;bare,g_s^bare,ξ^bare) + ∑_ k𝒞_ k^bare _ k^bare,
𝒞_ i^bare = ∑_ k𝒞_ k Z_ k i,
where 𝒞_ k is the (finite) renormalized coupling of the operator _ k. The UV-finite OMEs featuring a single insertion of _ g^ren are computed by setting the renormalized couplings 𝒞_ i=δ_ i g in Eq. (<ref>), which gives
𝒞_ i^bare=Z_ g i.
Similarly, the renormalized OMEs with an insertion of _ q are obtained with 𝒞_ i^bare=Z_ q i. Therefore, the couplings of the bare operators η^bare(N), … are interpreted as the renormalization constants that mix the physical operators into the aliens. These quantities can be extracted from the direct calculation of the singularities of the OMEs with an insertion of _ g^bare (_ q^bare). For instance, the coupling η^bare(N), which is associated with an operator with a two-point vertex, was determined in <cit.> from the renormalization of the OMEs of _ g with two external ghosts and it was found to be [Note that the expression for η in <cit.> has an additional factor of 2. This is a consequence of the chosen conventions for dimensional regularization. In particular, we use D=4-2ϵ while <cit.> employs D=4+ϵ.]
η^bare(N)=Z_g c = -a_s/ϵ C_A/[N(N-1)]+O(a_s^2),
where C_A is the quadratic Casimir in the adjoint representation and
a_s=α_s/(4π)=g_s^2/(4π)^2.
The value of η was determined at two loops in <cit.> and at three loops in <cit.>. Throughout this paper we will mainly be interested in the one-loop alien couplings. As such, it will be convenient to select just the N-dependent part of the one-loop result of η^bare(N), which in the following we simply denote by η(N), i.e.
η(N) = -1/[N(N-1)].
The couplings of the operators featuring multiple fields, e.g., the couplings κ_ij multiply at least three fields, are determined by renormalizing OMEs with the corresponding external fields. Recently, a method to compute the counterterms of the OMEs with insertions of the gauge-invariant operators as a function of the spin N was put forward in ref. <cit.>. The result of that paper can be used to extract the coefficients κ^bare_i j up to O(a_s^2) and those of κ^(p);bare_ijk, for p=1,2, at O(a_s), finding agreement with the low-N values reported in <cit.>. In addition, the calculation of the five-point counterterms at O(a_s), which can be used to determine κ^(p)_ijkl, for p=1,2, has been announced recently <cit.>.
In this paper, we would like to determine the renormalization constants Z_ gi by solving the constraints on the couplings 𝒞_ i^bare, which are imposed by the symmetries of Eq. (<ref>). The latter is the Lagrangian in Eq. (<ref>) evaluated with bare fields and couplings constants. Therefore the two Lagrangians share the same symmetry properties, with the obvious substitutions.
For simplicity, in the rest of this paper we drop the superscript `bare', wherever it does not create any ambiguity.
Independent operators and couplings
The symmetry constraints on the couplings in Eq. (<ref>) have been derived in ref. <cit.>. Without repeating the derivation of that paper, we distinguish three types of relations, which follow from the way we have constructed the operators at the beginning of this section.
First of all, the couplings introduced in the EOM operators, see Eqs. (<ref>)-(<ref>) and (<ref>)-(<ref>), are chosen to inherit the properties of the colour structures they multiply. For example, because of the anti-symmetry of the structure constants, we take
κ_ij=-κ_ji.
This implies, e.g., that at spin N=4, where i,j=0,1, there is only one independent coupling, e.g., κ_0 1.
The second type of constraints regards the couplings that enter the ghost operators, Eqs. (<ref>)-(<ref>). Because these operators were constructed directly from the EOM ones using gBRST, the η couplings are connected to the κ ones. The relevant identities have been listed in Eqs. (<ref>)-(<ref>).
Finally, we impose the invariance of Eq. (<ref>) under the generalized transformations of anti-BRST type <cit.>, which stem from Eq. (<ref>), by replacing the transformation parameter ω^a with the anti-ghost field c̅^a. This implies the following condition on the ghost operator _c^(N) defined in Eq. (<ref>)
O_ c^(N)(A^a_μ,c^a,c̅^a) = O_ c^(N)(A^a_μ,c̅^a,c^a),
which translates into a set of constraints on the couplings in Eqs. (<ref>)-(<ref>) and, in turn, on those of the EOM operators. Taking the example of N=4, the anti-BRST relation imposes κ_01=2η(4), thus reducing the number of independent couplings even further <cit.>.
It is highly non-trivial to find all-N solutions for all the constraints. In refs. <cit.>, they were solved only for fixed values of N, in order to fix bases of independent alien operators up to N=20. In the following sections, we solve the relations with exact N dependence. This is done by setting up an ansatz for the function space that enters to leading order in a_s. The construction of this ansatz is primarily based on constraints from (anti-)gBRST. We will see below that the latter allow one to bootstrap the functional form of higher-order couplings from that of the lower-order ones. The determination of the unknown parameters in the ansatz is then performed by using the full set of colour, gBRST and anti-gBRST relations. As will become clear below, this allows one to fix most, but not all, free parameters. The few that remain then need to be determined from the explicit renormalization of a limited number of fixed-N operator matrix elements. This is particularly important for finding any overall N-dependent function.
§ IDENTITIES AMONG THE ALIEN COUPLINGS
In this section we will discuss in detail the identities between the couplings coming from the (anti)-gBRST relations. In particular, we will show that they allow one to restrict the function space of the couplings and hence constrain their generic N-dependence.
§.§ Class II couplings
The class II operators are defined in terms of two couplings, κ_ij and η_ij, which obey the following relations
κ_ij+κ_ji=0, [anti-symmetry of f]
η_ij=2κ_ij+η(N)\binom{i+j+1}{i}, [gBRST]
η_ij+∑_s=0^i(-1)^{s+j}\binom{s+j}{j}η_(i-s)(j+s) = 0. [anti-gBRST]
Note that one can generate an equation for the ghost coupling alone by combining the anti-symmetry of κ_ij, Eq. (<ref>), with the gBRST relation, Eq. (<ref>),
η_ij+η_ji = η(N)[\binom{i+j+1}{i}+\binom{i+j+1}{j}].
The one-loop value of this coupling was first computed in <cit.>
and later corrected in <cit.>.
In our conventions it reads [Note that there are typos in the corresponding expression in <cit.>. In particular, the right-hand side of Eq. (4.38) in <cit.> should be replaced by the right-hand side of Eq. (<ref>) here.]
η_ij =-η(N)/4[(-1)^j-3\binom{N-2}{i+1}-\binom{N-2}{i}]
which implies
κ_ij = -η(N)/8[(-1)^j+3\binom{N-2}{i}-3\binom{N-2}{i+1}].
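These closed forms can be validated at fixed N against all three constraints; the following Python check (ours, not part of the original derivation) uses exact rational arithmetic.

from fractions import Fraction
from math import comb

def eta_N(N):
    return Fraction(-1, N * (N - 1))

def eta(i, j, N):
    return -eta_N(N) / 4 * ((-1) ** j - 3 * comb(N - 2, i + 1) - comb(N - 2, i))

def kappa(i, j, N):
    return -eta_N(N) / 8 * ((-1) ** j + 3 * comb(N - 2, i) - 3 * comb(N - 2, i + 1))

for N in range(4, 21, 2):
    for i in range(N - 2):
        j = N - 3 - i
        assert kappa(i, j, N) + kappa(j, i, N) == 0                                # anti-symmetry
        assert eta(i, j, N) == 2 * kappa(i, j, N) + eta_N(N) * comb(i + j + 1, i)  # gBRST
        assert eta(i, j, N) + sum((-1) ** (s + j) * comb(s + j, j) * eta(i - s, j + s, N)
                                  for s in range(i + 1)) == 0                      # anti-gBRST
print('class II checks passed')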
The power of the relations described above is that they can be used to gain valuable information about the structure of the couplings at arbitrary N. For example, one can use Eq. (<ref>) to write down an ansatz for η_ij of the form
η_ij = η(N) [ c_1\binom{i+j+1}{i}+c_2\binom{i+j+1}{j}].
Here c_1 and c_2 are constants to be determined. We assume here that the dependence on η(N) is factorized at leading order,
as suggested by Eq. (<ref>) and observed in Eq. (<ref>).
This ansatz can then be substituted in the anti-gBRST consistency relation, Eq. (<ref>), yielding
η_ij+∑_s=0^i(-1)^{s+j}\binom{s+j}{j}η_(i-s)(j+s) =η(N) [ (-1)^jc_1-c_2\binom{i+j+1}{j}]
for even values of N. Hence, only the trivial solution c_1=c_2=0 obeys the anti-gBRST relation. However, the right-hand side of Eq. (<ref>) suggests the inclusion of a term proportional to (-1)^j to the ansatz,
η_ij =η(N) [ c_1\binom{i+j+1}{i}+c_2\binom{i+j+1}{j}+c_3(-1)^j].
The anti-gBRST relation now becomes
η_ij+∑_s=0^i(-1)^{s+j}\binom{s+j}{j}η_(i-s)(j+s) = η(N) (c_1+c_3)[(-1)^j+\binom{i+j+1}{i}]
such that c_3=-c_1 is a consistent solution. If we now impose also Eq. (<ref>) then we obtain the relation c_1+c_2=-4, leaving just one free parameter unconstrained.
Hence
η_ij =η(N) [ c_1[\binom{i+j+1}{i}-(-1)^j]-(4+c_1)\binom{i+j+1}{j}].
It should be noted that, if an ansatz is generated using (anti-)gBRST relations, one is in principle free to add non-zero functions that live in the kernel of these relations. For example, if one adds a term of the form
-f(N)/4((-1)^j + \binom{N-2}{i+1} - \binom{N-2}{i})
to Eq. (<ref>), the corresponding expression for η_ij still obeys the constraints. Here f(N) represents an arbitrary function of N, with the actual solution being recovered by setting f(N)=0 for even values of N. In particular, substituting Eq. (<ref>) in the constraint coming from anti-symmetry and gBRST, cf. Eq. (<ref>), one finds
[(-1)^i+(-1)^j ]f(N)=0.
The left-hand side of this expression always vanishes for all physical (even) values of N, independent of the functional form of f(N). In general, the exclusion of this type of function can only be confirmed by comparison with fixed-N computations.
An important consequence is that now we have recovered the full function space of the actual solution, Eq. (<ref>), using only the symmetry relations of the couplings. More generally, note that Eq. (<ref>) is an example of a conjugation relation, in the sense that a second application of the sum leads to
∑_t=0^i(-1)^{t+j}\binom{t+j}{j}η_(i-t)(j+t) = -∑_t=0^i(-1)^{t+j}\binom{t+j}{j}∑_s=0^i-t(-1)^{s+j+t}\binom{s+j+t}{j+t}η_(i-t-s)(j+t+s)
and hence
η_ij = ∑_t=0^i\binom{t+j}{j}∑_s=0^i-t(-1)^s\binom{s+j+t}{j+t}η_(i-t-s)(j+t+s).
The latter identity is actually always true for any discrete two-variable function η_ij. This type of conjugation relation has already been encountered in the computation of the anomalous dimensions of twist-two operators in non-forward kinematics, see e.g. <cit.>, and holds great predictive power. In particular, it provides valuable information about the function space of the object at hand. To take full advantage of such relations, one needs to be able to evaluate them analytically. This is possible by using principles of symbolic summation, in particular by application of the creative telescoping algorithm <cit.>. The latter is a generalization of classical telescoping and attempts to evaluate the sum of interest by rewriting it as a recursion relation using Gosper's algorithm <cit.>. The closed-form expression of the sum then corresponds to the linear combination of the solutions of the recursion that has the same initial values as the sum. This methodology is neatly implemented in the Mathematica package Sigma<cit.>. For the class III and IV couplings to be described below we will also encounter identities involving multiple sums, for which the package EvaluateMultiSums<cit.> can be used.
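The involutive character of the conjugation can also be illustrated numerically; the short sketch below (ours) applies the sum twice to arbitrary integer data and recovers the input.

from math import comb
from random import randint, seed

seed(1)
M = 9                                        # i + j <= M is preserved by the transform
f = {(i, j): randint(-5, 5) for i in range(M + 1) for j in range(M + 1 - i)}

def T(h):
    # (T h)_{ij} = sum_{s=0}^{i} (-1)^{s+j} binom(s+j, j) h_{(i-s)(j+s)}
    return {(i, j): sum((-1) ** (s + j) * comb(s + j, j) * h[(i - s, j + s)]
                        for s in range(i + 1)) for (i, j) in h}

assert T(T(f)) == f
print('conjugation relation is an involution on this data')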
§.§ Class III couplings
§.§.§ κ_ijk^(1) and η_ijk^(1)
The couplings η_ijk^(1) and κ_ijk^(1) can be thought of as direct generalizations of η_ij and κ_ij in the class II operators. They obey the following relations
κ_ijk^(1)+κ_ikj^(1)=0, [anti-symmetry of f]
κ_ijk^(1)+κ_jki^(1)+κ_kij^(1) = 0, [Jacobi identity]
η_ijk^(1)=2κ_i(j+k+1)\binom{j+k+1}{j}+2[κ_ijk^(1)+κ_kji^(1)], [gBRST]
η_ijk^(1)=∑_m=0^i∑_n=0^j(m+n+k)!/m!n!k!(-1)^m+n+kη_(j-n)(i-m)(k+m+n)^(1). [anti-gBRST]
Note that now the indices are constrained as i+j+k=N-4. As before, one can combine the relations of the EOM coupling with the gBRST relation to connect η_ijk^(1) to κ_ijk^(1). In particular we find
η_ijk^(1)+η_ikj^(1) = 2κ_i(j+k+1)\binom{j+k+2}{j+1}+2[κ_kji^(1)+κ_jki^(1)]
when combining the anti-symmetry property of κ_ijk^(1) with the gBRST identity. Similarly the combination of the Jacobi identity with gBRST leads to
η_ijk^(1)+η_kij^(1)+η_jki^(1) = 2κ_i(j+k+1)\binom{j+k+1}{j}+2κ_k(i+j+1)\binom{i+j+1}{i}+2κ_j(i+k+1)\binom{i+k+1}{k}.
The latter identity relates the class III coupling η_ijk^(1), which is O(g_s^2),
to the class II coupling κ_ij of O(g_s), i.e. at one order lower in perturbation theory.
As such, we can use it to determine the function space of η_ijk^(1). Taking into account all independent permutations of i, j and k, we find that this function space is 18-dimensional
{ (-1)^{i+j}\binom{i+j+1}{i}, \binom{N-2}{k+1}\binom{i+j+1}{i}, \binom{N-2}{k}\binom{i+j+1}{i}, (-1)^{j+k}\binom{j+k+1}{j},
\binom{N-2}{i+1}\binom{j+k+1}{j}, \binom{N-2}{i}\binom{j+k+1}{j}, (-1)^{i+k}\binom{i+k+1}{k}, \binom{N-2}{j+1}\binom{i+k+1}{k},
\binom{N-2}{j}\binom{i+k+1}{k} + independent permutations of i, j and k}.
Furthermore, due to the close relationship between η_ijk^(1) and κ_ijk^(1), we assume that the functional form of the latter is constructed from the same functions.
Hence in total we have 36 free parameters. Using the relations described above, cf. Eqs. (<ref>)-(<ref>), we are able to fix 34 of these. The final two free parameters are then determined using the one-loop results κ_110^(1)=0 and κ_121^(1)=13 C_A/336, which follow from the explicit operator renormalization for N=6 and N=8 respectively. Our final result for κ_ijk^(1) then becomes
κ_ijk^(1) = η(N)/48{2(-1)^{i+j}\binom{i+j+1}{i} + (-1)^{i+k}\binom{i+k+1}{k} + 3(-1)^{j+k+1}\binom{j+k+1}{j} + \binom{i+k+1}{i}[2(-1)^{i+k+1} + 5\binom{N-1}{j+1}] + \binom{j+k+1}{k}[3(-1)^{j+k} - 10\binom{N-2}{i} + 4\binom{N-2}{i+1}] + \binom{i+j+1}{j}[(-1)^{i+j+1} + 5\binom{N-2}{k} - 9\binom{N-2}{k+1}]}.
To verify this expression, agreement with explicitly computed fixed-N values has been established up to N=20. The necessary direct computations at fixed values of N of Feynman diagrams for the OMEs
with (physical or alien) spin-N twist-two operators ^(N) inserted
in Green's functions with off-shell quarks, gluons or ghosts are performed with the setup used and described in <cit.> for the computation of moments of four-loop QCD splitting functions.
In particular, the Forcer package <cit.>, written in Form <cit.>, is used for the parametric reductions of the two-point functions up to four loops for fixed even integer values of N. Substituting our result for κ_ijk^(1) into the gBRST relation, Eq. (<ref>), allows one to also reconstruct the full N-dependence of η_ijk^(1)
η_ijk^(1) = -η(N)/24{5(-1)^{i+j+1}\binom{i+j+1}{i} + (-1)^{i+k}\binom{i+k+1}{k} + 2(-1)^{j+k+1}\binom{j+k+1}{j} + \binom{i+k+1}{i}[(-1)^{i+k} + 4\binom{N-2}{j+1}] + \binom{j+k+1}{k}[5(-1)^{j+k+1} - 3\binom{N-2}{i} + \binom{N-2}{i+1}] + \binom{i+j+1}{j}[4(-1)^{i+j} - 15\binom{N-2}{k} - 5\binom{N-2}{k+1}]}.
We have verified that Eqs. (<ref>) and (<ref>) are in agreement with the results of ref. <cit.>, as explained in Sec. <ref> below.
§.§.§ κ_ijk^(2) and η_ijk^(2)
The next alien couplings we consider are κ_ijk^(2) and η_ijk^(2), which obey the following relations
κ_ijk^(2)=κ_jik^(2)=κ_ikj^(2)=κ_kji^(2)=κ_jki^(2)=κ_kij^(2), [symmetry of d_4]
η_ijk^(2) = 3κ_ijk^(2), [gBRST]
η_ijk^(2) = ∑_m=0^i∑_n=0^j(-1)^m+n+k(m+n+k)!/m!n!k!η_(i-m)(j-n)(m+n+k)^(2). [anti-gBRST]
As the anti-gBRST equation has a similar form to the one for η_ijk^(1), cf. Eq. (<ref>), we assume the function space for η_ijk^(2) and κ_ijk^(2) to be the same as above, cf. Eq. (<ref>). Imposing Eqs. (<ref>)-(<ref>) then allows one to fix all but one of the unknowns. Hence we find expressions for η_ijk^(2) and κ_ijk^(2) with only one (overall) free parameter
κ_ijk^(2) = c{(-1)^{i+j}\binom{i+j+2}{i+1}+(-1)^{i+k}\binom{i+k+2}{i+1}+\binom{j+k+2}{j+1}[(-1)^{j+k}+\binom{N-1}{i+1}]},
η_ijk^(2) = 3κ_ijk^(2).
Note that the c parameter can a priori be some N-dependent function. A computation of the OMEs at a few fixed values of N with the procedure outlined in Sec. <ref> for the renormalization of the respective operators fixes c=1/[N(N-1)], such that
κ_ijk^(2) = 1/[N(N-1)]{(-1)^{i+j}\binom{i+j+2}{i+1}+(-1)^{i+k}\binom{i+k+2}{i+1}+\binom{j+k+2}{j+1}[(-1)^{j+k}+\binom{N-1}{i+1}]},
η_ijk^(2) = 3κ_ijk^(2).
Noting that
1/[N(N-1)] = -η(N)
we see that also for these two couplings η(N) factorizes, which is not expected a priori from the constraints.
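As a further fixed-N cross-check (ours), the closed form above is totally symmetric under permutations of (i, j, k), as required by the symmetry of the d_4 colour tensor.

from fractions import Fraction
from math import comb
from itertools import permutations

def kappa2(i, j, k, N):
    pref = Fraction(1, N * (N - 1))
    return pref * ((-1) ** (i + j) * comb(i + j + 2, i + 1)
                   + (-1) ** (i + k) * comb(i + k + 2, i + 1)
                   + comb(j + k + 2, j + 1) * ((-1) ** (j + k) + comb(N - 1, i + 1)))

N = 10
for i in range(N - 3):
    for j in range(N - 3 - i):
        k = N - 4 - i - j
        assert len({kappa2(*p, N) for p in permutations((i, j, k))}) == 1
print('kappa^(2) symmetry verified at N =', N)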
§.§.§ κ_ijk^(3) and η_ijk^(3)
The last set of couplings in the class III alien operators, κ_ijk^(3) and η_ijk^(3), obey the following relations
κ_ijk^(3)=κ_ikj^(3), [symmetry]
κ_ijk^(3)+κ_kij^(3)+κ_jki^(3)=0, [generalized Jacobi identity]
η_ijk^(3) = 2(κ_ijk^(3)-κ_kji^(3)), [gBRST]
η_ijk^(3) = ∑_m=0^i∑_n=0^j(-1)^m+n+k(m+n+k)!/m!n!k!η_(j-n)(i-m)(m+n+k)^(3). [anti-gBRST]
As before, we suggest the same function space as for κ_ijk^(1) and η_ijk^(1), cf. Eq. (<ref>). The above relations then only leave two parameters unfixed, such that we have
κ_ijk^(3) = c_1 (-1)^{i+j}\binom{i+j+1}{i}+c_2 (-1)^{i+k}\binom{i+k+1}{k}+\binom{j+k+1}{j}[2(c_1+c_2) (-1)^{j+k+1}+c_1 \binom{N-2}{i+1}]+\binom{i+k+1}{i}[c_1 (-1)^{i+k}+(2 c_1+c_2)\binom{N-2}{j}+c_2 \binom{N-2}{j+1}]+\binom{i+j+1}{j}[c_2 (-1)^{i+j}-(2 c_1+c_2) \binom{N-2}{k}-(c_1+c_2) \binom{N-2}{k+1}]
with c_1, c_2 to be determined. We emphasize that, as before, these could be N-dependent functions [In this case we expect c_1∼ c_2∼η(N).]. The corresponding expression for η_ijk^(3) depends on the same parameters through the gBRST relation, cf. Eq. (<ref>).
Since the couplings κ_ijk^(3) and η_ijk^(3) do not appear through operator mixing in the renormalization of physical OMEs up to four loops, we leave the two free parameters c_1, c_2 in Eq. (<ref>) undetermined, for the time being.
We will address this issue again when extending the computation of low-N non-singlet anomalous dimensions at five loops <cit.> to the flavor-singlet sector.
§.§ Class IV couplings
§.§.§ κ_ijkl^(1) and η_ijkl^(1)
We have the following set of relations
κ_ijkl^(1)+κ_ijlk^(1) = 0, [anti-symmetry]
κ_ijkl^(1)+κ_iklj^(1)+κ_iljk^(1) = 0, [Jacobi]
κ_ijkl^(1)+κ_jilk^(1)+κ_lkji^(1)+κ_klij^(1) = 0, [double Jacobi]
η_ijkl^(1) = 2[κ_ij(l+k+1)^(1)+κ_(l+k+1)ji^(1)]\binom{l+k+1}{k}+2[κ_ijkl^(1)+κ_ilkj^(1)+κ_likj^(1)+κ_lkij^(1)], [gBRST]
η_ijkl^(1) = -∑_s_1=0^i∑_s_2=0^j∑_s_3=0^k(s_1+s_2+s_3+l)!/s_1!s_2!s_3!l!(-1)^s_1+s_2+s_3+lη_(k-s_3)(j-s_2)(i-s_1)(s_1+s_2+s_3+l)^(1) [anti-gBRST]
with now i+j+k+l=N-5.
Combining the double Jacobi identity, Eq. (<ref>), with the gBRST one, Eq. (<ref>), allows one to write η_ijkl^(1) in terms of κ_ijk^(1) appearing already in the class III operators at one order in perturbation theory lower,
η_ijkl^(1) + η_jilk^(1) + η_lkji^(1) + η_klij^(1) = 2[κ_ij(k+l+1)^(1)+κ_(k+l+1)ji^(1)]k+l+1k + 2[κ_ji(k+l+1)^(1)+κ_(k+l+1)ij^(1)]k+l+1l + 2[κ_lk(i+j+1)^(1)+κ_(i+j+1)kl^(1)]i+j+1j + 2[κ_kl(i+j+1)^(1)+κ_(i+j+1)lk^(1)]i+j+1i.
As such, we can use the expression we have computed for κ_ijk^(1), cf. Eq. (<ref>), to determine the function space of η_ijkl^(1). Taking into account all the independent permutations of the indices i, k, j and l this space is now 264-dimensional. Assuming that the functional form of κ_ijkl^(1) is similar to the one of η_ijkl^(1) then implies that in total we now have 528 parameters to fix. However, after implementing all of the above relations, only 8 remain in the end. The latter can again be fixed from the explicit renormalization of a few fixed-N matrix elements. More specifically we extracted them by performing a small momentum expansion around the limit p_3,p_4,p_5 → 0 of the OME
⟨ O_ g^(N); c̅(p_1) c(p_2) g(p_3) g(p_4) g(p_5))⟩ .
This expansion is achieved on a diagram-by-diagram basis using the expansion-by-subgraph method <cit.>
to second order at N=10 and to third order at N=12. By expanding sequentially in the external gluon momenta p_3,p_4 and p_5 the integrals are reduced to simple one-scale propagator integrals. We have implemented the expansion-by-subgraph in Maple<cit.> and then subsequently evaluated the expressions in Form. This methodology was also used to cross-check the expressions for κ_ij^(1) and κ_ijk^(r=1,2) up to N=20. At one loop the poles of the OME, Eq. (<ref>), are generated purely by the ghost alien operator O_c^(N),IV allowing for a clean extraction of η_ijkl^(1) renormalization constants, from which the κ_ijkl^(1) values can be obtained. In particular, in order to determine the remaining constants in the all-N ansatz for κ_ijkl^(1), we use
κ^(1)_0210=-1/128C_A,
κ^(1)_0050=109/1440C_A,
κ^(1)_0104=-935/6912C_A,
κ^(1)_1006=-2537/16896C_A.
We then find
κ_ijkl^(1)=-η(N)/384{[6(-1)^j+ki+l+1i-3(-1)^j+ki+l+1l+7(-1)^j+k+lj+k+l+2l+7(-1)^j+k+lj+k+l+2j+k+1-27N-1i+1j+k+l+2j+k+1+2i+j+k+2j+k+1[2(-1)^i+j+k+9N-2l]-2i+j+k+2i[4(-1)^i+j+k+15N-2l+1]]j+k+1j-[5(-1)^j+ki+l+1i-4(-1)^j+ki+l+1l-14(-1)^j+k+lj+k+l+2l+7(-1)^j+k+lj+k+l+2j+k+1+54N-2ij+k+l+2j+k+1+i+j+k+2j+k+1[-3(-1)^i+j+k+4N-1l+1]+i+j+k+2i[3(-1)^i+j+k+13N-1l+1]]j+k+1k+(-1)^i+j+k+1i+k+1ki+j+k+2j-2(-1)^i+j+ki+j+1ji+j+k+2k-6(-1)^j+li+k+1ij+l+1j+3(-1)^j+li+k+1kj+l+1j+5(-1)^j+li+k+1ij+l+1l-4(-1)^j+li+k+1kj+l+1l+30N-2k+1j+l+1ji+j+l+2i-5(-1)^i+j+lj+l+1li+j+l+2i+13N-1k+1j+l+1li+j+l+2i-4(-1)^i+j+li+l+1ii+j+l+2j+12N-2ki+l+1ii+j+l+2j+24N-2k+1i+l+1ii+j+l+2j-3(-1)^i+j+li+l+1li+j+l+2j+49N-1k+1i+l+1li+j+l+2j-30N-1k+1i+j+1ii+j+l+2l+2(-1)^i+j+li+j+1ji+j+l+2l-60N-2k+1i+j+1ji+j+l+2l+8(-1)^i+j+li+l+1ii+j+l+2i+l+1-6N-2k+1i+l+1ii+j+l+2i+l+1-3(-1)^i+j+li+l+1li+j+l+2i+l+1+71N-1k+1i+l+1li+j+l+2i+l+1-11(-1)^k+li+j+1ik+l+1k+7(-1)^k+li+j+1jk+l+1k+11(-1)^k+li+j+1ik+l+1l-7(-1)^k+li+j+1jk+l+1l+60N-2j+1k+l+1ki+k+l+2i-10(-1)^i+k+lk+l+1li+k+l+2i+26N-1j+1k+l+1li+k+l+2i+(-1)^i+k+l+1i+l+1ii+k+l+2k-4N-1j+1i+l+1ii+k+l+2k-2(-1)^i+k+li+l+1li+k+l+2k-44N-2ji+l+1li+k+l+2k-26N-2j+1i+l+1li+k+l+2k-15N-1j+1i+k+1ii+k+l+2l+(-1)^i+k+li+k+1ki+k+l+2l-30N-2j+1i+k+1ki+k+l+2l+5(-1)^i+k+li+l+1ii+k+l+2i+l+1-10N-1j+1i+l+1ii+k+l+2i+l+1+(-1)^i+k+li+l+1li+k+l+2i+l+1-18N-2j+1i+l+1li+k+l+2i+l+1-14(-1)^j+k+lk+l+1lj+k+l+2j-7(-1)^j+k+lj+l+1lj+k+l+2k}
and
η_ijkl^(1)=-η(N)/96{[[(j+k+l+2l+5j+k+l+2j+k+1)(-1)^l+1+3i+l+1i+3i+l+1l](-1)^j+k-17(-1)^i+j+ki+j+k+2i+i+j+k+2j+k+1[13(-1)^i+j+k+54N-2l]]j+k+1j+[-3(-1)^j+ki+l+1i-3(-1)^j+ki+l+1l+17(-1)^j+k+lj+k+l+2l+7(-1)^j+k+lj+k+l+2j+k+1+6N-2ij+k+l+2j+k+1+i+j+k+2i[(-1)^i+j+k+6N-1l+1]+i+j+k+2j+k+1[(-1)^i+j+k+6N-1l+1]]j+k+1k-12(-1)^i+j+ki+j+1ji+j+k+2k+(-1)^j+l+1i+k+1ij+l+1j+(-1)^j+l+1i+k+1kj+l+1j+(-1)^j+l+1i+k+1ij+l+1l+(-1)^j+l+1i+k+1kj+l+1l-3(-1)^i+j+li+l+1ii+j+l+2j-18N-2ki+l+1ii+j+l+2j+18N-2k+1i+l+1ii+j+l+2j-3(-1)^i+j+li+l+1li+j+l+2j+3N-1k+1i+l+1li+j+l+2j+18N-1k+1i+j+1ii+j+l+2l-6(-1)^i+j+li+j+1ji+j+l+2l-30N-2k+1i+j+1ji+j+l+2l+3(-1)^i+j+li+l+1ii+j+l+2i+l+1+12N-2k+1i+l+1ii+j+l+2i+l+1+3(-1)^i+j+li+l+1li+j+l+2i+l+1+3N-1k+1i+l+1li+j+l+2i+l+1+7(-1)^k+li+j+1ik+l+1k-5(-1)^k+li+j+1jk+l+1k+17(-1)^k+li+j+1ik+l+1l-13(-1)^k+li+j+1jk+l+1l-18N-2j+1k+l+1ki+k+l+2i-2(-1)^i+k+lk+l+1li+k+l+2i-10N-1j+1k+l+1li+k+l+2i+(-1)^i+k+l+1i+l+1ii+k+l+2k-5N-1j+1i+l+1ii+k+l+2k+(-1)^i+k+l+1i+l+1li+k+l+2k+10N-2ji+l+1li+k+l+2k-26N-2j+1i+l+1li+k+l+2k+(-1)^i+k+l+1i+l+1ii+k+l+2i+l+1-5N-1j+1i+l+1ii+k+l+2i+l+1-3(-1)^i+k+li+l+1li+k+l+2i+l+1-24N-2j+1i+l+1li+k+l+2i+l+1+4(-1)^j+k+lk+l+1lj+k+l+2j}.
We have checked the correctness of these expressions by comparing with fixed-N computations up to N=14.
§.§.§ κ_ijkl^(2), η_ijkl^(2a) and η_ijkl^(2b)
For this final set of couplings we have the following relations
κ_ijkl^(2)+κ_ijlk^(2) = 0, [anti-symmetry]
κ_ijkl^(2) = κ_jikl^(2), [symmetry of d_4]
η_ijkl^(2a) = 3κ_ij(k+l+1)^(2)k+l+1k+2κ_ijkl^(2), [gBRST (a)]
η_ijkl^(2b) = 2κ_lijk^(2), [gBRST (b)]
η_ijkl^(2a) = -∑_s_1=0^i∑_s_2=0^j∑_s_3=0^k(s_1+s_2+s_3+l)!/s_1!s_2!s_3!l!(-1)^s_1+s_2+s_3+l ×
× η_(i-s_1)(j-s_2)(k-s_3)(s_1+s_2+s_3+l)^(2a), [anti-gBRST (a)]
η_ijkl^(2b) = η_ikjl^(2a) - η_ijkl^(2a) + ∑_s_1=0^i∑_s_2=0^j∑_s_3=0^k(s_1+s_2+s_3+l)!/s_1!s_2!s_3!l!(-1)^s_1+s_2+s_3+l ×
× η_(i-s_1)(j-s_2)(k-s_3)(s_1+s_2+s_3+l)^(2b). [anti-gBRST (b)]
Note that Eqs. (<ref>) and (<ref>) can be combined to express η_ijkl^(2a) in terms of the class III coupling κ_ijk^(2) as
η_ijkl^(2a) + η_ijlk^(2a) = 3κ_ij(k+l+1)^(2)k+l+2k+1.
Using the expression we derived for κ_ijk^(2), cf. Eq. (<ref>), this becomes
η_ijkl^(2a) + η_ijlk^(2a) = 3c{(-1)^i+ji+j+2i+1-(-1)^i+k+li+k+l+3i+1+j+k+l+3j+1[-(-1)^j+k+l+N-1i+1]}k+l+2k+1
with c to be determined. Likewise one can use Eqs. (<ref>) and (<ref>) to write
η_ijkl^(2a) - η_jikl^(2a) = 0.
To obtain this last identity we used the symmetry property of κ_ijk^(2), cf. Eq. (<ref>).
The complete solution of the gBRST constraints in Eqs. (<ref>)-(<ref>) proceeds in complete analogy to the previous cases.
However, similar to the class III couplings in Sec. <ref>, also
κ_ijkl^(2), η_ijkl^(2a) and η_ijkl^(2b)
do not enter in the operator renormalization of physical OMEs up to four loops,
hence we will not consider them further here.
§ FEYNMAN RULES OF ALIEN OPERATORS
In this section we derive the Feynman rules of the alien operators. These were computed up to two loops in <cit.>, and an extension to the three-loop level was recently presented in <cit.>.
The Feynman rules for the gauge-invariant (physical) quark and gluon operators, up to the four-loop level, can be found e.g. in <cit.> and references therein.
The generalization to arbitrary orders in perturbation theory is given in <cit.>[Note that <cit.> also presents the corresponding rules for the operators with total derivatives, relevant for non-zero momentum flow through the operator vertex.].
We assume all momenta to be incoming and the total momentum flowing through the operator vertex to be zero, implying
∑_ip_i = 0.
§.§ Ghost operators
The momenta of the ghost fields are taken to be p_1 and p_2, while p_3,p_4,… denote the momenta of any additional gluons.
As a check, we will compare our Feynman rules against the known ghost vertices with up to two additional gluons, which were computed in <cit.>.
Because of different conventions for the operator definitions, the rules for the ghost vertices in the latter have to be divided by i^N. We can write the perturbative expansion of the ghost operator, cf. Eq. (<ref>), as
with
ε^ab = 1+(-1)^N/2i^Nη(N)δ^ab(Δ· p_1)^N,
ε^ab,c_1_μ = 1+(-1)^N/2i^N-1f^a c_1 b∑_i+j
=N-3η_ij(Δ· p_1)(Δ· p_3)^i(Δ· p_2)^j+1,
ε^ab,c_1 c_2_μν(p_1,p_2,p_3,p_4) = 1+(-1)^N/2i^NΔ_μΔ_ν{(f f)^a c_1 c_2 b∑_i+j+k
=N-4η_ijk^(1)(Δ· p_1)(Δ· p_3)^i(Δ· p_4)^j(Δ· p_2)^k+1 +d_4^a c_1 c_2 b∑_i+j+k
=N-4η_ijk^(2)(Δ· p_1)(Δ· p_3)^i(Δ· p_4)^j(Δ· p_2)^k+1 +d_4ff^a c_1 c_2 b∑_i+j+k
=N-4η_ijk^(3)(Δ· p_1)(Δ· p_3)^i(Δ· p_4)^j(Δ· p_2)^k+1} + [(p_3,μ,c_1)↔ (p_4,ν,c_2)],
ε^ab,c_1 c_2 c_3_μνρ(p_1,p_2,p_3,p_4,p_5) = -1+(-1)^N/2i^N-1{(f f f)^a c_1 c_2 c_3 b∑_i+j+k+l
=N-5η_ijkl^(1)(Δ· p_1)(Δ· p_3)^i(Δ· p_4)^j×(Δ· p_5)^k(Δ· p_2)^l+1+d_4f^a c_1 c_2 c_3 b∑_i+j+k+l
=N-5η_ijkl^(2a)(Δ· p_1)(Δ· p_3)^i(Δ· p_4)^j(Δ· p_5)^k(Δ· p_2)^l+1+d_4f^a b c_1 c_2 c_3∑_i+j+k+l
=N-5η_ijkl^(2b)(Δ· p_1)(Δ· p_3)^i(Δ· p_4)^j(Δ· p_5)^k(Δ· p_2)^l+1} + permutations
where the `+ permutations' in the O(g_s^3) rule in Eq. (<ref>) denotes the fact that all permutations of the gluonic quantities (momenta, Lorentz and colour indices) have to be added. Note that p_2 in ε^ab in Eq. (<ref>) was eliminated using momentum conservation, p_2=-p_1. This then agrees with Eq. (5.20) in <cit.> after dividing the latter by i^N, as discussed above.
Similarly, after performing the summation, the O(g_s) rule exactly matches Eq. (5.21) in <cit.>.
At O(g_s^2), our expression for ε^ab,c_1 c_2_μν(p_1,p_2,p_3,p_4) should be compared against Eq. (5.22) in <cit.>.
With η_ijk^(1) given by Eq. (<ref>) and η_ijk^(2) by Eq. (<ref>), we find exact agreement with that expression [The term in our expression proportional to (f f)^a c_1 c_2 b should be compared to the f^a_1 a_3 af^a_2 a_4 a part of Eq. (5.22) in <cit.> while our d_4^a c_1 c_2 b rule should be compared to the one proportional to d_4^a_1 a_2 a_3 a_4/C_A.].
Finally the Feynman rule for the d_4ff part of the ghost operator is computed using
η_ijk^(3) = 2{(c_1-c_2)(-1)^i+k[i+k+1i-i+k+1k]+(-1)^j+k+1[(c_1+2c_2)j+k+1j+(2c_1+c_2)j+k+1k]+i+j+1i[(2c_1+c_2)(-1)^i+j-c_1N-2k+1]+i+j+1j[c_1(-1)^i+j+2c_2(-1)^i+j-2c_1N-2k-c_1N-2k+1-c_2N-1k+1]+i+k+1i[2c_1N-2j+c_2N-1j+1]-i+k+1k[2c_1N-2j+c_2N-1j+1]+j+k+1j[2c_1N-2i+c_1N-2i+1+c_2N-1i+1]+c_1N-2i+1j+k+1k}
which follows from Eqs. (<ref>) and (<ref>).
As discussed in Sec. <ref>, the free parameters c_1,c_2 can be determined by a computation of fixed-N OMEs.
§.§ Alien gluon operators
Next we derive the Feynman rules for the gluonic EOM operator, whose perturbative expansion can be written as
with
𝒢_μν^c_1 c_2(p_1,p_2) = 1+(-1)^N/2i^Nη(N)δ^c_1 c_2(Δ· p_1)^N-2[2p_1^2Δ_μΔ_ν-(Δ· p_1)(Δ_μp_1ν+Δ_νp_1μ)],
𝒢_μνρ^c_1 c_2 c_3(p_1,p_2,p_3) = -1+(-1)^N/2i^N-1f^c_1 c_2 c_3{η(N)(Δ· p_1)^N-2Δ_μ[p_3νΔ_ρ-g_νρ(Δ· p_3)+Δ_ρ(p_2+p_3)_ν]+Δ_νΔ_ρ[p_1^2Δ_μ-p_1μ(Δ· p_1)]∑_i+j
=N-3κ_ij(Δ· p_2)^i (Δ· p_3)^j }+ permutations,
𝒢_μνρσ^c_1 c_2 c_3 c_4(p_1,p_2,p_3,p_4) = 1+(-1)^N/2i^N-2f^c_1 c_2 xf^x c_3 c_4{[Δ_νΔ_ρΔ_σ(p_1+2p_2)_μ-g_μνΔ_ρΔ_σ(Δ· p_2)]∑_i+j
=N-3κ_ij(Δ· p_3)^i(Δ· p_4)^j-g_νρΔ_μΔ_σ(Δ· p_1)^N-2+[p_1^2Δ_μ-p_1μ(Δ· p_1)]Δ_νΔ_ρΔ_σ∑_i+j+k
=N-4κ^(1)_ijk(Δ· p_2)^i(Δ· p_3)^j(Δ· p_4)^j}+1+(-1)^N/2[p_1^2Δ_μ-p_1μ(Δ· p_1)]Δ_νΔ_ρΔ_σ{d_4^c_1 c_2 c_3 c_4∑_i+j+k
=N-4κ^(2)_ijk(Δ· p_2)^i(Δ· p_3)^j(Δ· p_4)^j+d_4ff^c_1 c_2 c_3 c_4∑_i+j+k
=N-4κ^(3)_ijk(Δ· p_2)^i(Δ· p_3)^j(Δ· p_4)^j}+ permutations,
𝒢_μνρστ^c_1 c_2 c_3 c_4 c_5(p_1,p_2,p_3,p_4,p_5) = 1+(-1)^N/2i^N-1f^c_1 c_2 xf^x c_3 yf^y c_4 c_5{-g_μρΔ_νΔ_σΔ_τ∑_i+j
=N-3κ_ij(Δ· p_4)^i(Δ· p_5)^j+Δ_ρΔ_σΔ_τ[(p_1+2p_2)_μΔ_ν-(Δ· p_2)g_μν]∑_i+j+k
=N-4κ_ijk^(1)(Δ· p_3)^i(Δ· p_4)^j(Δ· p_5)^k+[p_1^2Δ_μ-p_1μ(Δ· p_1)]Δ_νΔ_ρΔ_σΔ_τ∑_i+j+k+l
=N-5κ_ijkl^(1)(Δ· p_2)^i(Δ· p_3)^j(Δ· p_4)^k(Δ· p_5)^l}+1+(-1)^N/2i^N-1d_4f^c_1 c_2 c_3 c_4 c_5{Δ_μΔ_νΔ_ρ[(p_4+2p_5)_σΔ_τ-(Δ· p_5)g_στ]∑_i+j+k
=N-4κ_ijk^(2)(Δ· p_1)^i(Δ· p_2)^j(Δ· p_3)^k+[p_1^2Δ_μ-p_1μ(Δ· p_1)]Δ_νΔ_ρΔ_σΔ_τ∑_i+j+k+l
=N-5κ_ijkl^(2)(Δ· p_2)^i(Δ· p_3)^j(Δ· p_4)^k(Δ· p_5)^l}+ permutations,
where again all permutations of gluon momenta, Lorentz and colour indices have to be added, if indicated by `+ permutations'. Note that p_2 in 𝒢_μν^c_1 c_2(p_1,p_2) in Eq. (<ref>) was again eliminated using momentum conservation. This then agrees with Eq. (5.23) in <cit.> and Eq. (243) in <cit.> after dividing the latter rules by i^N to match to our conventions.
For the O(g_s) EOM vertex three contributions need to be taken into account,
* the non-Abelian part of the field strength in the class I operator with D→∂,
* the O(g_s) part of the covariant derivative acting on the Abelian part of the field strength in the class I operator and
* the class II operator, cf. Eq. (<ref>), with D→∂ and keeping only the Abelian part of the field strength.
Our result matches the corresponding rules in the literature, cf. Eq. (5.24) in <cit.> and Eq. (244) in <cit.> respectively (again after dividing by the overall i^N).
Next the four-gluon vertex gets four contributions,
* the O(g_s) part of the covariant derivative acting on the non-Abelian part of the field strength in the class I operator,
* the non-Abelian part of the field strength in the class II operator with D→∂,
* the O(g_s) part of the covariant derivative acting on the Abelian part of the field strength in the class II operator and
* the class III operator, cf. Eq. (<ref>), with D→∂ and keeping only the Abelian part of the field strength.
The second and third contributions depend on the lower-order coupling κ_ij, while the fourth one is written in terms of the couplings κ_ijk^(1), κ_ijk^(2) and κ_ijk^(3) given by Eqs. (<ref>), (<ref>) and (<ref>) respectively. The (f f) and d_4 parts of our rule agree with Eq. (5.25) in <cit.>[Note however that our result proportional to d_4 needs to be multiplied by a symmetry factor of 1/4! to match Eq. (5.25) in <cit.>.], while the d_4ff part is new.
Finally, as a new result [The corresponding result within the framework of ref. <cit.> was recently announced in a conference talk <cit.>.], we consider the five-gluon vertex
𝒢_μνρστ^c_1 c_2 c_3 c_4 c_5(p_1,p_2,p_3,p_4,p_5)
in Eq. (<ref>).
Again we need to take into account higher-order contributions of the lower-point vertices. In particular, the (f f f) part of the five-gluon rule gets four contributions,
* the O(g_s) part of the covariant derivative acting on the non-Abelian part of the field strength in the class II operator,
* the non-Abelian part of the field strength in the class III operator with D→∂,
* the O(g_s) part of the covariant derivative acting on the Abelian part of the field strength in the class III operator,
* the class IV operator, cf. Eq. (<ref>), with D→∂ and keeping only the Abelian part of the field strength.
On the other hand the d_4f part only gets three contributions,
* the non-Abelian part of the field strength in the class III operator with D→∂,
* the O(g_s) part of the covariant derivative acting on the Abelian part of the field strength in the class III operator and
* the class IV operator, cf. Eq. (<ref>), with D→∂ and keeping only the Abelian part of the field strength.
§.§ Alien quark operators
Finally in this section we provide the Feynman rules for the alien quark operators presented in Eqs. (<ref>)-(<ref>). As mentioned above, these operators are written in terms of the same couplings as those in the gluon EOM operators. Assuming the momenta of the external quark fields to be p_1 and p_2 we have the following perturbative expansion
with
𝒬(p_1,p_2) = 0,
𝒬_ μ^c_1(p_1,p_2,p_3) = -1+(-1)^N/2i^Nη(N)T^c_1Δ_μΔ(Δ· p_3)^N-2,
𝒬_ μν^c_1 c_2(p_1,p_2,p_3,p_4) = [1+(-1)^N]i^N-1T^af^a c_1 c_2Δ_μΔ_νΔ∑_i+j
=N-3κ_ij^(1)(Δ· p_3)^i(Δ· p_4)^j,
𝒬_ μνρ^c_1 c_2 c_3(p_1,p_2,p_3,p_4,p_5) = [1+(-1)^N]i^NT^aΔ_μΔ_νΔ_ρΔ{f^a c_1 xf^c_2 c_3 x∑_i+j+k
=N-4κ_ijk^(1)(Δ· p_3)^i(Δ· p_4)^j(Δ· p_5)^k
+ d_4^a c_1 c_2 c_3∑_i+j+k
=N-4κ_ijk^(2)(Δ· p_3)^i(Δ· p_4)^j(Δ· p_5)^k + d_4ff^a c_1 c_2 c_3∑_i+j+k
=N-4κ_ijk^(3)(Δ· p_3)^i(Δ· p_4)^j(Δ· p_5)^k}+ [(p_3,μ,c_1)↔(p_4,ν,c_2)] + [(p_3,μ,c_1)→(p_5,ρ,c_3)→(p_4,ν,c_2)→(p_3,μ,c_1)]
The vertices with up to two additional gluons can be compared against the results presented in Eqs. (5.17)-(5.19) of <cit.>. Dividing the latter by i^N to match to our conventions, we find exact agreement.
Note that Eqs. (<ref>)-(<ref>) contain an additional factor of two coming from the [(p_4,ν,c_2)↔(p_5,ρ,c_3)] permutation. This directly follows from the (anti-)symmetry properties of the κ-couplings, cf. Eqs. (<ref>), (<ref>) and (<ref>).
Finally, because the κ-couplings enter the quark operator at one order in the strong coupling lower than in the gluon EOM one, we can push the perturbative expansion of the quark operator to one order higher.
Consequently we also present the quark operator vertex at O(g_s^4) with four additional gluons
We find
𝒬^c_1 c_2 c_3 c_4_μνρσ(p_1,p_2,p_3,p_4,p_5,p_6) = -1+(-1)^N/2i^N-1T^aΔ_μΔ_νΔ_ρΔ_σΔ{(f f f)^a c_1 c_2 c_3 c_4∑_i+j+k+l
=N-5κ_ijkl^(1)×(Δ· p_1)(Δ· p_3)^i(Δ· p_4)^j(Δ· p_5)^k(Δ· p_2)^l+1+d_4f^a c_1 c_2 c_3 c_4∑_i+j+k+l
=N-5κ_ijkl^(2)(Δ· p_1)(Δ· p_3)^i(Δ· p_4)^j(Δ· p_5)^k(Δ· p_2)^l+1} + permutations
with all permutations of the gluonic quantities (momenta, Lorentz and colour indices) to be added.
§ CONCLUSIONS
The kernels for parton evolution equations in QCD, i.e. splitting functions or the corresponding anomalous dimensions as their Mellin transforms, can be conveniently determined from the ultraviolet singularities of off-shell Green's functions with insertions of gauge-invariant twist-two spin-N operators.
The renormalization of these OMEs, though, requires the computation of unphysical counterterms for the associated set of alien operators, which effectively describe vertices of two gluons, ghosts or quarks with any number n≥ 0 of additional gluons.
The couplings of these alien operators (EOM and ghost operators) are restricted by the fundamental symmetries, particularly the gBRST relations, which reflect the gauge theory characteristics of QCD.
The set of constraints for these couplings admits explicit solutions, valid for any spin N, which can be obtained using algorithms for symbolic summation to solve the recurrence relations.
A small number of boundary conditions in these solutions can be derived from the computation of the relevant OMEs at specific fixed values of N.
In addition, we have observed that the constraints contain a hierarchy, such that couplings of alien operators with n+1 gluons can be derived from those ones with only n gluons. Thus, the basic ingredients in this bootstrap turn out to be the EOM and ghost operators with the smallest number of additional gluons at a given loop order.
We have provided results for all one-loop alien operator couplings needed in the renormalization of OMEs with physical (gauge-invariant) operators up to four loops, which represents the current frontier in splitting function computations.
This includes in particular the gluon EOM operator with five gluons attached, which is a new result.
The all-N solutions for the couplings that we have obtained can all be related to the fundamental one-loop counterterm η(N) for the EOM and ghost operators of class I involving only two gluons or ghosts.
We have also derived the corresponding Feynman rules and, whenever possible, compared them to those in the literature, finding full agreement.
A Mathematica file with our results for the all-N couplings necessary for the renormalization up to four loops is made available at the preprint server <https://arxiv.org>. We note that the expressions collected in this file have the fundamental one-loop counterterm η(N) divided out.
The symmetries and the structure of the alien operators, that we have exploited in this study, are independent of the order of perturbation theory.
Thus, we expect also analytic all-N solutions beyond one loop for the couplings of the alien operators of class II and higher.
We leave this task to future studies.
§.§ Acknowledgments
The Feynman diagrams in this work are drawn using FeynGame <cit.>.
This work has been supported by the EU's Marie Sklodowska-Curie
grant 101104792, QCDchallenge;
the DFG through the Research Unit FOR 2926,
Next Generation pQCD for Hadron Structure: Preparing for the EIC,
project number 40824754, DFG grant MO 1801/4-2,
the ERC Advanced Grant 101095857 Conformal-EIC;
and by grant K143451 of the National Research, Development and Innovation Fund in Hungary.
JHEP |
http://arxiv.org/abs/2409.02422v1 | 20240904040255 | Mixed Tensor Products, Capelli Berezinians, and Newton's Formula for $\mathfrak{gl}(m|n)$ | [
"Sidarth Erat",
"Arun S. Kannan",
"Shihan Kanungo"
] | math.RT | [
"math.RT"
] |
Mixed Tensor Products, Capelli Berezinians, and Newton's Formula for (m|n)]Mixed Tensor Products, Capelli Berezinians, and Newton's Formula for (m|n)
[2020]17B10, 17B35
S. Erat]Sidarth Erat
Sidarth Erat La Jolla High School La Jolla, CA 92037
[email protected]
A. S. Kannan]Arun S. Kannan
Arun S. Kannan Chicago, IL 60606
[email protected]
S. Kanungo]Shihan Kanungo
Shihan Kanungo Henry M. Gunn High School Palo Alto, CA 94306
[email protected]
§ ABSTRACT
In this paper, we extend the results of Grantcharov and Robitaille in 2021 on mixed tensor products and Capelli determinants to the superalgebra setting. Specifically, we construct a family of superalgebra homomorphisms φ_R : U(𝔤𝔩(m+1|n)) →𝒟'(m|n) ⊗ U(𝔤𝔩(m|n)) for a certain space of differential operators 𝒟'(m|n) indexed by a central element R of 𝒟'(m|n) ⊗ U(𝔤𝔩(m|n)). We then use this homomorphism to determine the image of Gelfand generators of the center of U(𝔤𝔩(m+1|n)). We achieve this by first relating φ_R to the corresponding Harish-Chandra homomorphisms and then proving a super-analog of Newton's formula for 𝔤𝔩(m) relating Capelli generators and Gelfand generators. We also use the homomorphism φ_R to obtain representations of U(𝔤𝔩(m+1|n)) from those of U(𝔤𝔩(m|n)), and find conditions under which these inflations are simple. Finally, we show that for a distinguished central element R_1 in 𝒟'(m|n)⊗ U(𝔤𝔩(m|n)), the kernel of φ_R_1 is the ideal of U(𝔤𝔩(m+1|n)) generated by the first Gelfand invariant G_1.
[
[
September 9, 2024
=====================
§ INTRODUCTION
§.§ The Lie algebra setting
Mixed-tensor type modules, also known as tensor modules or modules of Shen and Larson (see <cit.>), are modules over the tensor product algebra 𝒟'(m) ⊗ U((m)), where 𝒟'(m) is a certain algebra of differential operators on the ring ℂ[t_0^± 1, t_1, …, t_m] and U((m)) is the universal enveloping algebra of the general linear Lie algebra (m). Questions related to mixed-tensor type modules have been well studied over the past 35 years (see <cit.> and references therein). Of interest to us is an algebra homomorphism
U((m+1)) →𝒟'(m) ⊗ U((m))
which enables one to inflate representations from U((m)) to U((m+1)) (see <cit.>). Although this map can be written down algebraically in a straightforward way, there exists a natural geometric interpretation. In particular, it can be thought of as a form of Beilinson-Bernstein localization for parabolic subgroups using a certain maximal parabolic subgroup P ⊂ SL(m+1) such that GL(m) ⊂ P and such that SL(m+1)/P ≅ P^m (see <cit.> and references therein for more details).
In <cit.>, the homomorphism U((m+1)) →𝒟'(m) ⊗ U((m)) is extended to U((m+1)). Moreover, the authors explicitly determine the kernel of this extension as well as the image of the Gelfand generators of the center Z(U((m))) of U((m)). Key ingredients in determining the image of the center are Capelli determinants, Capelli generators, and Newton's formula (for all of these, see <cit.> and <cit.>). Along the way, numerous interesting algebraic identities are also proven; it is expected that these will have implications for the representation theory of tensor modules.
Additionally and importantly to us, the notion of mixed-tensor type modules can be generalized to the setting of Lie superalgebras. Moreover the study of tensor modules in the super setting has received much interest recently (see <cit.> and references therein). This setting is the focus of this paper. In particular, we generalize the results of <cit.> to the super setting for the general Lie superalgebra (m+1|n). Most of the arguments therein generalize, although there are some subtleties that arise from the super setting.
§.§ Results of this paper
First, we construct a superalgebra of differential operators '(m|n) that preserve degree on the superalgebra 𝒞[t_0^± 1, t_1, …, t_m] ⊗Λ_n, where Λ_n is the exterior algebra on n generators with odd parity. In Theorem <ref>, we define via explicit formulas a family of superalgebra homomorphisms
φ_R: U((m+1|n)) →𝒟'(m|n) ⊗ U((m|n))
indexed by central elements R of 𝒟'(m|n) ⊗ U((m|n)). Moreover, for m+1 ≠ n there is a distinguished homomorphism φ_R_1 for a particular central element R_1. The map φ_R_1 is an honest generalization of the map ρ in <cit.> and is “compatible" with a certain projection homomorphism π^g: U((m+1|n)) → U((m+1|n)) and the canonical inclusion in the opposite direction (see Proposition <ref>).
Using the homomorphism φ_R, we can consider inflating irreducible representations of U(𝔤𝔩(m|n)) to U(𝔤𝔩(m+1|n)) by tensoring with a highest-weight module ℱ_a over 𝒟'(m|n) generated by t_0^a (such a module can be defined for any a ∈ℂ) and pulling back via φ_R. A partial criteria for when such representations are irreducible is the combined content of Theorems <ref> and <ref>.
We then take a closer look at the restriction of φ_R to the center of U((m+1|n)). In particular, we show that φ_R interacts nicely with the Harish-Chandra homomorphism in Theorem <ref> and compute the image of the Gelfand generators G_k^(m+1|n) of the center in Theorem <ref>. To do the latter, we use the theory of Yangians. In particular, we define what is meant by a Capelli generator in the super setting (originally due to <cit.>) and then in Theorem <ref> we prove a super Newton's formula (analogous to <cit.>)[Based on Umeda's comments in <cit.>, we expect that our formula is probably already known and seen as a consequence of the “quantum Liouville" formula for Yangians, but we could not find the result nor a proof in the literature and therefore include it in the paper here.], which relates the Capelli generators to the Gelfand generators using what we call a Capelli Berezinian. Then, by computing the images of the Capelli Berezinian under φ_R and applying the super Newton's formula, one can compute the images of Gelfand generators. An alternative, more computational approach is given in Appendix <ref>.
Finally, we show that the kernel of φ_R_1 is generated by the first Gelfand invariant G_1 in Theorem <ref>. All our results, except Theorems <ref> and <ref>, specialize to those of <cit.> when n = 0.
§.§ Organization of this paper
In Section <ref> we review some basics about general linear Lie superalgebras and establish some notation. In Section <ref> we construct the homomorphism φ_R for R ∈ Z(𝒟'(m|n) ⊗ U((m|n))). In Section <ref>, we discuss inflations of representations via the homomorphism φ. In Section <ref>, we relate the restriction of φ to the center of U((m|n)) to the corresponding Harish-Chandra homomorphisms. In Section <ref>, we establish the super Newton's formula and determine the images of the Gelfand generators under φ. Finally, in Section <ref>, we derive the kernel of the map φ, a partial generalization of the results in Section 6 of <cit.>.
In Appendix <ref>, we give the proof that φ_R is a homomorphism, which is a long but straightforward computation. In Appendix <ref>, we give some explicit formulas for the images under φ of a set of homogeneous elements of U((m+1|n)), using a direct computational approach. All the Gelfand generators can be written as sums of elements from this set, so we obtain an alternative proof of Theorem <ref> that does not rely on other sections.
§.§ Future Directions
Let us discuss some further directions of inquiry. First of all, unlike the ordinary setting, we do not know of a geometric interpretation of the map φ but we believe the restriction of this map to U((m+1|n)) should arise from considering a certain “parabolic subgroup” 𝐏⊆ SL(m+1|n). In particular, 𝐏 should be described by the Harish-Chandra pair (𝐏_0 , 𝔭), where the underlying even group 𝐏_0 ⊂𝐏 is given by P as above, and the Lie superalgebra 𝔭 of 𝐏 satisfies 𝔭≅ V ⊕𝔤𝔩(m|n) where V is the tautological representation of 𝔤𝔩(m|n) (see <cit.> for theory of Harish-Chandra pairs for supergroups).
Another question is whether there is some larger structure to the family of maps φ_R. In particular, is there an algebra structure on this set? If so, is there any representation-theoretic interpretation?
Finally, in <cit.> a family of left pseudoinverses are constructed for φ. It would be interesting to see if something analogous can be done in our setting.
§.§ Acknowledgments
This paper is the result of MIT PRIMES-USA, a program that provides high-school students an opportunity to engage in research-level mathematics and in which the second author mentored the first and third authors. The authors would like to thank the MIT PRIMES-USA program and its coordinators for providing the opportunity for this research experience. We would also like to thank Dimitar Grantcharov for suggesting the project idea and for useful discussions. Additionally, we thank Pavel Etingof, Siddhartha Sahi, Tanya Khovanova, and Thomas Rüd for their thoughts and comments.
§ PRELIMINARIES
We shall introduce some background for the Lie superalgebra (m|n) in this section. Throughout the paper we fix nonnegative integers m,n.
§.§ Basic definitions
For a Lie superalgebra 𝔞, by U(𝔞) we denote the universal enveloping superalgebra of 𝔞 and by Z(𝔞) the center of a superalgebra 𝔞. We will always use |·| to denote the parity of a purely homogeneous element, which is 0 if even and 1 if odd (viewed as elements in /2). We will work with general linear Lie superalgebra (m|n), which we define as follows. Let
I {1,…, m,1̅,…,n̅},
Î{0}∪ I
where we impose the total order
0<1 < ⋯ < m < 1̅ < ⋯ < n̅.
We also define a parity function | · | on Î, where the parity of indices without an overline is 0 and that of indices with an overline is 1. Now, let ℂ^m+1|n denote the super vector space with basis {e_0, e_1,…, e_m, e_1̅, …, e_n̅}, where the parity of a basis vector is given by the parity of its index. The general linear Lie superalgebra (m+1|n) is by definition the space of all linear maps on ℂ^m+1|n and can be thought of as super matrices of the block form
[ [ A B; C D ] ],
where the matrix A is (m+1) × (m+1), the matrix B is (m+1) × n, the matrix C is n × (m+1), and the matrix D is n × n. The underlying even Lie algebra is (m+1) ⊕(n) and corresponds to matrices with B = 0 and C = 0, and the odd part corresponds to matrices such that A and D are zero. The Lie bracket on purely homogeneous elements x, y ∈(m+1|n) is given by
[x,y] xy - (-1)^|x||y|yx
where the multiplication is usual matrix multiplication. The supertrace of a matrix of the form in (<ref>)is defined to be
([ [ A B; C D ] ]) = (A) - (D),
where denotes the usual trace of a matrix. The supertrace is a Lie superalgebra homomorphism, so by (m+1|n) we will denote the special linear Lie superalgebra whose elements are the kernel of . Our standard basis for (m+1|n) will be given by the usual elementary matrices {e_ij}_i,j ∈Î. Finally, we will always view (m|n) as the Lie sub-superalgebra of (m+1|n) spanned by {e_ij}_i,j ∈ I. We emphasize this point as the reader may be confused by the notation in what is to come if otherwise unaware.
A basis for (m+1|n) is
{e_ij: i,j∈Î, i j}∪{e_00-(-1)^|i|e_ii: i∈ I}
and we shall view (m|n) as the elements (m+1|n) that also lie in (m|n) under the inclusion above. Some other important subalgebra for (m+1|n) will be _m+1|n, the Cartan subalgebra of diagonal matrices in (m+1|n), and 𝔫^-_m+1|n and 𝔫^+_m+1|n, the subalgebras of strictly lower and upper triangular matrices in (m+1|n), respectively. We define the analogs of these subalgebras for (m|n) in the obvious way.
For a square matrix A and a variable or constant v, the expression A+v should be understood as the sum of A and the scalar matrix of the same size as A having v on the diagonal. Finally, let δ_kl denote the Kronecker delta, which evaluates to 1 if k=l and 0 otherwise.
In Sections <ref> and in Appendices <ref> and <ref>, a and b denote elements of I. In all sections, i and j denote elements of I unless specified otherwise.
§.§ Weights and Root Systems
Let {δ_0, δ_1,…,δ_m,δ_1̅,…, δ_n̅} denote the basis of 𝔥_m+1|n^* dual to the canonical basis {e_00, e_11,…, e_mm,e_1̅1̅,…, e_n̅n̅} of _m+1|n^*. Whenever convenient, for 1 ≤ i ≤ n, we will write
ϵ_i:=δ_i̅.
We will refer to elements of 𝔥_m+1|n^* as weights, and will often write any weight λ = ∑_i ∈Îλ_i δ_i as a tuple (λ_0,λ_1, …, λ_m, λ_1̅, …, λ_n̅). The bilinear form (·, ·): 𝔥_m+1|n×𝔥_m+1|n⟶ℂ given by the supertrace (x, y) str(xy) naturally induces a bilinear form on 𝔥_m+1|n^*, which will also be denoted by (·, ·). For i,j∈Î, we have
(δ_i, δ_j) = (-1)^|i||j|δ_ij.
We call x ∈(m+1|n) a root vector if there exists a nonzero α∈_m+1|n^* if for all h ∈_m+1|n we have [h, x] = α(h)x. The weight α is called a root, and set of all roots is called the root system. It is well known that a root system for (m+1|n) is given by Φ{δ_i - δ_j}_i ≠ j ∈Î, with a root vector corresponding to δ_i - δ_j being e_ij. Let Φ_0̅ and Φ_1̅ be the even and odd roots in Φ, respectively, where a root is even if the corresponding root vector is even (resp. odd). It is easily seen that
Φ_0̅ = {δ_i - δ_j }_1 ≤ i ≠ j ≤ m∪{_i - _j }_1 ≤ i ≠ j ≤ n,
Φ_ = {±(δ_i - _j)}_1 ≤ i ≤ m, 1 ≤ j ≤ n.
A root α∈Φ is said to be isotropic if (α, α) = 0. The set of odd roots coincides with the set of isotropic roots for the general linear Lie superalgebras. Let Φ^+ = {δ_i - δ_j | i < j ∈Î} be a positive system, and restrict this notation to the even and odd roots in the obvious way. Lastly, let 𝒲_m+1|n≅ S_m+1× S_n denote the Weyl group of (m+1|n) with natural action on 𝔥_m+1|n^*, i.e, S_m+1 permutes {δ_0, …, δ_m} and S_n permutes {ϵ_1, …, ϵ_n}.
Furthermore, we can define for any α∈Φ_ the corresponding coroot α^∨∈𝔥_m+1|n such that
⟨λ, α^∨⟩ = 2(λ, α)/(α, α) ∀λ∈𝔥_m+1|n^*.
The simple reflection s_α acts on 𝔥_m+1|n^* as expected: s_α(λ) = λ - ⟨λ, α^∨⟩α. Define the Weyl vector ρ_m+1|n as follows:
ρ_m+1|n∑_i=0^m (m-i+1)δ_i - ∑_j=1^n jϵ_j = (m+1,m,…, 1, -1,-2,…, -n).
Note that our Weyl vector is a shifted version of the standard definition, which is half the sum of the positive even roots minus half the sum of the positive odd roots, but all this changes is that the Harish-Chandra homomorphism gets shifted. A weight λ∈_m+1|n^* is said to be antidominant if ⟨λ + ρ_m+1|n, α^∨⟩∉ℤ_> 0 for all α∈Φ^+_0. An element λ∈_m+1|n^* is said to be typical (relative to Φ^+) if there is no positive isotropic α∈Φ_ such that (λ + ρ_m+1|n, α) = 0.
For a weight λ = (λ_0,…, λ_m, λ_1̅,…, λ_n̅) of (m+1|n), denote by M(λ) and L(λ) the Verma module of highest weight λ and its unique simple quotient, respectively. Here
M(λ) = U((m+1|n))⊗_U(_m+1|n⊕^+_m+1|n)ℂ_λ,
where ℂ_λ is the one-dimensional weight representation of _m+1|n with weight λ extended trivially to ^+_m+1|n.
Finally (and very importantly to avoid confusion later on), to be consistent with our viewing (m|n) as the subalgebra of (m+1|n) spanned by (e_ij)_i,j∈ I, we restrict all of the above definitions to (m|n) in the obvious way so that everything is compatible with the inclusion (m|n) (m+1|n) (with the “extra" index being 0). For instance,
* we will simply write a weight λ∈_m|n^* of (m|n) as an (m+n)-tuple, understanding that λ_0 = 0 under the inclusion _m|n^* 𝔥_m+1|n^*;
* the Weyl group _m|n is S_m × S_n and the Weyl vector ρ_m|n is
ρ_m|n (m,m-1,…, 1, -1,-2,…, -n);
* a Verma module M(λ) and its simple quotient L(λ) can either mean such a (m+1|n)-module with highest weight λ or such a (m|n)-module with highest weight λ (not by restriction). Context will make it clear to which we refer (using a (m+n)-tuple will mean interpret it as a module over (m|n), for instance).
§.§ Superalgebra of Differential Operators
In this subsection, we define the superalgebra '(m|n) of differential operators.
First, we define the superalgebra of polynomials that '(m|n) operates on.
Define [t_0^± 1,t_1,…, t_m] ⊗Λ_n, where Λ_n is the n-generator exterior algebra with odd generators t_1̅,…, t_n̅.
The defining relations of Λ_n are given by
t_it_j = - t_jt_i
for i,j∈{1̅,…,n̅}. This anticommutativity, combined with the fact that [t_0^± 1,t_1,…, t_m] is commutative, means that is supercommutative, i.e. that is x and y are homogeneous, then xy = (-1)^|x||y|yx. A basis for ℱ is given by vectors of the form t_0^k_0t_1^k_1⋯ t_m^k_mt_1̅^k_1̅⋯ t_n̅^k_n̅, where k_0 ∈ℤ, k_1, …, k_m ∈_≥ 0, and k_1̅, … k_n̅∈{0,1}. For convenience, we will drop the tensor symbol in the notation and also freely permute the t_i's in accordance with the supercommutativity rules.
Next, we define the operation ∂∂ t_i. We write ∂_i as shorthand for ∂∂ t_i.
Consider an element t_0^k_0t_1^k_1⋯ t_m^k_mt_1̅^k_1̅⋯ t_n̅^k_n̅∈. To operate by ∂_i, we first move t_i^k_i to the front using supercommutativity. Then replace t_i^k_i by k_it_i^k_i-1. Extend to the whole of by linearity and supercommutativity.
Then, we define the superalgebra of differential operators '(m|n) to be the superalgebra generated by: left multiplication by t_i/t_0 for i∈Î, where we will abuse notation and simply write t_i/t_0 for this operation; and t_0∂_i for i∈Î, which consists of left applying the i'th derivative and then left multiplication by t_0. The parity of these is operators is given by the parity of i ∈Î. We define the element
∑_i ∈Î(t_i/t_0)(t_0∂_i) =∑_i∈Î t_i∂_i ∈'(m|n).
The following lemma collects some identities in '(m|n). The proofs are simple so we omit them.
The superalgebra '(m|n) of differential operators on [t_0^± 1,t_1,…, t_m] ⊗Λ_n generated by t_i/t_0 for i∈ I and t_0∂_i for i∈Î satisfies the following properties:
* We have
t_it_j = (-1)^|i||j|t_jt_i,for i,j∈Î
∂_it_j = (-1)^|i||j|t_j∂_i, for i, j∈Î, and i j
∂_i∂_j = (-1)^|i||j|∂_j∂_i. for i,j∈Î, and i j
* We have
[∂_a, t_a]=∂_at_a - (-1)^|a|t_a∂_a= 1
for a∈Î.
* The element is central in '(m|n) and if p = t_0^e_0t_1^e_1⋯ t_m^e_mt_1̅^e_1̅⋯ t_n̅^e_n̅, then p=( p)p, where p = e_0 + e_1+⋯ + e_m + e_1̅+⋯ + e_n̅.
* Z('(m|n)⊗ U((m|n))) = [] ⊗ Z(U((m|n))).
§ THE HOMOMORPHISM TEXT
In this section, we construct the homomorphism
φ_R: U((m+1|n))→'(m|n)⊗ U((m|n))
that extends the homomorphism ρ in <cit.>.
For any element R in Z('(m|n)⊗ U((m|n)))= [] ⊗ Z(U((m|n))), the correspondence given by
e_ab ↦ t_a∂_b⊗ 1 + 1⊗ e_ab+δ_ab(-1)^|a||b|R, a,b∈ I
e_a0 ↦ t_a∂_0⊗ 1 -∑_j∈ I (-1)^|a||j|t_j/t_0⊗ e_aj, a∈ I
e_0b ↦ t_0∂_b⊗ 1, b∈ I
e_00 ↦ t_0∂_0 ⊗ 1 +R
extends by the universal property to a homomorphism φ_R: U((m+1|n))→'(m|n)⊗ U((m|n)).
We defer the proof to Appendix <ref> as it a long but straightforward verification of commutators. We will write φ as shorthand for φ_R if we are referring to an arbitrary R.
Now, let φ^s denote the restriction of φ_R to U((m+1|n)). The formulas for φ_R give us the following:
The correspondence given by
e_ab ↦ t_a∂_b⊗ 1 + 1⊗ e_ab, a,b∈ I and a b
e_a0 ↦ t_a∂_0⊗ 1 -∑_j∈ I (-1)^|a||j|t_j/t_0⊗ e_aj, a∈ I
e_0b ↦ t_0∂_b⊗ 1,b∈ I
e_00-(-1)^|a|e_aa ↦ (t_0∂_0-(-1)^|a|t_a∂_a)⊗ 1 - 1⊗ (-1)^|a|e_aaa∈ I
extends by the universal property to a homomorphism φ^s: U((m+1|n))→'(m|n)⊗ U((m|n)) (notice this doesn't depend on the choice of R).
Let
G_1^(m|n)∑_i∈ I e_ii,
G_1^(m+1|n)∑_i∈Î e_ii
be the first Gelfand invariants of (m|n) and (m+1|n), respectively (for the full definition of the Gelfand invariants see Section <ref>). Note that they are both central in the corresponding universal enveloping algebras.
Now, let us define the following homomorphisms:
* the natural embedding
ι^s: U((m+1|n))→ U((m+1|n));
* (for m+1 n) the projection
π^g: U((m+1|n))→ U((m+1|n))
defined by
π^g(B)= B-1/m+1-n(B)G_1^(m+1|n)
for B∈(m|n) and the universal property;
* and the map
ι^g: U((m|n))→ U((m+1|n))
defined by ι^g(C) = C-(C)e_00 for C ∈(m|n) and the universal property.
For the following proposition, let us suppose m+1 n and set
R_1 -1/m-n+1(⊗ 1+1⊗ G_1^(m|n)).
From the formulas we deduce that φ_R_1 = φ^s ∘π^g (note that R_1 is a super-analog of the R_1 defined in <cit.>). Let us also define
γ: U((m+1|n))→'(m|n)⊗ U((m+1|n))
by γ (1⊗ι^g) ∘φ^s.
For m+1 n, we have γ = (1 ⊗ι^g)φ^s, π^gι^s =, and
φ^sπ^g = φ_R_1, and all other relations that directly follow from these three; in that sense, the following diagram is commutative (in the category of superalgebras).
[shorten >=1pt,node distance=4cm,on grid,auto]
every state=[fill=rgb:black,1;white,10]
(q_0) U((m+1|n));
(q_1) at (6,0) '(m|n)⊗ U((m|n));
(q_2) at (6,-2.5) '(m|n)⊗ U((m+1|n));
(q_3) at (0,-2.5) U((m+1|n));
(p_3) at (-0.2,-2.25) ;
(p_0) at (-0.2,-0.15) ;
[->,thick,-Stealth[width=5pt, length=10pt]]
(q_0) edge node φ_R_1 (q_1)
edge node π^g (q_3)
(q_1) edge node 1⊗ι^g (q_2)
(q_3) edge node φ^s (q_1)
edge node γ (q_2)
(p_3) edge node ι^s (p_0);
Most of the diagram still holds if instead of φ_R_1 we take any φ_R for any R ∈ Z(𝒟'(m|n) ⊗ U(𝔤𝔩(m|n))). The only change to the diagram is that the π^g arrow is deleted (and the assumption m+1 ≠ n can be relaxed).
§ INFLATING REPRESENTATIONS
In this section we use φ (recall that φ is shorthand for φ_R) to extend a simple representation of U((m|n)) to a representation of U((m+1|n)) and find conditions for when this new representation is simple.
For any a ∈ℂ, define the '(m|n)-module
_a {t_0^a-k_1-⋯ - k_n̅t_1^k_1⋯ t_n^k_m t_1̅^k_1̅⋯ t_n̅^k_n̅| k_1,…, k_m∈_≥ 0, k_1̅,…, k_n̅∈{0,1}}.
This is a generalization of the _a defined in <cit.>, both in the sense that it is defined for superalgebras and in that we do not require a∈.
Note that _a = '(m|n)(t_0^a) (i.e. the set that results when we apply elements of '(m|n) to t_0^a) and that = a on _a. Given a weight λ of (m|n), we write _a⊗ L(λ) (and _a⊗ M(λ)) for both the representation of '(m|n)⊗ U((m|n)) and the representation U((m+1|n)) via φ, and the meaning should be clear from context. We fix a highest weight vector v_λ of L(λ).
Let us now consider inflating the simple module L(λ) using φ. In particular, we investigate when the resulting module is simple.
Let v_λ be the unique highest weight vector for L(λ), up to scaling. The vector t_0^a⊗ v_λ is the only highest weight vector (up to scaling) for _a⊗ L(λ), considered as a U((m+1|n))-module.
For 0<i<j, we have that e_ij acts as t_i∂_j⊗ 1 + 1⊗ e_ij. Now t_i∂_j(t_0^a) = 0 and e_ij acts on v_λ as 0 since i<j, so e_ij acts on t_0^a⊗ v_λ as 0. Also, for each j > 0, e_0j acts on t_0^a⊗ v_λ as t_0∂_j⊗ 1. Now t_0∂_j (t_0^a)=0, so e_0j also acts as 0. It follows that t_0^a⊗ v_λ is a highest weight vector.
Now suppose
w = p_1⊗ w_1+⋯ p_k⊗ w_k
is a highest weight vector, where
p_c=t_0^a-k_1,c-k_2,c-⋯ - k_m,c-k_1̅,c-⋯ - k_n̅,ct_1^k_1,c⋯ t_n^k_m,c t_1̅^k_1̅,c⋯ t_n̅^k_n̅,c,
where k_1,c,…, k_m,c∈_≥ 0 and k_1̅,c,…, k_n̅,c∈{0,1}. Furthermore assume the p_c are distinct and the w_c are nonzero.
Now e_0j (for j∈ I), which acts as t_0∂_j⊗ 1, must act as zero on w. Pick the p_c that is t-lexicographically maximal (i.e. relative to the t_1-degree, t_2-degree, and so on). Then, e_01^k_1,c⋯ e_0n̅^k_n̅,c acts as zero on all p_d⊗ w_d except for p_c⊗ w_c, on which it gives a nonzero result. Thus e_01^k_1,c⋯ e_0n̅^k_n̅,c does not act as zero on w, a contradiction unless p_c = t_0^a, in which case k=1 and we can write w = t_0^a⊗ w.
Then, for i<j, e_ij acts as t_i∂_j⊗ 1 + 1 ⊗ e_ij, and this must act as zero on w. First, note that the t_i∂_j⊗ 1 part acts as zero. Thus we must have e_ij acting on w as 0 for all i<j, i.e. w is a highest weight vector of L(λ), so w is a scalar multiple of v_λ. Thus, t_0^a ⊗ v_λ is the only the highest weight vector (up to scaling) of _a ⊗ L(λ), considered as a U((m+1|n))-module.
We can write R in the form R = ∑_iℰ^i ⊗ z_i for some z_i ∈ Z(U(𝔤𝔩(m|n))) by Lemma <ref>. A direct computation shows that R acts as a scalar on ℱ_a ⊗ L(λ), which we will call r:
R·(∑_j f_j ⊗ v_j) = (∑_iℰ^i ⊗ z_i)(∑_j f_j ⊗ v_j) = ∑_i,j ( f_j)^i f_j ⊗χ_λ(z_i)v_j
= (∑_i a^i χ_λ(z_i) )(∑_j f_j ⊗ v_j) = r(∑_j f_j ⊗ v_j),
where f_j ∈ℱ_a, v_j ∈ L(λ) are arbitrary, and χ_λ is the central character corresponding to λ (see Section <ref>). It follows that the weight for t_0^a⊗ v_λ is
λ (a + r, λ_1 + r, …, λ_m + r, λ_1̅ - r, …, λ_n̅ -r).
Next, we find conditions on λ for t_0^a⊗ v_λ to generate the module _a⊗ L(λ). For i∈ I, define
f(i) ∑_j<i (-1)^|j|,
where the sum runs over j∈ I with j<i (recall the ordering on I defined by 1<2<⋯ < m< 1̅<⋯ < n̅). Then f(1)=0, f(2)=1,…, f(m)=m-1, f(1̅) = m, f(2̅)= m-1,…, f(n̅) = m+1-n.
Let λ be a (m|n)-weight. If a+f(i)-(-1)^|i|λ_i is not a nonnegative integer for all i ∈ I, then _a ⊗ L(λ) is a highest weight module.
Let N be the submodule generated by the highest weight vector t_0^a⊗ v_λ. We first claim that all monomials in ℱ_a, and thus all elements of ℱ_a, create an element of N when tensored with v_λ, the highest weight vector for L(λ). We proceed by induction down on the exponent of t_0. Specifically, assume that T ⊗ v_λ= t_0^a_0S ⊗ v_λ = t_0^a_0t_1^a_1⋯ t_n̅^a_n̅⊗ v_λ∈ N for a_0 + a_1+⋯ + a_n̅= a, and a_i∈ for |i|=0 and a_i∈{0,1} for |i|=1. We claim that (t_i/t_0 ⊗ 1)(T ⊗ v_λ) ∈ N for any i∈ I. To do this we induct on i.
First look at the case i = 1. We have
e_10· (t_0^a_0S⊗ v_λ) = (t_1∂_0 ⊗ 1 - ∑_jt_j/t_0⊗ e_1j)(t_0^a_0S ⊗ v_λ) = a_0t_1t_0^a_0-1S ⊗ v_λ- t_1t_0^a_0-1S ⊗λ_1 v_λ.
which means (t_1/t_0⊗ 1)(T⊗ v_λ) is in N as long as a_0 - λ_1 0. But since a-λ_1 is not a nonnegative integer, and a-a_0 is a nonnegative integer, a_0-λ_1 0.
Thus the base case is proved.
For general i 0, note that
e_i0· (t_0^bS ⊗ v_λ) = (t_i∂_0 ⊗ 1 - ∑_j (-1)^|i||j|t_j/t_0⊗ e_ij)(t_0^bS ⊗ v_λ)
= bt_it_0^b-1S ⊗ v_λ - (-1)^|i|λ_it_it_0^b-1S ⊗ v_λ
=- ∑_j < i (-1)^|i||j| + (|i|+|j|)(|t_0^bS|)t_jt_0^b-1S ⊗ (e_ij· v_λ).
In addition, for 0<j<i, if |j| = 0, we have
e_ij· (t_jt_0^b-1S ⊗ v_λ) = (t_i∂_j ⊗ 1)(t_jt_0^b-1S ⊗ v_λ) + (1⊗ e_ij)(t_jt_0^b-1S ⊗ v_λ)
=(1+a_j) t_it_0^b-1S ⊗ v_λ + (-1)^|i||j| + |j| + (|i|+|j|)(|t_0^bS|) (t_jt_0^b-1S ⊗ (e_ij· v_λ))
= (1+a_j) t_it_0^b-1S ⊗ v_λ + (-1)^|i||j| + (|i|+|j|)(|t_0^bS|) (t_jt_0^b-1S ⊗ (e_ij· v_λ)).
If |j| = 1, then
-e_ij· (t_jt_0^b-1S ⊗ v_λ) = -(t_i∂_j ⊗ 1)(t_jt_0^b-1S ⊗ v_λ) - (1⊗ e_ij)(t_jt_0^b-1S ⊗ v_λ)
=-(1-a_j) t_it_0^b-1S ⊗ v_λ - (-1)^|i||j| + |j| + (|i|+|j|)(|t_0^bS|) (t_jt_0^b-1S ⊗ (e_ij· v_λ))
= -(1-a_j) t_it_0^b-1S ⊗ v_λ + (-1)^|i||j| + (|i|+|j|)(|t_0^bS|) (t_jt_0^b-1S ⊗ (e_ij· v_λ)).
since if a_j = 0, then (t_i∂_j)(t_jt_0^b-1S) = t_it_0^b-1 and if a_j=1, then t_jt_0^b-1S=0 since t_j^2 = 0.
Also, both of these are in N because t_jt_0^b-1S⊗ v_λ∈ N, from the inductive hypothesis.
Adding these equations (for all j<i) to the first gives us that
(b+∑_j<i
|j| = 0(1+a_j) + ∑_j<i
|j| = 1 (a_j-1) - (-1)^|i|λ_i)t_it_0^b-1S⊗ v_λ∈ N.
Now we show that the expression in parentheses (call it A) is nonzero. Note
A = b+∑_j<ia_j + f(i)-(-1)^|i|λ_i = b+∑_j∈ Ia_j +f(i) - (-1)^|i|λ_i- K = a+f(i) - (-1)^|i|λ_i- K,
where K is some nonnegative integer. Since a+f(i)-(-1)^|i|λ_i is not a nonnegative integer, it follows that A 0
so t_it_0^b-1S⊗ v_λ∈ N.
Now, we show that p⊗ (e_ij· v_λ)∈ N, for any p∈_a and i,j∈ I. We have
e_ij· (p⊗ v_λ) = (t_i∂_j)(p)⊗ v_λ + p⊗ (e_ij· v_λ) + δ_ij(-1)^|i||j|R(p⊗ v_λ),
and this element is in N. Now R acts as r (see the discussion preceding (<ref>)), so R(p⊗ v_λ)∈ N and (t_i∂_j)(p)⊗ v_λ∈ N from the first part of the proof. Thus p⊗ (e_ij· v_λ) ∈ N for all p∈_a and i,j∈ I.
Repeating this process multiple times shows that p⊗ (e_i_1 j_1e_i_2j_2⋯ e_i_kj_k· v_λ)∈ N for any k. Since v_λ generates L(λ), this shows p⊗ w∈ N for any p∈_a and w∈ L(λ). Thus N = _a ⊗ L(λ).
Using the following theorem, we can use the previous theorem to find conditions when _a⊗ L(λ)≅ L(λ).
For a weight λ of (m|n), we have _a⊗ L(λ)≅ L(λ) as (m+1|n)-modules if and only if _a⊗ L(λ) is generated by the highest weight vector t_0^a ⊗ v_λ.
First note that by Proposition <ref>, showing _a⊗ L(λ)=L(λ) is equivalent to showing _a⊗ L(λ) is simple. Let N denote the submodule of _a ⊗ L(λ) generated by t_0^a ⊗ v_λ. If N_a⊗ L(λ), it is obvious that _a⊗ L(λ) cannot be simple. Now we prove the “if” direction.
Since N=_a⊗ L(λ), we know t_0^a⊗ v_λ generates _a⊗ L(λ). Let x be a nonzero element of _a⊗ L(λ). Let N(x) be the module generated by x. We will show that t_0^a⊗ v_λ∈ N(x), so N(x)=_a⊗ L(λ), thereby showing _a⊗ L(λ) is simple, since x is an arbitrary element.
Write
x = c_1p_1⊗ w_1+⋯ c_kp_k⊗ w_k,
where
p_c=t_0^a-k_1,c-k_2,c-⋯ - k_m,c-k_1̅,c-⋯ - k_n̅,ct_1^k_1,c⋯ t_n^k_m,c t_1̅^k_1̅,c⋯ t_n̅^k_n̅,c,
where k_1,c,…, k_m,c∈_≥ 0 and k_1̅,c,…, k_n̅,c∈{0,1}, w_c = e_i_1,cj_1,ce_i_2,cj_2,c⋯ e_i_d,cj_d,c· v_λ, and finally c_1,… c_k∈_ 0. We claim that a nonzero element of the form
x' = c_1't_0^a⊗ w_1'+⋯ c_k''t_0^a⊗ w_k''
exists in N(x). For x in the form (<ref>), we define
S(x) =∑_i∈ I∑_ℓ=1^k k_i,ℓ.
Note S(x) is defined uniquely for all x and S(x) is a nonnegative integer. Additionally, S(x')=0 if and only if x' is in the form (<ref>). If S(x) = 0, we are done.
Otherwise, there exists k_j,ℓ>0 for some j,ℓ. Applying e_0j to x (where we have e_0j· x = (t_0∂_j)x), we see that S(e_0j· x)< S(x), so S(·) decreases under this action. Also, note that e_0j· x is nonzero since k_j,ℓ>0. If S(e_0j· x)=0, we are done. Otherwise we can do the same process to e_0j· x, finding e_0j' e_0j· x 0 with S(e_0j' e_0j· x)< S(e_0j· x). If S(e_0j' e_0j· x)=0 we are done. Otherwise we can just keep continuing this process, which must terminate since S is always an integer.
Then it follows that we can find x' ∈ N(x) which can be written as (<ref>). We can then write x' as
x' = t_0^a⊗ w,
where w∈_a⊗ L(λ) and w 0.
Next, we show that if t_0^a⊗ w'∈ N(x), then t_0^a⊗ (e_ij· w')∈ N(x), for i,j∈ I. We have
e_ij· (t_0^a⊗ w') = (t_i∂_j)· (t_0^a⊗ w') + (1⊗ e_ij)· (t_0^a⊗ w') + δ_ij(-1)^|i||j|R· (t_0^a⊗ w')
= ± t_0^a⊗ (e_ij· w') + s(t_0^a⊗ w')
for a constant s∈. Subtracting s(t_0^a⊗ w'), we get that t_0^a⊗ (e_ij· w')∈ N(x).
Let N'(w) be the submodule generated by w in L(λ).
Since x' = t_0^a⊗ w∈ N(x), from the proceeding argument, it follows that t_0^a⊗ N'(w)⊂ N(x). But since L(λ) is simple, it follows that N'(w)=L(λ). In particular, v∈ N'(w), so t_0^a⊗ v_λ∈ N(x). Thus N(x)=_a⊗ L(λ), so _a⊗ L(λ) is simple.
Let λ be a (m|n)-weight. If (λ+ρ_m+1|n, δ_0 - δ_i)∉_>0 for 1≤ i≤ m and (λ+ρ_m+1|n, δ_0 - δ_i)∉_≥ 0 for 1̅≤ i ≤n̅, then _a⊗ L(λ) is simple, which implies _a⊗ L(λ)≅ L(λ).
We show that the conditions given are equivalent to the conditions in Theorem <ref>. Letting ρ = ρ_m|n, note that f(i)+1 = ρ_0 - ρ_i for 1≤ i≤ m and f(i) = ρ_0 + ρ_i for 1̅≤ i≤n̅, where ρ = ρ_m+1|n. It follows that the condition a+f(i)-(-1)^|i|λ_i not being a nonnegative integer is equivalent to the conditions stated in the corollary. Then the result follows by Theorem <ref> and Theorem <ref>.
Let us recall the following fact: if λ is both antidominant and typical, then M(λ) is simple, see for reference <cit.> or <cit.>. Our result will be a consequence of Corollary <ref>.
Let _m|n(a) denote the set of all typical antidominant (m|n)-weights λ such that λ_i-(-1)^|i|a∉ for i∈ I. If λ∈_m|n(a), then _a⊗ M(λ), considered as a (m+1|n)-representation, is isomorphic to M(λ).
Since λ is antidominant and typical, L(λ)=M(λ).
For 1≤ i<j≤ m or 1̅≤ i<j≤n̅, we have (λ+ρ_m+1|n)_i - (λ+ρ_m+1|n)_j = (λ+ρ_m|n)_i-(λ+ρ_m|n)_j∉_>0. Additionally, note that for 1≤ i≤ m, (λ + ρ_m+1|n)_0 - (λ + ρ_m+1|n)_i = a + i - λ_i ∉_>0 since a-λ_i ∉. Thus λ is antidominant.
For 1≤ i≤ m and 1̅≤ j≤n̅, we have
(λ + ρ_m|n,δ_i-δ_j) =(λ+ρ_m+1|n)_i + (λ+ρ_m+1|n)_j = (λ+ρ_m|n)_i + (λ+ρ_m|n)_j 0
since λ is typical. Finally, for 1̅≤ j≤n̅, we have
(λ + ρ_m|n,δ_0-δ_j) =(λ+ρ_m+1|n)_0 + (λ+ρ_m+1|n)_j = a+m+1 + λ_j -j 0
because a+λ_j is not an integer. Therefore λ is typical as well as antidominant.
Thus M(λ)=L(λ) is simple. Then note that λ satisfies the conditions in Corollary <ref>, so the result follows.
Now suppose ξ∈𝔥_m+1|n^* such that (ξ+ρ_m+1|n, α^∨) ∉_> 0 for α∈Φ_^+ and (ξ+ρ_m+1|n, α)∉_≥ 0 for α∈Φ_^+. Note that the first condition is simply that ξ is antidominant and the second implies ξ is typical. Then inductively apply Corollary <ref> to use the modules _a to “build” the representation L(ξ) for ξ∈_m+1|n^*. The conditions on ξ allow us to apply Corollary <ref> iteratively (with R=0) to get a map
U((m+1|n)) →'(m|n)⊗⋯⊗'(0|n)⊗'(n-1|0) ⊗⋯⊗'(0|0)
that gives us a (m+1|n)-representation
ℱ(ξ) _ξ_0⊗_ξ_1⊗⋯⊗_ξ_n̅
that is isomorphic to L(ξ).
(Note that when constructing the chain have to use the obvious isomorphism from U((0|n)) to U((n|0)) given by e_i̅j̅↦ e_ij).
But ξ is antidominant and typical, so ℱ(ξ) is isomorphic to M(ξ) as well. This applies to substantially more general setting than <ref>, and motivates the following conjecture:
There is an isomorphism of U((m+1|n))-modules ℱ(ξ)≅ M(ξ) for all weights ξ∈_m+1|n^*.
Further supporting evidence is that the modules have the same formal supercharacters, which is verified in a manner analogous to the proof of Theorem 4.2, page 6 in <cit.>.
§ IMAGES UNDER HARISH-CHANDRA HOMOMORPHISMS
In this section, we relate the restriction of φ to the center of U((m+1|n)) with the Harish-Chandra homomorphisms.
For M∈{m+1,m}, recall ρ_M|n = (M,…, 1, -1, …, -n). If b=(b_1,…, b_M, b_1̅,…, b_n̅)∈^M|n, then the evaluation homomorphism _b: [l_1,…, l_M,l_1̅,…, l_n̅]→ is defined by
_b(p) p(b_1,…, b_M,b_1̅,…, b_n̅).
Every z∈ Z(U((m|n)) acts on L(λ) as χ_λ(z), where χ_λ(z) = _λ+ρ_m|n(χ_m|n(z)) and χ_m|n: Z(U((m|n)))→[ℓ_1,…,ℓ_m, ℓ_1̅,…, ℓ_n̅]^_m|n is the Harish-Chandra homomorphism (see Theorem 13.1.1 (a) in <cit.>). We similarly define χ_m+1|n: Z(U((m+1|n)))→[ℓ_0,…,ℓ_m, ℓ_1̅,…, ℓ_n̅]^_m+1|n.
Next, we define χ_0,m|n: [] ⊗ Z(U((m|n)))→[ℓ_0] ⊗[ℓ_1,…, ℓ_n̅]^_m|n by
χ_0,m|n(∑_i^i⊗ z_i) = ∑_i ℓ_0^i⊗χ_m|n(z_i).
Set ℓ = χ_0,m|n(R). In the case when R is given by (<ref>), we have
ℓ = -1/n+1( ∑_i ∈Îℓ_i - m(m+1)/2 + n(n+1)/2).
Note that the symmetric superalgebra S(𝔥_m|n) is isomorphic, in a canonical way, to the superalgebra of _m|n-invariant polynomials in [ℓ_1, …, ℓ_n̅]. Let I(𝔥_m|n) be the subalgebra of S(𝔥_m|n) consisting of all _m|n-invariant functions θ on 𝔥_m|n^* such that if α is an isotropic odd root and (λ, α)=0, then θ(λ)=θ(λ + tα) for all t∈. It is known that χ_m|n: Z(U((m|n)))→ I(𝔥_m|n) is an isomorphism (see Theorem 13.1.1 (b) in <cit.>). Note that this means χ_0, m|n, considered as a map from []⊗ Z(U((m|n))) to [ℓ_0]⊗ I(𝔥_m|n) , is an isomorphism as well.
For the remainder of this section, we consider the module _a only when a is an integer. Then note _m|n_m|n(a) = _m|n(0).
The modules
= ⊕_a∈_a and ⊕_λ∈_m|nM(λ)
are faithful over '(m|n) and (m|n), respectively.
We prove that both modules have trivial annihilators.
If z∈ Z(U((m|n))) annihilates ⊕_λ∈_m|nM(λ), then χ_λ(z)=0 for all λ∈_m|n. Since χ_λ(z) = _λ + ρ_m|n(χ_m|n(z)), we have
_λ+ρ_m|n(χ_m|n(z))=0
for all λ∈_m|n. But this can only happen if χ_m|n(z)=0. Thus, we must have z = 0.
Now suppose x∈'(m|n) annihilates and x is nonzero. Write x as the sum of elements of the form
p(t_0,…, t_n̅)·∂_0^b_0⋯∂_n̅^b_n̅
where p(t_0,…, t_n̅) is some nonzero polynomial. Furthermore, assume that each term in the sum has a different (b_0,…, b_n̅). Pick the term that is ∂-lexicographically maximal (i.e. relative to the ∂_0-degree, the ∂_1-degree, and so on). Consider the polynomial t_0^b_0t_1^b_1⋯ t_n̅^b_n̅. Then the leximaximal property of the term we picked guarantees that all the other terms act as zero. Furthermore,
(∂_0^b_0⋯∂_n̅^b_n̅)(t_0^b_0⋯ t_n̅^b_n̅) = c
for a nonzero c∈. Therefore
x· (t_0^b_0⋯ t_n̅^b_n̅) = (p(t_0,…, t_n̅)·∂_0^b_0⋯∂_n̅^b_n̅ )(t_0^b_0⋯ t_n̅^b_n̅)
= c· p(t_0,…, t_n̅) 0.
This contradicts the fact that x annihilates . Therefore has a trivial annihilator.
Define
τ(p(ℓ_0,…, ℓ_m,ℓ_1̅,…ℓ_n̅)) p(ℓ_0 + ℓ + m+1, ℓ_1 + ℓ, …, ℓ_m+ℓ, ℓ_1̅ - ℓ, …, ℓ_n̅-ℓ).
Then it is easy to see that τ is a homomorphism from I(𝔥_m+1|n) to [ℓ_0]⊗ I(𝔥_m|n).
We have that
φ|_Z(U((m+1|n))) = χ_0,m|n^-1τχ_m+1|n.
In particular, φ(Z(U((m+1|n)))) is a subalgebra of []⊗ Z(U((m|n))), and the following diagram is commmutative:
[shorten >=1pt,node distance=4cm,on grid,auto]
every state=[fill=rgb:black,1;white,10]
(q_0) Z(U((m+1|n)));
(q_1) at (6,0) []⊗ Z(U((m|n)));
(q_2) at (6,-2.5) [ℓ_0]⊗ I(𝔥_m|n);
(q_3) at (0,-2.5) I(𝔥_m+1|n);
(p_3) at (-0.2,-2.25) ;
(p_0) at (-0.2,-0.15) ;
[->,thick,-Stealth[width=5pt, length=10pt]]
(q_0) edge node φ (q_1)
edge node χ_m+1|n (q_3)
(q_1) edge node χ_0,m|n (q_2)
(q_3) edge node τ (q_2);
Suppose λ∈_m|n and a∈. Let M(a,λ) = _a⊗ M(λ), considered as a (m+1|n)-module. We show that φ(z) = χ_0,m|n^-1τχ_m+1|n(z) for z∈ Z(U((m+1|n))), as an identity of endomorphisms of M(a,λ). By Lemma <ref>, and by the fact that the tensor product of faithful modules is faithful, the module ⊕_a∈, λ∈_m|n M(a,λ) is faithful over '(m|n)⊗ U((m|n)). Then it will follow that φ = χ_0,m|n^-1τχ_m+1|n(z).
By Corollary <ref>, we have M(a,λ)≅ M(λ). Thus, every z∈ Z(U((m+1|n))) acts on M(a,λ) as χ_λ(z). Now set ξ = χ_0, m|n^-1τχ_m+1|n. Then, ξ(z) acts on M(a,λ) as χ_a, λ(ξ(z)), where χ_a, λ(∑^i ⊗ z'_i) ∑ a^i χ_λ(z'_i), because ^i ⊗ 1 acts on M(a,λ) as a^i. Thus we just need to show χ_a,λ(ξ(z)) =χ_λ(z)
For a polynomial p = p(ℓ_0,…, ℓ_n̅), we have
ev_a,λ+ρ_m|nτ(p)
= ev_a,λ+ρ_m|n p(ℓ_0 + ℓ + m+1, ℓ_1 + ℓ, ⋯, ℓ_m + ℓ, ℓ_1̅-ℓ, …, ℓ_n̅-ℓ)
= p(a+ r+m+1, λ_1 + r + m,…, λ_m + r +1, λ_1̅ - r -1 , …, λ_n̅ - r - n)
= ev_λ + ρ_m+1|n p
(Note that here we set ev_a, λ+ρ_m|n(p) := p(a, λ_1 + m, ..., λ_m + 1, λ_1̅ - 1, ..., λ_n̅ - n) for all p, and r denotes the value ev_a,λ+ρ_m|n(ℓ)).
As a result, setting p = χ_m+1|n(z), and noting that χ_a, λ(R^i ⊗ z') = ev_a, λ + ρ_m|n(χ_0, m|n(R^i ⊗ z')), we have
χ_a,λ(ξ(z)) = χ_a,λχ_0, m|n^-1τχ_m+1|n(z) = ev_λ + ρ_m+1|n(χ_m+1|n(z)) = χ_λ(z).
This completes the proof.
§ SUPER VERSION OF NEWTON'S FORMULA AND CAPELLI-TYPE BEREZINIANS
We have now computed the image of the center, so we aim to describe the images of individual generators, as in <cit.>. In <cit.>, the images of the Gelfand generators are computed explicitly using a progression of identities related to Capelli generators, Capelli determinants, and an analogue of Newton's formula for 𝔤𝔩(m) (see <cit.>). However, in the super setting, there is no known analog of these formulas. Additionally, another challenge in the super setting is that the center is not finitely generated, which makes it harder to apply the method in <cit.>. We appeal to the theory of Yangians to find identities for analogs of Capelli determinants (which we call Capelli Berezinians), including a new form of the super Newton's formula, which relates the Capelli Berezinians to Gelfand invariants. We then use this to compute the images of the latter under φ.
§.§ Yangians for gl(m|n)
Let us recall the Yangian Y(𝔤𝔩(m|n)) as defined in <cit.> (also see <cit.>). This is the ℤ_2-graded associative algebra over ℂ with generators
{T_ij^(r): i,j∈ I, r≥ 1}
and defining relations
[T_ij^(r), T_kl^(s)] = (-1)^|i||j|+|i||k|+|j||k|∑_p=0^min(r,s)- 1 (T_kj^(p)T_il^(r+s-1-p)-T_kj^(r+s-1-p)T_il^(p)).
The ℤ_2-grading is given by |T_ij^(r)| = |i|+|j|.
We define the formal power series
T_ij(u) := δ_ij+ T_ij^(1)u^-1 + T_ij^(2)u^-2+⋯
and the matrix T(u) := [T_ij(u)]∈Mat_m|n(Y(𝔤𝔩(m|n))). We can also identify T(u) with
∑_i,j∈ IT_ij(u)⊗ e_ij(-1)^|j|(|i|+1).
In general, we can identify an operator ∑ A_ij⊗ e_ij(-1)^|j|(|i|+1) in Y(𝔤𝔩(m|n))[[ u^-1]] ⊗ End(ℂ^m|n) with the matrix [A_ij]. The extra signs are inserted to let the product of two matrices be calculated in the usual way. The Yangian is a Hopf algebra with comultiplication
Δ: T_ij(u)↦∑_k∈ I T_ik(u)⊗ T_kj(u),
antipode S: T(u)↦ T^-1(u), and counit T(u)↦ 1.
We define the power series with coefficients in the Yangian Y(𝔤𝔩(m|n)) of 𝔤𝔩(m|n) given by
Z_h(u) := 1 + str( (T(u+h) - T(u))/h · T^-1(u) )
for any h∈ℂ, where str(A) denotes the supertrace of a matrix A. The h here has no relation to the h that appears in <cit.>. Define
Z(u) := lim_h → m-nZ_h(u).
For m ≠ n, Z(u) = Z_m-n(u), while for m = n we have
Z(u) = 1 + str( (d/du T(u)) T^-1(u) ).
Note that the same series Z(u) was defined in <cit.>; however it was not written in terms of this limit construction. The coefficients of the series Z(u) are known to generate the center of Y(𝔤𝔩(m|n)) (see <cit.> and <cit.>).
Let B(u) be the quantum Berezinian of T(u), defined by
B(u) := det[T_ij(u+(m-n-j))]_i,j=1,…,m × det[T_ij^*(u-(m+n-j+1))]_i,j=m+1,…,m+n,
where [T_ij^*(u)] = T^*(u) = (T^-1(u))^st and A^st = [A_ji(-1)^|i|(|j|+1)] denotes matrix supertransposition. By the determinant of a matrix X = [X_ab]_a,b=1,…,N with noncommuting entries, we mean the sum
det X := ∑_σ∈ S_N sgn(σ) X_σ(1),1⋯ X_σ(N),N.
It is shown in <cit.> that
Z(u)=B(u+1)/B(u),
which could be referred to as a “quantum Liouville formula”. Additionally, there is a projection homomorphism π_m|n: Y(𝔤𝔩(m|n))→ U(𝔤𝔩(m|n)) given by
T_ij(u)↦δ_ij + e_ij(-1)^|i|u^-1.
§.§ Super Newton's formula
We can use these tools from Yangians to generalize Newton's formula for 𝔤𝔩(m) (see <cit.> and Theorem 7.1.3 in <cit.>) to 𝔤𝔩(m|n). Let
E := ((-1)^|i|e_ij)_i,j∈ I
and define
C_m|n(T) := det[(E-T-i+1)_ij]_i,j = 1,…,m×det[((E-T-m+i)^*)_i̅j̅]_i,j=1,…, n
= ∑_σ∈ S_m sgn(σ) (E - T)_σ(1),1⋯ (E -T-m+1)_σ(m),m
×∑_τ∈ S_n sgn(τ)((E - T -m+1)^-1)^st_m+τ(1),m+1⋯ ((E - T-(m-n))^-1)^st_m+τ(n),m+n
to be the Capelli Berezinian of 𝔤𝔩(m|n).
It follows from the definitions that C_m|n(T) = Q(T) π_m|n(B(-T-(m-n)+1)) for the rational function Q(T) given by
Q(T) =
T(T+1) ⋯ (T+(m-n-1)) for m> n
1 for m=n
(T-(n-m))^-1(T-(n-m-1))^-1⋯ (T-1)^-1 for n>m.
It is well known that the coefficients of π_m|n(B(T)) generate Z(U(𝔤𝔩(m|n))). Hence the coefficients of C_m|n(T) also generate Z(U(𝔤𝔩(m|n))). We call these coefficients the Capelli generators of Z(U(𝔤𝔩(m|n))). Since the coefficients of C_m|n(T) are central, we can consider C_m|n as a function from the algebra of Laurent series in T with coefficients in U(𝔤𝔩(m|n)) to itself.
The Newton's formulas proved in <cit.> and Theorem 7.1.3 of <cit.> relate the Capelli Berezinian (called the Capelli determinant when n=0) to the Gelfand invariants G_k^(m). We generalize these results to 𝔤𝔩(m|n) using the same method as <cit.>.
The Gelfand invariants of 𝔤𝔩(m|n) are given by
G_k^(m|n) := ∑_i_1,…,i_k ∈ I (-1)^|i_2|+⋯ + |i_k| e_i_1i_2 e_i_2i_3⋯ e_i_ki_1
for k≥ 1, where the sum ranges over all k-tuples (i_1,…, i_k) with terms in I. It is known that the Gelfand invariants G_k for k=0,1, … generate Z(U(𝔤𝔩(m|n))) (see for example <cit.>). We can also write G_k^(m|n) = str(E^k). Indeed, we have
str(E^k) = ∑_i∈ I (-1)^|i|(E^k)_ii
= ∑_i∈ I (-1)^|i|∑_i_2,…, i_k∈ I E_ii_2E_i_2i_3⋯ E_i_ki
= ∑_i_1,…, i_k∈ I (-1)^|i_1| + (|i_1|+⋯ + |i_k-1| + |i_k|) e_i_1i_2e_i_2i_3⋯ e_i_ki_1
= ∑_i_1,…, i_k∈ I (-1)^|i_2|+⋯ + |i_k| e_i_1i_2e_i_2i_3⋯ e_i_ki_1
= str applied to E^k, which is G_k^(m|n).
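In small rank this identity can be checked mechanically. The following sketch (our own illustration, not part of the paper; it assumes SymPy and uses noncommutative symbols for the generators e_ij, with names of our choosing) verifies str(E^2) = G_2^(1|1) for 𝔤𝔩(1|1):

```python
# Check str(E^2) = G_2 for gl(1|1) with noncommutative generators e_{ij}.
from sympy import symbols, expand

idx = [0, 1]                 # position 0 <-> index 1 (even), 1 <-> index 1bar (odd)
par = {0: 0, 1: 1}           # parities |i|
e = {(i, j): symbols(f'e{i}{j}', commutative=False) for i in idx for j in idx}

# E_{ij} = (-1)^{|i|} e_{ij}
E = {(i, j): (-1) ** par[i] * e[(i, j)] for i in idx for j in idx}

# str(E^2) = sum_i (-1)^{|i|} (E^2)_{ii},  with (E^2)_{ii} = sum_l E_{il} E_{li}
strE2 = expand(sum((-1) ** par[i] * sum(E[(i, l)] * E[(l, i)] for l in idx)
               for i in idx))

# G_2 = sum_{i1,i2} (-1)^{|i2|} e_{i1 i2} e_{i2 i1}
G2 = expand(sum((-1) ** par[i2] * e[(i1, i2)] * e[(i2, i1)]
            for i1 in idx for i2 in idx))

assert expand(strE2 - G2) == 0   # the two expansions coincide term by term
```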
We have
C_m|n(T-(m-n))/C_m|n(T-(m-n)+1) = 1- ∑_k=0^∞ G_k^(m|n)T^-1-k.
Since C_m|n(T) = Q(T) π_m|n(B(-T-(m-n)+1)), (<ref>) and the fact that G_0^(m|n)=m-n yield that the identity is equivalent to
π_m|n(B(-T+1))/π_m|n(B(-T)) = 1 + (1/(-T+(m-n)))∑_k=1^∞ str(E^k) T^-k.
Using (<ref>) and (<ref>), we get that
π_m|n(Z(-u)) = lim_h → m-n 1 + str( E(u^-1 - (u-h)^-1)/h · (1- Eu^-1)^-1 )
= lim_h →m-n 1 - (1/(u(u-h)))∑_k=0^∞ str(E^k+1) u^-k
= 1 - (1/(u-(m-n)))∑_k=1^∞ str(E^k) u^-k.
Then the identity follows from (<ref>).
§.§ Images of Central Elements
Next, we find the image of the Capelli Berezinian under the Harish-Chandra homomorphism, which combined with Theorem <ref>, will let us find the image of the Capelli Berezinian under φ.
We have
χ_m|n(C_m|n(-T)) = (T+ℓ_1-m)⋯ (T+ℓ_m-m) (T-ℓ_1̅-m)^-1⋯ (T-ℓ_n̅-m)^-1.
Note that C_m|n(T) is the product of two parts, one part summing over σ∈ S_m, and one summing over τ∈ S_n. However, note that in both parts, only the summand corresponding to the identity permutation has a nonzero image under χ_m|n, because in both E+T and
(E + T)^-1 = T^-1 - ET^-2 + E^2 T^-3 - ⋯,
the only elements that act as scalars on a Verma module M(λ) are along the diagonal [This is because of the fact that r_k^(m|n)(i,j) = (-1)^|i|(E^k)_ij acts as a scalar on a Verma module M(λ) if and only if i=j. For a definition of r_k^(m|n)(i,j), see Appendix <ref>. To prove this, one simply has to consider how r_k^(m|n)(i,j) raises/lowers the weights of the element it is acting on.]. The product of the two terms gives the result.
Define C_φ(T) via the identity C_φ(φ(T)) = φ(C_m+1|n(T)). Since the coefficients of the Capelli Berezinian are central and φ maps Z(U(𝔤𝔩(m+1|n))) to Z(𝒟'(m|n)⊗ U(𝔤𝔩(m|n))), the domain of C_φ(T) consists of Laurent series in φ(T) with coefficients in 𝒟'(m|n)⊗ U(𝔤𝔩(m|n)).
For convenience we will write T for φ(T).
We have
C_φ(T+R) = (t_0∂_0⊗1 - T)C_m|n(T+1).
Consequently, C_φ(t_0∂_0⊗1 + R)=0.
The identity is equivalent to
φ(C_m+1|n(-T)) = (t_0∂_0⊗1 + T + R)C_m|n(-T-R+1).
To prove this, we use Proposition <ref>, the corresponding version for 𝔤𝔩(m+1|n), Theorem <ref>, and the fact that χ_0,m|n(R)=ℓ.
Define
R_2 = (𝒟 + m-n)⊗ 1, where 𝒟 := ∑_a∈Î t_a∂_a,
and
A_k(s) = ∑_g=0^k-s \binom{k}{g} R^g R_2^k-s-g.
We can combine the previous theorem with the Newton's formula to find the images of the Gelfand invariants under φ.
The following formula holds for all positive integers k:
φ(G_k^(m+1|n)) = A_k(1)(𝒟⊗ 1) + (m+1-n)R^k + ∑_g=0^k-1\binom{k}{g}R^g(1⊗ G_k-g^(m|n))
- ∑_s=2^k (1⊗ G_s-1^(m|n))A_k(s).
Theorem <ref> implies that
C_φ(T-(m-n)-1)/C_φ(T-(m-n)) = (1 - 1/(T-R-R_2)) · C_m|n(T-R-(m-n))/C_m|n(T-R-(m-n)+1).
Applying φ to the Newton's formula for (m+1|n) gives
C_φ(T-(m-n)-1)/C_φ(T-(m-n)) = 1-∑_k=0^∞φ(G_k^(m+1|n))T^-1-k
= 1- (m-n+1)T^-1 - T^-1∑_k=1^∞φ(G_k^(m+1|n))T^-k.
Analogously we can express the right hand side of (<ref>) as a power series in T^-1. We complete the proof by comparing and computing the coefficients of T^-k on both sides.
§ KERNEL OF φ_R_1
In this section, we find the kernel of the map φ_R_1. For this section, we write φ=φ_R_1 and G_1 = G_1^(m+1|n) for convenience.
Let (G_1) be the two-sided ideal in U(𝔤𝔩(m+1|n)) generated by G_1. Then the kernel of φ is (G_1).
Recall from Proposition <ref> that φ = π^g φ^s. The kernel of π^g is clearly (G_1), so we just need to show that φ^s is injective. We construct a left inverse for φ^s, considered as a Lie superalgebra homomorphism. In particular, this means we view U(𝔤𝔩(m+1|n)) and U(𝔤𝔩(m|n)) as Lie superalgebras with bracket given by [x,y] = xy-(-1)^|x||y|yx.
Let 𝒟''(m|n) be the subalgebra of 𝒟'(m|n) generated by {t_a∂_b : a,b∈Î, a ≠ b}∪{t_0∂_0 - (-1)^|a|t_a∂_a: a∈Î}.
Consider the homomorphism 1⊗ε : 𝒟''(m|n) ⊗ U(𝔤𝔩(m|n))→𝒟''(m|n), where ε is the counit map of U(𝔤𝔩(m|n)). In particular (1⊗ε)(p ⊗ x) = 0 for p∈𝒟''(m|n) and x∈𝔤𝔩(m|n), and (1⊗ε)(p⊗ 1) = p.
Consider the homomorphism (1⊗ε) ∘φ^s: U(𝔤𝔩(m+1|n)) →𝒟''(m|n). Corollary <ref> gives us that this homomorphism is given on generators by
e_ab ↦ t_a∂_b for a b
e_00-(-1)^|a|e_aa ↦ t_0∂_0- (-1)^|a|t_a∂_a.
We claim that this homomorphism has a left inverse ψ. Letting 𝒟'''(m|n) be the algebra generated by {t_a∂_b : a,b∈Î}, we first define ψ': 𝒟'''(m|n)→ U(𝔤𝔩(m+1|n)), and then its restriction ψ: 𝒟''(m|n)→ U(𝔤𝔩(m+1|n)) will be a left inverse to (1⊗ε)∘φ^s. The map ψ' is generated by
t_a∂_b ↦ e_ab
for a,b∈Î. To check ψ is a homomorphism, we check ψ'([t_a∂_b, t_c∂_d]) = [e_ab,e_cd] for all a,b,c,d∈Î.
If a ≠ d, b ≠ c, then we have
[t_a∂_b, t_c∂_d] = t_a∂_bt_c∂_d - (-1)^(|a|+|b|)(|c|+|d|) t_c∂_d t_a∂_b
= (-1)^|b||c|t_a t_c ∂_b∂_d - (-1)^|b||c|t_at_c ∂_b∂_d = 0.
Now suppose a ≠ d, b=c. Note [e_ab,e_bd] = e_ad. Thus, we show [t_a∂_b, t_b∂_d] = t_a∂_d. Indeed, we have
[t_a∂_b, t_b∂_d] = t_a∂_b t_b ∂_d - (-1)^(|a|+|b|)(|b|+|d|)t_b∂_d t_a∂_b
= t_a∂_b t_b ∂_d -(-1)^|b| t_a t_b∂_b ∂_d
= t_a∂_d.
If a=d, b ≠ c, reversing the order gives the previous case.
If a=d, b=c, then [e_ab, e_ba] = e_aa- (-1)^|a|+|b|e_bb. We also have
[t_a∂_b, t_b∂_a] = t_a∂_b t_b∂_a - (-1)^|a|+|b|t_b∂_a t_a∂_b
= t_a (1+(-1)^|b|t_b∂_b)∂_a - (-1)^|a|+|b| t_b∂_a t_a∂_b
= t_a∂_a - (-1)^|a| + |b| t_b∂_b (∂_at_a - (-1)^|a|t_a∂_a)
= t_a∂_a - (-1)^|a|+|b|t_b∂_b.
Thus ψ', hence ψ, is a homomorphism. It is easily checked that ψ∘ (1⊗ε) ∘φ^s = id, so φ^s is injective. This completes the proof.
§ PROOF OF THEOREM 3
In this appendix we provide the proof of Theorem <ref>.
Clearly φ maps even elements to even elements and odd elements to odd elements. Thus, we only need to check that for basis elements x and y, we have
[φ(x),φ(y)] = φ([x,y]).
We have multiple cases to check. Here a,b,c,d will always denote elements of I. Before we start verifying the cases, note that since R is central, all brackets with it are zero.
Case 1: [e_ab, e_ba].
If a = b then both brackets in (<ref>) are obviously zero.
If a ≠ b then [e_ab,e_ba] = e_aa - (-1)^|a|+|b|e_bb. Thus,
φ([e_ab,e_ba]) = (t_a∂_a-(-1)^|a|+|b|t_b∂_b)⊗ 1 + 1⊗(e_aa-(-1)^|a|+|b|e_bb)
Now we show
[t_a∂_b, t_b∂_a] = t_a∂_a - (-1)^|a|+|b|t_b∂_b
for a ≠ b.
The left hand side is
t_a∂_bt_b∂_a - (-1)^|a||b|t_b∂_at_a∂_b.
Now ∂_bt_b = 1 + (-1)^|b|t_b∂_b, so t_a∂_bt_b∂_a = t_a∂_a + (-1)^|b|t_at_b∂_b∂_a. Using supercommutativity, this becomes t_a∂_a + (-1)^|b|t_a∂_a t_b∂_b. A similar process with the second term gives us t_b∂_at_a∂_b =t_b∂_b +(-1)^|a|t_a∂_at_b∂_b. Plugging these back into (<ref>), we get
[t_a∂_b, t_b∂_a] = t_a∂_a+ (-1)^|b|t_a∂_a t_b∂_b - (-1)^|a|+|b|t_b∂_b - (-1)^|b|t_a∂_a t_b∂_b
=t_a∂_a - (-1)^|a|+|b|t_b∂_b.
Thus,
[φ(e_ab),φ(e_ba)] = [t_a∂_b⊗ 1 + 1⊗ e_ab,t_b∂_a⊗ 1 + 1⊗ e_ba]
= (t_a∂_a-(-1)^|a|+|b|t_b∂_b)⊗ 1 + 1⊗(e_aa-(-1)^|a|+|b|e_bb)
= φ([e_ab,e_ba]),
verifying (<ref>).
Case 2: [e_ab, e_bc], a ≠ c.
First, we show
[t_a∂_b, t_b∂_c] = t_a∂_c.
The left hand side is
t_a∂_bt_b∂_c - (-1)^(|a|+|b|)(|b| + |c|)t_b∂_ct_a∂_b.
We have
t_a∂_bt_b∂_c = (-1)^|b||c|t_a∂_b ∂_c t_b
= t_a∂_c ∂_bt_b.
Also,
t_b∂_ct_a∂_b = (-1)^|b||c|∂_c t_bt_a∂_b
= (-1)^|b||c| + |b||a|∂_ct_at_b ∂_b
= (-1)^|b||c| + |b||a| + |a||c|t_a∂_c t_b∂_b.
Thus (-1)^(|a|+|b|)(|b| + |c|)t_b∂_c t_a∂_b = (-1)^|b|^2t_a∂_c t_b∂_b = (-1)^|b|t_a∂_c t_b∂_b. Plugging in to (<ref>), we get
[t_a∂_b, t_b∂_c] = t_a∂_c(∂_bt_b - (-1)^|b|t_b∂_b) = t_a∂_c.
Then, we have
[φ(e_ab), φ(e_bc)] = [t_a∂_b ⊗ 1 + 1 ⊗ e_ab, t_b∂_c ⊗ 1 + 1 ⊗ e_bc]
= t_a∂_c + 1 ⊗ e_ac
= φ(e_ac) = φ([e_ab, e_bc]),
as desired.
Case 3: [e_ab, e_ca], c ≠ b.
Swapping the order of the two terms gives us Case 2. Note that all signs picked up in the process get canceled.
Case 4: [e_ab, e_cd], c ≠ b and a ≠ d.
First, we show
[t_a∂_b, t_c∂_d] = 0.
The left hand side is
t_a∂_bt_c∂_d - (-1)^(|a|+|b|)(|c|+|d|)t_c∂_dt_a∂_b.
We have
t_c∂_d t_a∂_b = (-1)^|a|(|c|+|d|)t_at_c∂_d ∂_b
= (-1)^|a|(|c|+|d|)+|b|(|c|+|d|)t_a∂_bt_c∂_d
= (-1)^(|a|+|b|)(|c|+|d|)t_a∂_bt_c∂_d.
Plugging back into (<ref>), we get the desired result. Thus, it follows that φ(e_ab) commutes with φ(e_cd). Hence, [φ(e_ab), φ(e_cd)] = 0 = φ([e_ab, e_cd]) as needed.
Case 5: [e_ab, e_b0].
Since R bracketed with anything is zero, we have
[φ(e_ab),φ(e_b0)] = [t_a∂_b⊗ 1, t_b∂_0⊗ 1] - (-1)^|b|[t_a∂_b⊗ 1, t_b/t_0⊗ e_bb]
- ∑_j≠a (-1)^|b||j|[1⊗ e_ab, t_j/t_0⊗ e_bj] - (-1)^|a||b|[1⊗ e_ab, t_a/t_0⊗ e_ba]
= t_a∂_0 ⊗ 1 - (-1)^|b|t_a/t_0⊗ e_bb + (-1)^|b|t_a/t_0⊗ e_bb - ∑_j (-1)^|a||j|t_j/t_0⊗ e_aj
= t_a∂_0 ⊗ 1 - ∑_j (-1)^|a||j|t_j/t_0⊗ e_aj
= φ([e_ab,e_b0])
as desired. We now prove each of the four brackets used here. That [t_a∂_b⊗ 1, t_b∂_0 ⊗ 1] = t_a∂_0⊗ 1 was proved in Case 2. Next, we have
[t_a∂_b⊗ 1 , t_b/t_0⊗ e_bb] = [t_a∂_b, t_b/t_0]⊗ e_bb
since e_bb is even. We have
[t_a∂_b, t_b/t_0] = t_a∂_b t_b/t_0 - (-1)^|b|(|a|+|b|)t_b/t_0t_a∂_b
= t_a/t_0∂_bt_b - (-1)^|b|(|a|+|b|)+ |a||b|t_a/t_0t_b∂_b
= t_a/t_0(∂_bt_b - (-1)^|b|t_b∂_b) = t_a/t_0.
This verifies the second bracket. The third bracket is
[1⊗ e_ab, t_j/t_0⊗ e_bj]
for j a. Note that e_bje_ab = 0. Thus the bracket is
(-1)^(|a|+|b|)|j|t_j/t_0⊗ e_aj.
Multiplying by (-1)^|b||j| gives
(-1)^|a||j|t_j/t_0⊗ e_aj,
as desired. The final bracket is
[1⊗ e_ab,t_a/t_0⊗ e_ba] = (-1)^|a|(|a|+|b|)t_a/t_0⊗ e_aa - (-1)^(|a|+|b|)(|b|)t_a/t_0⊗ e_bb.
Multiplying by (-1)^|a||b|, we get
(-1)^|a||a|t_a/t_0⊗ e_aa - (-1)^|b|t_a/t_0⊗ e_bb,
as desired.
Case 6: [e_ab, e_0a].
We show
[t_a∂_b, t_0∂_a] = -(-1)^(|a|+|b|)|a|t_0∂_b.
The left hand side is
t_a∂_bt_0∂_a - (-1)^(|a|+|b|)|a|t_0∂_at_a∂_b.
We have t_a∂_bt_0∂_a = (-1)^|a||b|t_0∂_bt_a∂_a = (-1)^|a|(-1)^(|a|+|b|)|a|t_0∂_b t_a∂_a. Also t_0∂_at_a∂_b = t_0∂_b∂_at_a. Plugging these back into (<ref>), we get (<ref>). Then, we have
[φ(e_ab), φ(e_0a)] = [t_a∂_b ⊗ 1 + 1 ⊗ e_ab, t_0∂_a ⊗ 1]
= -(-1)^(|a|+|b|)|a|t_0∂_b ⊗ 1 = φ(-(-1)^(|a|+|b|)|a|e_0b)
= φ([e_ab, e_0a]),
as desired.
Case 7: [e_ab, e_00].
We have [e_ab, e_00]=0, so φ([e_ab, e_00])=0. Note that t_0∂_0 commutes with t_a∂_b, so [t_a∂_b, t_0∂_0]=0. Since R is central, we have
[φ(e_ab),φ(e_00)] = [t_a∂_b⊗ 1 + 1⊗ e_ab+δ_ab(-1)^|a||b|R, t_0∂_0⊗ 1 +R]
=0 = φ([e_ab,e_00]).
Case 8: [e_a0, e_b0].
We have
[φ(e_a0), φ(e_b0)] = [t_a∂_0 ⊗ 1 - ∑_j (-1)^|a||j|t_j/t_0⊗ e_aj, t_b∂_0 ⊗ 1 - ∑_j (-1)^|b||j|t_j/t_0⊗ e_bj].
We evaluate each sub-bracket one by one. It is easily checked that [t_a∂_0⊗ 1, t_b∂_0⊗ 1]=0. We have
[t_a∂_0 ⊗ 1, t_j/t_0⊗ e_bj] = t_a∂_0 t_j/t_0⊗ e_bj - (-1)^|a||b|+ (|b| + |j|)|a|t_j/t_0t_a∂_0 ⊗ e_bj
= t_at_j (∂_0 1/t_0)⊗ e_bj - t_at_j(1/t_0∂_0) ⊗ e_bj.
Similar to the identity ∂_0t_0 - t_0∂_0 = 1, we have ∂_0(1/t_0) - (1/t_0)∂_0 = -1/t_0^2, so the bracket evaluates to -(t_at_j/t_0^2)⊗ e_bj. Thus
[t_a∂_0⊗ 1, - ∑_j(-1)^|b||j|t_j/t_0⊗ e_bj] = ∑_j(-1)^|b||j|t_at_j/t_0^2⊗ e_bj.
Next we evaluate
[t_i/t_0⊗ e_ai, t_j/t_0⊗ e_bj]
for i,j∈ I. If i b then e_aie_bj=0. If j a then e_bje_ai=0. Thus
[t_i/t_0⊗ e_ai, t_j/t_0⊗ e_bj] = δ_ib(-1)^|j|(|a|+|b|)(t_bt_j/t_0^2)⊗ e_aj - δ_aj(-1)^|a||b| + (|b|+|a|)|i|(t_at_i/t_0^2)⊗ e_bi.
Thus
[-∑_i(-1)^|a||i|t_i/t_0⊗ e_ai, -∑_j(-1)^|b||j|t_j/t_0⊗ e_bj] = (-1)^|a||b|∑_j(-1)^|a||j|(t_bt_j/t_0^2)⊗ e_aj
- ∑_j (-1)^|b||j|(t_at_j/t_0^2)⊗ e_bj.
Finally,
[t_j/t_0⊗ e_aj, t_b∂_0⊗ 1] = (-1)^(|a|+|j|)|b| + |j||b| t_bt_j(1/t_0∂_0) ⊗ e_aj - (-1)^|a||b| t_bt_j(∂_0 1/t_0)⊗ e_aj
= (-1)^|a||b|t_bt_j/t_0^2⊗ e_aj.
Thus
[ ∑_j (-1)^|a||j|t_j/t_0⊗ e_aj, t_b∂_0 ⊗ 1] = - (-1)^|a||b|∑_j (-1)^|a||j|t_bt_j/t_0^2⊗ e_aj.
Adding everything back together, we see that everything cancels, so
[φ(e_a0), φ(e_b0)] = 0 = φ([e_a0,e_b0]).
Case 9: [e_a0, e_0b].
Now
[φ(e_a0), φ(e_0b)] = [t_a∂_0 ⊗ 1 - ∑_j (-1)^|a||j|t_j/t_0⊗ e_aj, t_0∂_b ⊗ 1]
= [t_a∂_0, t_0∂_b] ⊗ 1 - ∑_j(-1)^|a||j|[t_j/t_0⊗ e_aj ,t_0∂_b ⊗ 1].
But
[t_j/t_0⊗ e_aj, t_0∂_b ⊗ 1] = (t_j/t_0⊗ e_aj)(t_0∂_b ⊗ 1) - (-1)^|a||b|(t_0∂_b ⊗ 1)(t_j/t_0⊗ e_aj)
= (-1)^(|a|+|j|)|b|(t_j/t_0t_0∂_b) ⊗ e_aj - (-1)^|a||b|(t_0∂_b)(t_j/t_0) ⊗ e_aj
= (-1)^|a||b|[(-1)^|j||b|(t_j∂_b) -(∂_bt_j)] ⊗ e_aj
If j = b the expression in the brackets is
-(∂_b t_b - (-1)^|b|t_b∂_b) = -1.
If j ≠ b the expression in the brackets is
(-1)^|j||b|(t_j∂_b)-(∂_bt_j) = (-1)^|j||b|(t_j∂_b)-(-1)^|j||b|(t_j∂_b) = 0.
Plugging these back into the original expression, we get
[φ(e_a0), φ(e_0b)] = [t_a∂_0, t_0∂_b]⊗ 1 + 1⊗ e_ab.
If a ≠ b, from Case 2 we have [t_a∂_0,t_0∂_b] = t_a∂_b, so
[φ(e_a0), φ(e_0b)] = t_a∂_b⊗ 1 + 1⊗ e_ab = φ([e_a0,e_0b]).
If a = b, we have
[t_a∂_0, t_0∂_a] = t_a∂_0t_0∂_a - (-1)^|a|t_0∂_a t_a∂_0
=∂_0t_0 t_a∂_a - (-1)^|a| t_0∂_0 ∂_a t_a
= ∂_0t_0 t_a∂_a - (-1)^|a|t_0∂_0 (1+(-1)^|a| t_a∂_a)
= (-1)^|a|t_0∂_0 + (∂_0 t_0 - t_0∂_0)t_a∂_a
= t_a∂_a - (-1)^|a|t_0∂_0.
Thus
[φ(e_a0), φ(e_0a)] = t_a∂_0 ⊗ 1 - (-1)^|a|t_0∂_0 ⊗ 1 + 1⊗ e_aa = φ(e_aa - (-1)^|a|e_00) = φ([e_a0,e_0a]).
Case 10: [e_a0, e_00].
Since R is central,
[φ(e_a0), φ(e_00)] = [t_a∂_0⊗ 1 -∑_j (-1)^|a||j|t_j/t_0⊗ e_aj, t_0∂_0 ⊗ 1]
Now,
[t_a∂_0, t_0∂_0] = (t_a∂_0)(t_0∂_0) - (t_0∂_0)(t_a∂_0) = t_a(∂_0t_0 - t_0∂_0) ∂_0 = t_a∂_0.
Also,
[t_j/t_0⊗ e_aj, t_0∂_0 ⊗ 1] = t_j(∂_0t_0- t_0 ∂_0)1/t_0⊗ e_aj = t_j/t_0⊗ e_aj.
Thus
[φ(e_a0), φ(e_00)] = t_a∂_0 ⊗ 1 - ∑_j (-1)^|a||j|t_j/t_0⊗ e_aj = φ([e_a0,e_00]).
Case 11: [e_0a, e_0b].
We have
φ([e_0a,e_0b])=φ(0)=0.
Also
[φ(e_0a),φ(e_0b)] = [t_0∂_a⊗ 1, t_0∂_b⊗ 1]
= t_0^2 (∂_a∂_b - (-1)^|a||b|∂_b∂_a) ⊗ 1
= 0
Case 12: [e_0b, e_00].
We have [e_0b,e_00]=-e_0b, so φ([e_0b,e_00])=-t_0∂_b ⊗ 1.
Since R is central, we also have
[φ(e_0b),φ(e_00)] = [t_0∂_b⊗ 1, t_0∂_0 ⊗ 1]
= (-t_0∂_0t_0∂_b + t_0∂_bt_0∂_0)⊗ 1
= -( t_0(∂_0t_0-t_0∂_0) ∂_b)⊗ 1
= - t_0∂_b⊗ 1 = φ([e_0b, e_00]).
Case 13: [e_0c, e_ab], c ≠ a. We have [e_0c,e_ab]=0 and
[φ(e_0c),φ(e_ab)] = [t_0∂_c⊗ 1, t_a∂_b ⊗ 1 + 1⊗ e_ab + δ_ab(-1)^|a||b|R]
= [t_0∂_c, t_a∂_b] ⊗ 1.
Now we have
[t_0∂_c, t_a∂_b] =t_0∂_c t_a∂_b - (-1)^|c|(|a|+|b|)t_a∂_b t_0∂_c
= t_0∂_c t_a∂_b - (-1)^|c|(|a|+|b|)+ |c|(|a|+|b|)t_0∂_c t_a∂_b
= 0.
Thus [φ(e_0c), φ(e_ab)] = 0 = φ([e_0c, e_ab]).
Case 14: [e_ab, e_c0], b ≠ c. We have [e_ab, e_c0]=0. Also,
[φ(e_ab), φ(e_c0)] = [t_a∂_b⊗ 1 + 1⊗ e_ab + δ_ab(-1)^|a||b|R, t_c∂_0 ⊗ 1- ∑_j(-1)^|c||j|t_j/t_0⊗ e_cj]
=[t_a∂_b, t_c∂_0]⊗ 1-∑_j(-1)^|c||j|[t_a∂_b⊗ 1, t_j/t_0⊗ e_cj]
- ∑_j(-1)^|c||j|[1⊗ e_ab, t_j/t_0⊗ e_cj].
Now we evaluate the brackets. We have
[t_a∂_b, t_c∂_0] = t_a∂_bt_c∂_0 - (-1)^(|a|+|b|)|c| t_c∂_0 t_a∂_b
= (-1)^(|a|+|b|)|c| t_c∂_0 t_a∂_b - (-1)^(|a|+|b|)|c| t_c∂_0 t_a∂_b
= 0.
Next, we have
[t_a∂_b⊗ 1, t_j/t_0⊗ e_cj] = t_a∂_b (t_j/t_0)⊗ e_cj - (-1)^(|a|+|b|)|c|(t_j/t_0⊗ e_cj)(t_a∂_b⊗ 1)
= (1/t_0)t_a∂_bt_j⊗ e_cj - (-1)^(|a|+|b|)|j|(1/t_0) t_j t_a∂_b ⊗ e_cj.
If j ≠ b, then this becomes
1/t_0t_a∂_bt_j⊗ e_cj - 1/t_0t_a∂_bt_j⊗ e_cj = 0.
If j = b this becomes
t_a/t_0 (∂_bt_b - (-1)^|b|t_b∂_b)⊗ e_cb = t_a/t_0⊗ e_cb.
Finally, we have
[1⊗ e_ab, t_j/t_0⊗ e_cj] = (-1)^(|a|+|b|)|j|t_j/t_0⊗ (e_abe_cj) - (-1)^(|a|+|b|)|c|t_j/t_0⊗ e_cje_ab
= - (-1)^(|a|+|b|)|c|t_j/t_0⊗ e_cje_ab
Now e_cje_ab is nonzero if and only if j=a. When j=a, the above expression becomes
-(-1)^(|a|+|b|)|c|(t_a/t_0)⊗ e_cb.
Plugging these back into the original expression, we get
[φ(e_ab), φ(e_c0)] = - (-1)^|c||b|(t_a/t_0)⊗ e_cb + (-1)^|c||a|+ (|a|+|b|)|c|(t_a/t_0)⊗ e_cb = 0 = φ([e_ab, e_c0]).
This completes the verification and finishes the proof.
§ FORMULAS FOR THE IMAGES OF CERTAIN ELEMENTS UNDER φ
For the following section, we will write these Gelfand invariants in terms of the following special elements of U(𝔤𝔩(m|n)). Set r_0^(m|n)(a,b)=δ_ab(-1)^|a||b|. For k≥ 0 and a,b∈ I, let
r_k+1^(m|n)(a,b) = ∑_i_1,…, i_k∈ I (-1)^|i_1|+⋯ + |i_k| e_ai_1 e_i_1i_2⋯ e_i_kb.
Then we have
G_k^(m|n) = ∑_i∈ I r_k^(m|n)(i,i)
for k≥ 1. We compute the images of the elements r_k^(m+1|n)(a, b) for all k, a, b. The proofs of these formulas are not dependent on the results in previous sections of this paper. In this way, we can obtain an alternative (computational) proof of Theorem <ref>.
First, notice the following identity:
r_k+1^(m+1|n)(a,b) = ∑_i∈Î (-1)^|i|r_k^(m+1|n)(a,i)e_ib.
Indeed, we have
∑_i∈Î (-1)^|i|r_k^(m+1|n)(a,i)e_ib = ∑_i∈Î∑_i_1,…, i_k-1∈Î (-1)^|i_1|+⋯+|i_k-1|+|i| e_ai_1⋯ e_i_k-1i e_ib
= r_k+1^(m+1|n)(a,b).
Define
f_s(a,b)=∑_i∈ I(-1)^|i|t_a∂_i⊗ r_s-1^(m|n)(i,b).
Also note that we have f_1(a,b) = t_a∂_b.
To simplify the statement of the following theorem, we first introduce some more notation. Let
A_k(s) = ∑_g=0^k-s\binom{k}{g} R^g R_2^k-s-g
B(s) = ∑_i,j∈ I (-1)^|i|(1+|j|)∂_i t_j ⊗ r_s-1^(m|n)(i,j)
D_k(a,b) = ∑_g=0^k\binom{k}{g}R^g (1⊗ r_k-g^(m|n)(a,b))
E_k(a) = ∑_g=0^k-1(\binom{k}{g}R^g∑_j∈ I (-1)^|a||j|t_j/t_0⊗ r_k-g^(m|n)(a,j))
F_k(a,b) = ∑_s=1^k f_s(a,b)A_k(s).
Then, we can find the images of the r_k^(m+1|n).
We have
φ(r_k^(m+1|n)(a,b)) = F_k(a,b) + D_k(a,b)
φ(r_k^(m+1|n)(a,0)) = A_k(1)(t_a∂_0⊗ 1) - E_k(a) - (t_a/t_0⊗ 1)∑_s=2^k A_k(s)B(s)
φ(r_k^(m+1|n)(0,b)) = F_k(0,b)
φ(r_k^(m+1|n)(0,0)) = A_k(1)(t_0∂_0⊗ 1) + R^k -∑_s=2^k A_k(s)B(s).
We prove all four statements simultaneously by induction on k. The base case k = 1 follows from the definition of φ. Suppose the formulas in the statement of the Theorem are true for some positive integer k. Let us prove them for k + 1.
First, consider the value of φ(r_k^(m+1|n)(a, b)):
φ(r_k+1^(m+1|n)(a,b)) = φ(r_k^(m+1|n)(a,0))φ(e_0b) +∑_i∈ I(-1)^|i|φ(r_k^(m+1|n)(a,i))φ(e_ib)
= (A_k(1)(t_a∂_0⊗ 1) - E_k(a) - (t_a/t_0⊗ 1)∑_s=2^k A_k(s)B(s))(t_0∂_b⊗ 1)
+∑_i∈ I(-1)^|i|(F_k(a,i)+D_k(a,i))(t_i∂_b⊗ 1 + 1⊗ e_ib + δ_ib(-1)^|i||b|R)
= A_k(1)(t_a∂_0 t_0∂_b⊗ 1)
- ∑_g=0^k-1(\binom{k}{g}R^g∑_i∈ I(-1)^(|a|+|b|)|i| + |a||b|t_i∂_b⊗ r_k-g^(m|n)(a,i))
-(t_a/t_0⊗ 1)∑_s=2^k A_k(s)B(s)(t_0∂_b⊗ 1) + ∑_i∈ I(-1)^|i|∑_s=1^k f_s(a,i)(t_i∂_b⊗ 1)A_k(s)
+ ∑_i∈ I∑_g=0^k \binom{k}{g}R^g (-1)^(|a|+|b|)|i| + |a||b|(t_i∂_b⊗ r_k-g^(m|n)(a,i))
+ ∑_i∈ I∑_s=1^k((-1)^|i|f_s(a,i)(1⊗ e_ib)A_k(s))
+∑_i∈ I∑_g=0^k\binom{k}{g}R^g (1⊗ (-1)^|i|r_k-g^(m|n)(a,i)e_ib)
+∑_s=1^k f_s(a,b)A_k(s)R + ∑_g=0^k\binom{k}{g}R^g+1(1⊗ r_k-g^(m|n)(a,b)).
Write this sum as X_1 - X_2 - X_3 + X_4 + X_5 + X_6 + X_7 + X_8 + X_9. Then
X_5 - X_2 = R^k(t_a∂_b⊗ 1).
We have
X_3 = (t_a/t_0⊗ 1)∑_s=2^kA_k(s)B(s)(t_0∂_b⊗ 1)
= ∑_s=2^k ∑_i,j∈ I(-1)^|i|(1+|j|)((t_a/t_0)∂_i t_j ⊗ r_s-1^(m|n)(i,j))(t_0∂_b⊗ 1)A_k(s)
=∑_s=2^k ∑_i,j∈ I (-1)^|i|+|i||j| + |i||b| + |j||b|t_a∂_i t_j ∂_b⊗ r_s-1^(m|n)(i,j)A_k(s)
and
X_4 = ∑_i∈ I(-1)^|i|∑_s=1^k f_s(a,i)(t_i∂_b⊗ 1)A_k(s)
=∑_s=1^k ∑_i,j∈ I(-1)^|i|+|j|(t_a∂_j ⊗ r_s-1^(m|n)(j,i))(t_i∂_b⊗ 1)A_k(s)
=∑_s=1^k ∑_i,j∈ I(-1)^|j| + |j||i| + |j||b| + |i||b|t_a∂_j t_i ∂_b⊗ r_s-1^(m|n)(j,i)A_k(s)
= ∑_s=1^k ∑_i,j∈ I (-1)^|i|+|i||j| + |i||b| + |j||b|t_a∂_i t_j ∂_b⊗ r_s-1^(m|n)(i,j)A_k(s).
Thus
X_4 - X_3 = (∑_i∈ I(-1)^|i|t_a∂_it_i ∂_b⊗ 1)A_k(1) = ((t_a∂_b ⊗ 1)(R_2)-t_a∂_0t_0∂_b ⊗ 1)A_k(1)
as ∑_i∈Î(-1)^|i|t_a∂_it_i∂_b⊗ 1 = (t_a∂_b⊗ 1)R_2:
∑_i∈Î (-1)^|i|t_a∂_i t_i ∂_b = ∑_i∈Î t_a ((-1)^|i|+t_i∂_i)∂_b
= (t_a∂_b)(m+1-n) +∑_i∈Î, i≠b t_at_i∂_i∂_b + t_at_b∂_b∂_b
= (t_a∂_b)(𝒟 + m+1-n) - (t_a∂_b)(t_b∂_b)+ t_at_b∂_b∂_b.
If b is even, then the last two terms become -t_a∂_b. If b is odd, the last term is zero, and it is easily verified that t_a∂_bt_b∂_b = t_a∂_b. Either way, we get -(t_a∂_b)(t_b∂_b)+t_at_b∂_b∂_b = -t_a∂_b. Plugging this back in proves the desired identity.
Then
X_1 + X_4 - X_3 = (t_a∂_b⊗ 1)A_k(1)R_2 = (t_a∂_b⊗ 1)(∑_g=0^k-1\binom{k}{g}R^gR_2^k-g).
Therefore we have
X_1 + X_4 - X_3 + X_5 - X_2 = (t_a∂_b⊗ 1)(∑_g=0^k\binom{k}{g}R^g R_2^k-g).
Since ∑_i∈ I(-1)^|i|f_s(a,i)e_ib = f_s+1(a,b), we have
X_6 = ∑_s=1^k f_s+1(a,b)A_k(s) = ∑_s=2^k+1f_s(a,b)(∑_g=0^k+1-s\binom{k}{g}R^gR_2^k+1-s-g).
Thus
X_6 + X_1 + X_4 - X_3 + X_5 - X_2 = ∑_s=1^k+1f_s(a,b)(∑_g=0^k+1-s\binom{k}{g}R^g R_2^k+1-s-g).
But
X_8 = ∑_s=1^k(f_s(a,b)∑_g=1^k-s+1\binom{k}{g-1}R^gR_2^k-s-g+1).
Thus
X_8 + X_6 + X_1 + X_4 - X_3 + X_5 - X_2 = ∑_s=1^k+1(f_s(a,b)∑_g=0^k+1-s\binom{k+1}{g}R^g R_2^k+1-s-g) = F_k+1(a,b).
Now, since ∑_i∈ I(-1)^|i|r_k-g^(m|n)(a,i)e_ib= r_k+1-g^(m|n)(a,b), we find that
X_7 = ∑_g=0^k\binom{k}{g}R^g(1⊗ r_k+1-g^(m|n)(a,b)).
Also
X_9 = ∑_g=1^k\binom{k}{g-1}R^g(1⊗ r_k+1-g^(m|n)(a,b))
so
X_7 + X_9 = ∑_g=0^k\binom{k+1}{g}R^g(1⊗ r_k+1-g^(m|n)(a,b)) = D_k+1(a, b).
Therefore
X_1 - X_2 - X_3 + X_4 + X_5 + X_6 + X_8 + (X_7 + X_9) = F_k+1(a, b) + D_k+1(a, b).
This completes the proof of the inductive step for φ(r_k+1^(m+1|n)(a,b)).
Next, we consider the value of φ(r_k^(m+1|n)(a,0)). We have
φ(r_k+1^(m+1|n)(a,0)) = φ(r_k^(m+1|n)(a,0))φ(e_00) + ∑_i∈ I(-1)^|i|φ(r_k^(m+1|n)(a,i))φ(e_i0)
= (A_k(1)(t_a∂_0⊗ 1)-E_k(a)- (t_a/t_0⊗ 1)∑_s=2^kA_k(s)B(s))(t_0∂_0⊗ 1 + R)
+∑_i∈ I(-1)^|i|(F_k(a,i)+D_k(a,i))(t_i∂_0⊗ 1 - ∑_j∈ I(-1)^|i||j|t_j/t_0⊗ e_ij)
= A_k(1)(t_a∂_0t_0∂_0⊗ 1)-∑_g=0^k-1(\binom{k}{g}∑_j∈ I(-1)^|a||j|t_j∂_0⊗ r_k-g^(m|n)(a,j)R^g)
- ∑_s=2^k A_k(s) (∑_i,j∈ I(-1)^|i|(1+|j|)t_a∂_i t_j ∂_0⊗ r_s-1^(m|n)(i,j))
+A_k(1)R(t_a∂_0⊗ 1) - ∑_g=0^k-1(\binom{k}{g}R^g+1∑_j∈ I (-1)^|a||j|t_j/t_0⊗ r_k-g^(m|n)(a,j))
-(t_a/t_0⊗ 1)∑_s=2^kA_k(s)RB(s)
+∑_s=1^k(∑_i∈ I(-1)^|i|f_s(a,i)(t_i∂_0⊗ 1))A_k(s)
+ ∑_g=0^k\binom{k}{g}R^g (∑_i∈ I(-1)^|a||i|t_i∂_0⊗ r_k-g^(m|n)(a,i))
-∑_s=1^k(∑_i,j∈ I(-1)^|i|(1+|j|)f_s(a,i) (t_j/t_0⊗ e_ij))A_k(s)
-∑_g=0^k \binom{k}{g}R^g(∑_i,j∈ I(-1)^|a||j|t_j/t_0⊗ (-1)^|i|r^(m|n)_k-g(a,i)e_ij).
Call this sum X_1 - X_2 - X_3 + X_4 - X_5 - X_6 + X_7 + X_8 - X_9 - X_10. We have
X_8- X_2 = R^k (-1)^|a|t_a∂_0 ⊗ r_0^(m|n)(a,a) = R^k(t_a∂_0 ⊗ 1).
Also
X_5 + X_10 = ∑_g=1^k \binom{k}{g-1}R^g(∑_j∈ I (-1)^|a||j|t_j/t_0⊗ r_k+1-g^(m|n)(a,j))
+∑_g=0^k \binom{k}{g}R^g(∑_j∈ I (-1)^|a||j|t_j/t_0⊗ r_k+1-g^(m|n)(a,j))
= ∑_g=0^k \binom{k+1}{g}R^g(∑_j∈ I (-1)^|a||j|t_j/t_0⊗ r_k+1-g^(m|n)(a,j))
= E_k+1(a).
Next,
X_9 = ∑_s=1^k(∑_i,j∈ I(-1)^|i|(1+|j|)f_s(a,i) (t_j/t_0⊗ e_ij))A_k(s).
For any positive integer s,
∑_i,j∈ I(-1)^|i|(1+|j|) f_s(a,i)(t_j/t_0⊗ e_ij)
= ∑_i,j∈ I(-1)^|i|(1+|j|)(∑_ℓ∈ I (-1)^|ℓ| t_a∂_ℓ⊗ r_s-1^(m|n)(ℓ, i))(t_j/t_0⊗ e_ij)
= ∑_j,ℓ∈ I (-1)^|ℓ|(1+|j|) t_a∂_ℓt_jt_0^-1⊗(∑_i∈ I(-1)^|i|r_s-1^(m|n)(ℓ, i) e_ij)
= (t_a/t_0⊗ 1)∑_j,ℓ∈ I(-1)^|ℓ|(1+|j|)∂_ℓ t_j⊗ r_s^(m|n)(ℓ, j)
= (t_a/t_0⊗ 1)B(s+1).
Then
X_9 = (t_a/t_0⊗ 1) ∑_s=2^k+1B(s)(∑_g=0^k+1-s\binom{k}{g}R^g R_2^k+1-s-g).
Also
X_6 = (t_a/t_0⊗ 1) ∑_s=2^kB(s)(∑_g=1^k+1-s\binom{k}{g-1}R^g R_2^k+1-s-g).
Adding, we get
X_6 + X_9 = (t_a/t_0⊗ 1) ∑_s=2^k+1A_k+1(s)B(s).
Next, for any positive integer s,
∑_i∈ I (-1)^|i|f_s(a,i)(t_i∂_0⊗ 1) = ∑_i,j∈ I(-1)^|i||j|t_a∂_j t_i∂_0⊗ r_s-1^(m|n)(j,i)
=∑_i,j∈ I(-1)^|i||j|t_a∂_it_j∂_0⊗ r_s-1^(m|n)(i,j).
Thus
X_7 = ∑_s=1^k(∑_i,j∈ I(-1)^|i||j|t_a∂_it_j∂_0⊗ r_s-1^(m|n)(i,j))A_k(s).
On the other hand,
X_3 = ∑_s=2^k(∑_i,j∈ I(-1)^|i||j|t_a∂_it_j∂_0⊗ r_s-1^(m|n)(i,j))A_k(s)
so
X_7 - X_3 = A_k(1)(t_a∂_0⊗ 1)(R_2-t_0∂_0⊗ 1).
Then
X_7 - X_3 + X_1 = A_k(1)R_2(t_a∂_0⊗ 1) = (∑_g=0^k-1\binom{k}{g}R^g R_2^k-g)(t_a∂_0⊗ 1).
Thus
X_7 - X_3 + X_1 + X_8 - X_2 = (∑_g=0^k\binom{k}{g}R^g R_2^k-g)(t_a∂_0⊗ 1).
Since
X_4 = (∑_g=1^k \binom{k}{g-1}R^g R_2^k-g)(t_a∂_0⊗ 1),
we get
X_7 - X_3 + X_1 + X_8 - X_2 + X_4 = (∑_g=0^k\binom{k+1}{g}R^g R_2^k-g)(t_a∂_0⊗ 1) = A_k+1(1)(t_a∂_0⊗ 1).
Thus
φ(r_k+1^(m+1|n)(a,0)) = (X_7 - X_3 + X_1 + X_8 - X_2 + X_4) - (X_5 + X_10) - (X_6+X_9)
= A_k+1(1)(t_a∂_0⊗ 1) - E_k+1(a) - (t_a/t_0⊗ 1)∑_s=2^k+1 A_k+1(s)B(s).
This completes the inductive step for r_k+1^(m|n)(a,0).
Next, we consider the value of φ(r_k^(m+1|n)(0, b)). We have
φ(r_k+1^(m+1|n)(0,b))
= φ(r_k^(m+1|n)(0, 0))φ(e_0b) +∑_i∈ I(-1)^|i|φ(r_k^(m+1|n)(0,i))φ(e_ib)
= (A_k(1)(t_0∂_0⊗ 1)+R^k - ∑_s=2^k A_k(s)B(s))(t_0∂_b⊗ 1)
+ ∑_i∈ I(-1)^|i| F_k(0,i)(t_i∂_b ⊗ 1 + 1⊗ e_ib+δ_ib(-1)^|i||b|R)
= A_k(1)(t_0∂_0t_0∂_b⊗ 1) + R^k(t_0∂_b⊗ 1) - ∑_s=2^kA_k(s)B(s)(t_0∂_b⊗ 1)
+∑_s=1^k∑_i∈ I (-1)^|i|f_s(0,i)(t_i∂_b⊗ 1)A_k(s)
+ ∑_s=1^k∑_i∈ I(-1)^|i|f_s(0,i)(1⊗ e_ib)A_k(s)
+∑_s=1^k f_s(0,b) A_k(s)R.
Write this sum as X_1 +X_2-X_3+X_4+X_5+X_6. Now
X_3 = ∑_s=2^k ∑_i,j∈ I (-1)^|i|(1+|j|)(∂_i t_j ⊗ r_s-1^(m|n)(i,j))(t_0∂_b⊗ 1)A_k(s)
= ∑_s=2^k ∑_i,j∈ I (-1)^|i|(1+|j|) + (|i|+|j|)|b|(t_0∂_i t_j ∂_b ⊗ r_s-1^(m|n)(i,j))A_k(s)
= ∑_s=2^k ∑_i,j∈ I (-1)^(|i|+|j|)(|i|+|b|)(t_0∂_i t_j ∂_b ⊗ r_s-1^(m|n)(i,j))A_k(s).
Also
X_4 = ∑_s=1^k∑_i,j∈ I (-1)^|i|+|j|(t_0∂_j⊗ r_s-1^(m|n)(j,i))(t_i∂_b ⊗ 1)A_k(s)
=∑_s=1^k∑_i,j∈ I (-1)^|i|+|j|+(|i|+|j|)(|i|+|b|) (t_0∂_jt_i∂_b⊗ r_s-1^(m|n)(j,i) )A_k(s).
Now (-1)^|i|+|j|+(|i|+|j|)(|i|+|b|)= (-1)^|j| + |j||i|+|i||b|+|j||b|= (-1)^(|j|+|i|)(|j|+|b|). Then swapping the variable names i,j gives
X_4 = ∑_s=1^k∑_i,j∈ I(-1)^(|i|+|j|)(|i|+|b|)(t_0∂_i t_j∂_b ⊗ r_s-1^(m|n)(i,j))A_k(s).
Then X_4-X_3 simplifies to
(∑_i∈ I (-1)^|i|t_0∂_it_i∂_b⊗ 1)A_k(1).
Then X_1+X_4-X_3 is
(t_0∂_b⊗ 1)∑_g=0^k-1\binom{k}{g}R^g R_2^k-g.
Thus
X_2 + X_1 + X_4 - X_3 = (t_0∂_b⊗ 1)(∑_g=0^k \binom{k}{g}R^gR_2^k-g).
Since ∑_i∈ I(-1)^|i|f_s(0,i)(1⊗ e_ib) = f_s+1(0,b), we have
X_5 = ∑_s=1^k f_s+1(0,b)A_k(s) = ∑_s=2^k+1 f_s(0,b)∑_g=0^k+1-s\binom{k}{g}R^gR_2^k+1-s-g.
Thus
X_5 + X_2 + X_1 + X_4 - X_3 = ∑_s=1^k+1 f_s(0,b) ∑_g=0^k+1-s\binom{k}{g}R^g R_2^k+1-s-g.
Also
X_6 = ∑_s=1^kf_s(0,b)∑_g=1^k+1-s\binom{k}{g-1}R^gR_2^k+1-s-g.
Adding the previous two equations gives
X_1 + X_2 - X_3 + X_4 + X_5+ X_6 = ∑_s=1^k+1f_s(0,b)A_k+1(s)
which completes the inductive step for the value of φ(r_k+1^(m+1|n)(0,b)).
Finally, we consider the value of φ(r_k^(m+1|n)(0,0)). We have
φ(r_k+1^(m+1|n)(0,0)) = φ(r_k^(m+1|n)(0,0))φ(e_00) + ∑_i∈ I(-1)^|i|φ(r_k^(m+1|n)(0,i)) φ(e_i0)
=(A_k(1)(t_0∂_0⊗ 1) + R^k - ∑_s=2^k A_k(s)B(s))(t_0∂_0 ⊗ 1+R)
+ ∑_i∈ I(-1)^|i|F_k(0,i)(t_i∂_0⊗ 1 -∑_j∈ I (-1)^|i||j|t_j/t_0⊗ e_ij)
=A_k(1)(t_0∂_0t_0∂_0⊗ 1) + R^k(t_0∂_0⊗ 1) -∑_s=2^k A_k(s)B(s)(t_0∂_0⊗ 1)
+ A_k(1)R (t_0∂_0⊗ 1) + R^k+1-∑_s=2^k(A_k(s)RB(s))
+∑_i∈ I (-1)^|i|∑_s=1^kf_s(0,i)(t_i∂_0⊗ 1)A_k(s)
-∑_i∈ I(-1)^|i|∑_s=1^k f_s(0,i)A_k(s)∑_j∈ I(-1)^|i||j|t_j/t_0⊗ e_ij.
Write this as X_1 + X_2 - X_3 + X_4 + X_5 - X_6 + X_7 - X_8.
Since
X_7 = ∑_s=1^k∑_i,j∈ I (-1)^|i|+|j|(t_0∂_j⊗ r_s-1^(m|n)(j,i))(t_i∂_0⊗ 1)A_k(s)
=∑_s=1^k∑_i,j∈ I(-1)^|j|(1+|i|)∂_j t_i⊗ r_s-1^(m|n)(j,i) A_k(s)(t_0∂_0⊗ 1)
=∑_s=1^k∑_i,j∈ I(-1)^|i|(1+|j|)∂_i t_j⊗ r_s-1^(m|n)(i,j) A_k(s)(t_0∂_0⊗ 1)
we have
X_7-X_3 = ∑_i∈ I(-1)^|i|(∂_i t_i⊗ 1)(t_0∂_0⊗ 1) A_k(1) = (R_2-(t_0∂_0⊗ 1))(t_0∂_0⊗ 1)A_k(1).
Then, expanding out the A_k(s) terms gives
X_7-X_3 + X_1 + X_2 = (∑_g=0^k \binom{k}{g}R^gR_2^k-g)(t_0∂_0⊗ 1).
But since X_4 = (∑_g=1^k \binom{k}{g-1}R^g R_2^k-g)(t_0∂_0⊗ 1),
X_7 - X_3 + X_1 + X_2 + X_4 = (∑_g=0^k \binom{k+1}{g}R^g R_2^k-g)(t_0∂_0⊗ 1) = A_k+1(1)(t_0∂_0⊗ 1).
We have
X_8 = ∑_s=1^k(∑_i,j∈ I(-1)^|i|(1+|j|)f_s(0,i)t_j/t_0⊗ e_ij)A_k(s).
For any positive integer s,
∑_i,j∈ I(-1)^|i|(1+|j|)f_s(0,i)t_j/t_0⊗ e_ij = ∑_i,j∈ I(-1)^|i|(1+|j|)∑_ℓ∈ I (-1)^|ℓ|(t_0∂_ℓ⊗ r_s-1^(m|n)(ℓ, i))(t_j/t_0⊗ e_ij)
=∑_i,j∑_ℓ∈ I(-1)^|ℓ|(1+ |j|)∂_ℓt_j⊗ (-1)^|i|( r_s-1^(m|n)(ℓ, i) e_ij)
= ∑_j,ℓ∈ I(-1)^|ℓ|(1+|j|)∂_ℓt_j⊗ r_s^(m|n)(ℓ, j)
=B(s+1).
Then
X_8 = ∑_s=2^k+1(∑_g=0^k+1-s\binom{k}{g}R^g R_2^k+1-s-g)B(s).
Also,
X_6 = ∑_s=2^k(∑_g=1^k+1-s\binom{k}{g-1}R^g R_2^k+1-s-g)B(s)
so
X_6 + X_8 = ∑_s=2^k+1A_k+1(s) B(s).
Finally, X_5 = R^k+1, so combining the above gives
X_1 + X_2 - X_3 + X_4 + X_5 - X_6 + X_7 - X_8 = A_k+1(1)(t_0∂_0⊗ 1) + R^k+1 - ∑_s=2^k+1A_k+1(s)B(s).
This completes the inductive step for φ(r_k+1^(m+1|n)(0,0)), and hence the proof of the theorem.
|
http://arxiv.org/abs/2409.02447v1 | 20240904045808 | FDA-MIMO-Based Integrated Sensing and Communication System with Complex Coefficients Index Modulation for Multi-Target Sensing | [ "Jiangwei Jian", "Bang Huang", "Wenkai Jia", "Mingcheng Fu", "Wen-Qin Wang", "Qimao Huang" ] | eess.SP | [ "eess.SP" ] |
FDA-MIMO-Based Integrated Sensing and Communication System with Complex Coefficients Index Modulation for Multi-Target Sensing
Jiangwei Jian, Student Member, IEEE,
Bang Huang,
Wenkai Jia,
Mingcheng Fu,
Wen-Qin Wang, Senior Member, IEEE
and Qimao Huang
This work was supported by National Natural Science Foundation of China 62171092, and in part by the Postdoctoral Program for Innovation Talents under Grant BX20240054. (Corresponding author: Wen-Qin Wang).
Jiangwei Jian, Wenkai Jia, Mingcheng Fu, and Wen-Qin Wang are with School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, P. R. China. (Email: [email protected]; [email protected]; [email protected]; [email protected]).
Qimao Huang is with School of Physics, University of Electronic Science and Technology of China, Chengdu, 611731, P. R. China. (Email: [email protected]).
Bang Huang is with the Computer, Electrical and Mathematical Science and Engineering (CEMSE) division in King Abdullah University of Science and Technology (KAUST), Thuwal, Makkah Province, Saudi Arabia (Email: [email protected]).
September 9, 2024
================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
The echo signals of frequency diverse array multiple-input multiple-output (FDA-MIMO) feature angle-range coupling, enabling simultaneous discrimination and estimation of multiple targets at different locations. In light of this, based on FDA-MIMO, this paper explores a sensing-centric integrated sensing and communication (ISAC) system for multi-target sensing. On the transmitter side, the complex coefficients index modulation (CCIM) scheme is designed, which carries extra bits by selecting complex coefficients from the coefficient vector. At the sensing receiver, we propose the FDA-MIMO-based spatial spectrum multi-target estimation (SSMTE) method, which first jointly estimates the angle and distance of targets and then estimates the velocities. To reduce the sensing computational complexity, the low-complexity spatial spectrum estimation (LCSSE) algorithm is proposed. LCSSE reduces the complexity without degrading the sensing performance by converting the joint angle-range search into two one-dimensional searches. To address the range ambiguity caused by frequency offset, a frequency offset design criterion (FODC) is proposed. It designs the integer and fractional components of the frequency offset to ensure the ambiguity distance exceeds the maximum sensing range, thereby alleviating parameter pairing errors. Moreover, the closed-form expressions for the bit error rate (BER) tight upper bound and the Cramér-Rao bound (CRB) are derived. Simulation results show that the proposed system excels in multi-target sensing and communications.
Multi-target sensing, frequency diverse array multiple-input multiple-output (FDA-MIMO), complex coefficients index modulation (CCIM), spatial spectrum multi-target estimation (SSMTE), low-complexity spatial spectrum estimation (LCSSE), frequency offset design criterion (FODC).
§ INTRODUCTION
The sharing of spectrum and hardware between radar and communications, termed integrated sensing and communication (ISAC), is emerging as a new trend in next-generation wireless networks <cit.>. On one hand, the ISAC technique enhances system spectrum efficiency and reduces hardware costs. On the other hand, it offers pervasive communication, sensing, and intelligence, fostering various emerging applications like autonomous driving, smart homes, and intelligent communications <cit.>. Depending on differences in application focus, ISAC technology can be categorized into sensing-centric, communication-centric, and communication-sensing trade-off designs <cit.>.
The sensing-centric design regards the targets sensing performance as the primary function, which is the focus of this paper. As early as 1960, Mealey et al. proposed an ISAC system by embedding communication data within radar pulse intervals <cit.>. Subsequently, the application of LFM waveforms in ISAC schemes provided attractive sensing performance due to their larger pulse width-time product <cit.>. However, the phased-array radar used lacked waveform diversity gain. To address this, multiple-input multiple-output (MIMO)-based ISAC systems with waveform diversity degrees of freedom (DoFs) have garnered widespread attention <cit.>. Specifically, <cit.> designed the beampattern for the MIMO radar, where the mainlobe is only used for target sensing, and the level of sidelobe reflects communication symbols. A more promising approach is to combine index modulation (IM) technique with MIMO to increase communication rates. In this regard, <cit.> embedded communication data into spatio-spectral passbands and stopbands, and optimized the beampattern to guarantee the sensing performance. Besides,<cit.> proposed activating partial transmit antennas to carry additional index bits, achieving target perception through sparse array. As a further study, an ISAC system based on frequency agile radar was proposed in <cit.>, which utilized frequency offset selection to convey index bits and proposed the multi-target sensing method. Later, <cit.> extended this work by jointly IM in frequency and spatial dimensions, which further improved the multi-target estimation accuracy via compressed sensing.
On the flip side, the communication-centric and trade-off designs focus on the high communication performance and sensing-communication balance, respectively. In terms of communication-centered design, in <cit.>, the sensing function was attached to the spread spectrum communication systems to realize the dual function. Motivated by this, the orthogonal frequency division multiplexing (OFDM) based ISAC system was proposed in <cit.>, which estimates targets by processing the OFDM echo signals in the fast-slow time domain. Later, MIMO was combined with OFDM, proposed as MIMO-OFDM-based ISAC systems, to improve the communication and sensing performance. These encompass subcarriers allocation and optimization <cit.>, channel interference exploitation <cit.>, precoding <cit.>, waveform optimization <cit.>, and uplink design <cit.>. In the trade-off design, the focus lies in optimizing joint sensing and communication metrics through precoder design. This includes optimizing metrics such as the users sum rate and sensing beampattern optimization <cit.>, Cramér-Rao bound (CRB) and communication signal-to-interference-plus-noise ratio (SINR) optimization <cit.>.
However, aforementioned ISAC systems mainly relied on phased arrays (PAs), whose steering is only angle-dependent without range information. An emerging frequency diverse array (FDA)-MIMO technique extends the DoFs of signal processing to the angle-range dimension by introducing a frequency offset among the adjacent elements <cit.>. Inspired by this, FDA-MIMO radar has been applied in high-resolution target estimation <cit.>, target detection <cit.>, range clutter suppression <cit.>, mainlobe interference suppression <cit.>, and exhibited superior radar performance to the PA-based MIMO in the range dimension. Moreover, FDA-MIMO can also benifit communications. <cit.> described how the angle-range coupling character of FDA was utilized to guarantee communication security for specific location users, which cannot be achieved by the PA. <cit.> employed the inherent frequency offset resources of FDA as IM entities, further enhancing the communication rates and bit error rate (BER) performance.
FDA-MIMO has demonstrated attractive performance in both radar and communications, driving its incorporation into ISAC systems <cit.>. Specifically, <cit.> embedded communication bits into the spreading sequence of each pulse, yielded satisfactory sensing performance. Another approach involved embedding constellation symbols into multiple sub-pulses witin one pulse, enabling simultaneous communication and sensing <cit.>. Further, <cit.> proposed embedding phase modulation symbols into FDA-MIMO radar waveforms and optimized the transmit beamforming to achieve the sensing-communication performance balance. Moreover, <cit.> convey extra bits by permutating the transmit frequency offsets, which improved communication rates and CRB performance. However, the challenges of enhancing communication rates and accurately estimating targets persist. To address this, <cit.> proposed the frequency offset permutation index modulation (FOPIM) scheme, which involved selecting and permutating frequency offsets to carry additional bits, along with the target estimation method. This approach achieved superior sensing performance compared to MIMO-based ISAC systems.
Nevertheless, the aforementioned FDA-MIMO-based ISAC system failed to consider the multi-target sensing, and how to suppress the range periodicity during multi-target estimation remains an open question. Moreover, the FOPIM method only activates partial frequency offsets, leading to the spectrum wastage. Motivated by this, this paper explores the FDA-MIMO-based ISAC system in multi-target scenarios and proposes a complex coefficients index modulation (CCIM) transmission scheme independent of frequency offsets. The main contributions of our work are listed as follows:
1)
In this work, we propose the complex coefficients index modulation (CCIM) scheme to enhance the communication rate. In the CCIM method, each antenna selects a complex coefficient from a normalized complex coefficient vector to transmit additional bits and conveys an independent quadrature amplitude modulation (QAM) symbol. Additionally, the closed-form expressions for the system BER tight upper bound are derived to evaluate the communication performance.
2)
At the sensing receiver, a spatial spectrum multi-target estimation (SSMTE) method is proposed for multi-target sensing. Specifically, within the target-containing range bins, the angles and ranges of targets are jointly estimated in the spatial spectrum of FDA-MIMO. Subsequently, the least squares (LS) is employed to estimate the velocity. The SSMTE method suffers from high complexity due to its angle-range two-dimensional (2-D) search. To tackle this issue, the low-complexity spatial spectrum estimation (LCSSE) approach is proposed, which dramatically reduces the complexity by converting the 2-D angle-range search into two one-dimensional (1-D) searches. Simulation results show that LCSSE and SSMTE methods have similar sensing performance.
3)
The FDA-MIMO exhibits periodic variation in its steering vector with range, resulting in range ambiguity in target estimation. To tackle this issue, the frequency offset design criterion (FODC) is designed in this paper. FODC designs the integer and fractional components of each transmit frequency offset to ensure that the range periodicity of the steering vector exceeds the maximum sensing distance, thereby mitigating range ambiguity in multi-target estimation. Moreover, we derive closed-form expressions for the system CRB performance.
The rest of this paper is organized as follows. Section <ref> proposes the CCIM approach for the FDA-MIMO-based ISAC system. Section <ref> discusses the signal processing of system sensing and communication receivers. Section <ref> analyzes the theoretical performance of the system CRB, complexity and BER. Finally, simulation results are discussed in section <ref>, and the paper is concluded in section <ref>.
Notations: ^T, ^ and ^† stand for the transpose, conjugate and conjugate transpose operations, respectively. 𝐈_N denotes the identity matrix of order N. ⌊·⌋, ! and Γ (·) denote the floor function, factorial and Gamma function, respectively. Tr[·] represents the trace operation and j=√(-1). Re{·} and Im{·} are the real part and the imaginary part operators, respectively. ⊙ and ⊗ stand for the Hadamard product and Kronecker product operations, respectively. ⌊·⌋ _LCM denots the least common multiple operation. * represents the convolution operation. diag() denotes the vector diagonalization operation.
§ SYSTEM MODEL
This paper considers an ISAC system as shown in Fig. <ref>. The FDA-MIMO base station (BS) is equipped with N transmit antennas and M receive antennas for sensing G targets, while serving a communication user equipped with U antennas. On the transmitter side of ISAC systems, IM techniques are widely adopted to enhance the system communication rates <cit.>. Although some recent works have combined FDA with IM for ISAC systems, they carried additional information by activating some frequency offsets <cit.>, which resulted in a waste of spectrum resources. To overcome this drawback as well as to further enhance the communication rate, the CCIM method is proposed in this paper.
The proposed CCIM scheme carries extra bits by combining the constellation symbols with elements in a complex coefficient vector. Specifically, we generate a normalized complex coefficient vector c=[ c_1,⋯ ,c_j,⋯ ,c_J ] ^T∈ℂ ^J× 1 with c^†c/J=1, where the elements are shared with the communication user. We define a K pulse repetition interval (PRI) as a coherent processing interval (CPI) and each PRI is of length T. In the kth PRI, the trasmitted constellation symbol vector is denoted as 𝐱_k=[ x_1^k,⋯ ,x_n^k,⋯ ,x_N^k ] ^T∈ℂ ^N× 1, where x_n^k denotes the L-ary unit energy QAM symbol. Then, the CCIM symbol is designed as
x̃_n^k=c^Tς_n^k=c_i_n,kx_n^k,
where
ς_n^k=[ 0,⋯,0, x_n^k, 0,⋯,0 ]^T∈ℂ^J× 1,
with x_n^k in the i_n,kth position, denotes the complex coefficient selection vector of the nth antenna at the kth PRI. c_i_n,k stands for the complex coefficient selected from c, and i_n,k denotes the index of c_i_n,k in c. From (<ref>) and (<ref>), we can claim that the proposed CCIM method can carry N× (⌊log _2J ⌋ +log _2L) bits in one transmission.
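For concreteness, the bit-to-symbol mapping of CCIM can be sketched as follows (a minimal mock-up of our own, not the authors' implementation; the function and variable names are ours, and a natural binary index mapping is assumed):

```python
import numpy as np

def ccim_modulate(bits, c, qam):
    """Map a bit stream to one PRI of CCIM symbols x~_n = c_{i_{n,k}} x_n.
    bits : 0/1 sequence of length N*(log2(J) + log2(L))
    c    : normalized complex coefficient vector (length J, c^H c / J = 1)
    qam  : unit-energy L-QAM constellation (length L)
    """
    J, L = len(c), len(qam)
    bj, bl = int(np.log2(J)), int(np.log2(L))
    syms = []
    for n in range(0, len(bits), bj + bl):
        chunk = bits[n:n + bj + bl]
        i_nk = int("".join(map(str, chunk[:bj])), 2)   # index bits select c_{i_{n,k}}
        s_nk = int("".join(map(str, chunk[bj:])), 2)   # payload bits select the QAM symbol
        syms.append(c[i_nk] * qam[s_nk])
    return np.array(syms)

# toy run: N = 4 antennas, J = 4 coefficients, 4-QAM
rng = np.random.default_rng(0)
c = np.exp(2j * np.pi * rng.random(4))                 # unit-modulus, so c^H c / 4 = 1
qam = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
print(ccim_modulate(rng.integers(0, 2, 16), c, qam))
```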
The transmitter, namely FDA-MIMO BS, is considered as a uniform linear array (ULA). The transmit frequency of the nth BS antenna is designed as
f_n=f_c+(n-1) Δ f,
where f_c denotes the common carrier frequency, whereas Δ f denotes the frequecy offset increment. Following the proposed CCIM scheme, the transmit signal of the nth antenna at the kth PRI is expressed as
s_n^k(t) =ϱ (t-kT) x̃_n^ke^j2π [ f_c+(n-1) Δ f ] t,
where k=0,1,⋯ ,K-1. ϱ (t) is the unit energy baseband waveform with the pulse duration T_W, which satisfies the following orthogonality <cit.>:
∫_-∞^∞ϱ(t)ϱ^†(t-τ)e^j2π(m-m')Δ ft dt = 1 if m=m', and 0 if m ≠ m', for all τ.
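As a sanity check, this orthogonality can be reproduced numerically at τ = 0 for a rectangular baseband pulse, provided Δf is an integer multiple of 1/T_W (a standard sufficient condition; the snippet below is our own illustration, not from the paper):

```python
import numpy as np

Tw = 1e-6                            # pulse duration T_W
df = 1 / Tw                          # frequency offset increment Delta f
t = np.arange(0, Tw, Tw / 2000)
rho = np.ones_like(t) / np.sqrt(Tw)  # unit-energy rectangular pulse

for dm in range(3):                  # dm = m - m'
    val = np.trapz(np.abs(rho) ** 2 * np.exp(2j * np.pi * dm * df * t), t)
    print(dm, round(abs(val), 6))    # ~1 for dm = 0, ~0 otherwise
```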
§ SYSTEM COMMUNICATION AND SENSING FUNCTIONS
In this section, we model the received signals of the communication and sensing receivers, as well as design signal processing methods.
§.§ Sensing receiver
We assume that the locations of G point targets in Fig. <ref> is { (R_1,θ _1) ,⋯ ,(R_g,θ _g) ,⋯ ,(R_G,θ _G) } and the propagation path with the base station is the line-of-sight <cit.>. Then, on the BS side, the received signal of the mth antenna in the kth PRI can be written as
y_m^k( t ) = ∑_n=1^N∑_g=1^Gξ _ge^j2πℱ _gts_n^k(t-τ _m,n,g)
≈ ∑_n=1^N∑_g=1^Gξ _ge^j2πℱ _gtϱ (t-kT-τ _g)x̃_n^ke^j2π (f_c+Δ f_n) t
× e^-j2πΔ f_n2R_g/ce^j2πf_c(n-1) d_1sinθ _g/ce^j2πf_c( m-1) d_2sinθ _g/c,
where Δ f_n=(n-1)Δ f denots the frequency offset of the nth transmit antenna. τ _m,n,g=2R_g-(n-1) d_1sinθ _g-(m-1) d_2sinθ _g/c represents the delay between the nth transmit antenna and the mth receive antenna for the gth target. d_1 and d_2 denote the spacing of neighboring elements in the transmit and receive arrays, respectively. c represents the light speed. ξ _g is the reflection coefficient of the gth target, which absorbs the constant term e^-j2π f_c2R_g/c <cit.>. Note that the approximation ϱ (t-kT-τ _m,n,g) ≈ϱ (t-kT-τ _g) is considered in (<ref>) under the narrow-band assumption. The terms e^j2πΔ f_n(n-1) d_1sinθ _g/c, e^j2πΔ f_n(m-1)d_2sinθ _g/c are tiny enough to be ignored <cit.>. ℱ _g≈2v_g/cf_c and v_g denote the Doppler shift and velocity of the gth target, respectively. Note that, similar to <cit.>, the Doppler spreading from the frequency offset is ignored in this paper.
The sensing receiver structure is shown in Fig. <ref>, which can also be deployed to the communication user without additional design. The received signal is first down-converted with e^-j2π f_ct and then fed to the N-channel demodulator. In the nth channel, the down-converted signal is multiplied by e^-j2πΔ f_nt and then match-filtered by ϱ (t). Following this, the filtered signal of the nth channel of the mth receive antenna can be expressed as <cit.>
y_m^k= y_m^k(t) e^-j2π f_cte^-j2πΔ f_nt *ϱ (t)
= ∑_g=1^Gξ _ge^j2πψ _g^kx̃_n^ke^-j2π (n-1) Δ f2R_g/c
× e^j2πf_c(n-1) d_1sinθ _g/ce^j2πf_c(m-1) d_2sinθ _g/c.
where ψ _g^k=ℱ _g (k-1) T <cit.>. Then, the N outputs of the mth receive antenna can be stacked into a vector as
𝐲_m^k =[ y_m,1^k,⋯ ,y_m,n^k,⋯ ,y_m,N^k ] ^T
=∑_g=1^Gξ _ge^j2πψ _g^k𝐱̃_^k⊙𝐚_T (R_g) ⊙𝐚_T(θ _g) e^j2πf_c (m-1) d_2sinθ _g/c,
where
𝐚_T(R_g) =[ 1,⋯,e^jφ _n(R_g),⋯ ,e^jφ _N(R_g)] ^T,
and
𝐚_T(θ _g) =[ 1,⋯,e^jϕ _n(θ _g),⋯ ,e^jϕ _N(θ _g)] ^T,
represent the transmit range and angle steering vectors, respectively. Note that φ _n(R_g) =-2πΔ f_n2R_g/c and ϕ _n( θ _g) =2πf_c(n-1) d_1sinθ _g/c in (<ref>) and (<ref>). 𝐱̃_^k=[ x̃_1^k,x̃_2^k,⋯ ,x̃_N^k] ^T denotes the emitted CCIM symbol vector.
Further, the demodulated outputs of all channels of the M antennas can be written as
𝐲^k =[ 𝐲_1^k^T,⋯ ,𝐲_m^k^T,⋯ ,𝐲_M^k^T ] ^T
=∑_g=1^Gξ _ge^j2πψ _g^k𝐚_R( θ _g ) ⊗[ 𝐱̃^k⊙𝐚_T( R_g ) ⊙𝐚_T( θ _g ) ]+𝐧_k,
where
𝐚_R(θ _g) =[ 1,⋯ ,e^jω _m( θ _g ),⋯ ,e^jω _M(θ _g)] ^T,
stands for the receive steering vector with ω _m(θ _g) =2πf_c(m-1) d_2sinθ _g/c. 𝐧_k∼𝒞𝒩 (0,σ _1^2𝐈_M× N) represents the receive noise vector with σ _1^2 denoting the noise power.
From (<ref>), one can observe that the communication symbol term, 𝐱̃^k, degrades the sensing performance. To address this dilemma, we introduce a sensing compensation vector, as
𝐛_R=1_M⊗𝐱̃_^k,
where 1_M∈ℂ ^M× 1 denotes the all-one vector. Consequently, the compensated receive signal in the kth PRI can be expressed as
𝐲̃_^k =𝐲_^k⊙𝐛_R
=∑_g=1^Gξ _ge^j2πψ _g^k𝐚_R( θ _g ) ⊗ [ 𝐚_T( R_g ) ⊙𝐚_T( θ _g ) ]+𝐧_k.
Equations (<ref>) and (<ref>) show that at the sensing receiver, the interference of communication symbols can be removed by compensating the received data using prior communication information. In other words, the proposed system does not need to consider communication and sensing balance. This is facilitated by the orthogonality between the transmitted waveforms, which allows the receiver to process the data from each demodulation channel separately. Similar methods are reported in <cit.>.
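A minimal sketch of this compensation step is given below (our own illustration; since the BS knows its transmitted symbols, we divide element-wise by them, which removes the symbol term exactly even for non-unit-modulus QAM, whereas for unit-modulus symbols this coincides with the Hadamard-product form above up to conjugation):

```python
import numpy as np

def compensate(y_k, x_tilde_k, M):
    """Strip the known CCIM symbols from one demodulated snapshot.
    y_k       : received vector of length M*N (all channels of all antennas)
    x_tilde_k : the N transmitted CCIM symbols of this PRI (known at the BS)
    """
    b = np.tile(x_tilde_k, M)        # replicate as 1_M (x) x~^k
    return y_k / b                   # compensated snapshot: steering structure only
```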
In the sequel, our focus turns to the estimation of range, angle, and velocity. Although conventional MIMO radars are capable of angle, range, and velocity estimations, the issue of how to pair the estimated parameters remains open. The FDA-MIMO can fill this gap well. From (<ref>), one can observe that the range and angle parameters are coupled <cit.>. With this observation, this paper proposes the FDA-MIMO-based SSMTE method.
Specifically, as pointed out by <cit.>, the matched filtering (also known as pulse compression) of the signal in (<ref>) will produce peaks in the range bins where the targets are located. Thus, we can obtain the coarse range estimations of G targets as { r_1,⋯ ,r_g,⋯ ,r_G } <cit.>. r_g=𝒫 _gΔ r represents the principal range of the gth target, where 𝒫 _g and Δ r=c/2B represent the range bin number and bin size, respectively. Note that B=Δ f denotes the bandwidth of the baseband signal. In other words, the error between the true range R_g and the coarse estimate r_g for the gth target is within one bin size, i.e., (R_g-r_g) ∈ [0,Δ r].
Then, we estimate the angle of the gth target. Based on the spatial spectrum definition of FDA-MIMO <cit.>, we construct the joint range-angle spatial spectrum estimation as
[ R̂_g,θ̂_g ] =
argmax_[ θ̂_g∈ [-90,90] ,; R̂_g∈ [r_g-Δ r/2,r_g+Δ r/2]; ]1/| 𝐚_TR^†(R̂_g,θ̂_g) 𝐐^-1𝐚_TR(R̂_g,θ̂_g) |,
where R̂_g and θ̂_g denote the estimations of R_g and θ _g. 𝐐=1/K∑_k=1^K𝐲̃_^k(𝐲̃^k) ^† denotes the sampling covariance matrix of the received signal within one CPI <cit.>,
𝐚_TR(R̂_g,θ̂_g) =𝐚_R(θ̂_g) ⊗ [ 𝐚_T(R̂_g) ⊙𝐚_T(θ̂_g) ] ,
denotes the transmit-receive steering vector. 𝐚_R(θ̂_g), 𝐚_T(R̂_g), 𝐚_T(θ̂_g) are calculated by (<ref>), (<ref>), (<ref>), respectively.
To estimate the targets' velocities, the received data of K PRIs are stacked into a matrix, as
𝐘 =[𝐲^1,⋯ ,𝐲^k,⋯ ,𝐲^K]
=𝐀𝐃+𝐍,
where 𝐀=[ 𝐚_TR^(R_1,θ _1) ,𝐚_TR^(R_2,θ _2) ,⋯ ,𝐚_TR(R_G,θ _G) ] ∈ℂ ^MN× G denotes the targets manifold matrix. 𝐃=[ ψ_1(ℱ _1) ,ψ_2(ℱ _2) ,⋯ ,ψ_G(ℱ _G) ] ^T∈ℂ ^G× K is the targets Doppler phase matrix with ψ_g(ℱ _g) =[ e^j2πψ _g^1,e^j2πψ _g^2,...,e^j2πψ _g^K ] ^T. 𝐍=[ 𝐧_1,𝐧_2,⋯ ,𝐧_K ] ∈ℂ ^MN× K represents the receive noise matrix over K snapshots.
Then, the LS method is employed to estimate velocities. With the angle and range estimations in (<ref>), we can write the estimated manifold matrix as 𝐀̂=[ 𝐚_TR( R̂_1,θ̂_1 ) ,𝐚_TR( R̂_2,θ̂_2 ) ,⋯ ,𝐚_TR( R̂_G,θ̂_G ) ]. The targets' Doppler phase matrix is estimated as
𝐃̂ = argmin_𝐃‖𝐘-𝐀̂𝐃‖^2.
Solving (<ref>) yields 𝐃̂=( 𝐀̂^T𝐀̂ ) ^-1𝐀̂^T𝐘. Let 𝐃̂_F be the first K-1 columns of 𝐃̂ and 𝐃̂_B be the second to Kth columns of 𝐃̂. One can observe that the Doppler phase matrix is a Vandermonde matrix. Therefore, there exists a rotation vector 𝐝(ℱ̂) =[ e^j2πℱ̂_1T,⋯ ,e^j2πℱ̂_gT,⋯ ,e^j2πℱ̂_GT ] ^T∈ℂ ^G× 1 satisfying
𝐃̂_B^T=𝐃̂_F^T𝐄_ℱ̂,
where 𝐄_ℱ̂=diag[ 𝐝( ℱ̂ ) ]. Then, 𝐄_ℱ̂ is calculated as 𝐄_ℱ̂=( 𝐃̂_F𝐃̂_F^T ) ^-1𝐃̂_F𝐃̂_B^T.
Finally, the velocity of the gth target is estimated as
v̂_g=c·angle (κ _g,g)/4f_cπ T,
where κ _g,g denotes the gth diagonal element of 𝐄_ℱ̂. angle( ·) means the phase-taking operation.
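The Doppler/velocity step can be summarized in a few lines (a sketch under our own naming, assuming the MN×K snapshot matrix Y and the estimated manifold A_hat; note that numpy's lstsq solves the complex LS problem with conjugate transposes where the text writes (·)^T):

```python
import numpy as np

def estimate_velocities(Y, A_hat, T, fc, c0=3e8):
    """LS Doppler estimation: Y is MN x K, A_hat is MN x G."""
    D = np.linalg.lstsq(A_hat, Y, rcond=None)[0]     # estimated Doppler phase matrix
    Df, Db = D[:, :-1], D[:, 1:]                     # first / last K-1 columns
    E = np.linalg.lstsq(Df.T, Db.T, rcond=None)[0]   # rotation: Db^T = Df^T E
    return c0 * np.angle(np.diag(E)) / (4 * np.pi * fc * T)
```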
§.§ Low-complexity Spatial Spectrum Estimation
Inspecting (<ref>) reveals that the SSMTE method requires a joint search of the 2-D spatial spectrum, which suffers from high complexity. To address this problem, we propose the LCSSE algorithm, which estimates the targets' angles, distances and velocities via successive 1-D searches. Specifically, decomposing the denominator term of (<ref>) yields
f(R_g,θ_g) = 𝐚_TR^†(R_g,θ_g) 𝐐^-1𝐚_TR(R_g,θ_g)
= 𝐚_T^†(R_g) 𝐙(θ_g) 𝐚_T(R_g),
where 𝐙(θ_g) =[ 𝐚_R(θ _g) ⊗diag(𝐚_T(θ _g))]^†𝐐^-1[ 𝐚_R(θ _g) ⊗diag(𝐚_T(θ _g)) ]. Then, 𝐙(θ_g) is chunked as
𝐙(θ_g) = [ z_1(θ_g), 𝐳_2(θ_g); 𝐳_3(θ_g), 𝐳_4(θ_g) ],
where z_1(θ_g) = z_1,1(θ_g), 𝐳_2(θ_g) = [ z_1,2(θ_g), z_1,3(θ_g), ⋯, z_1,N(θ_g) ], 𝐳_3(θ_g) = 𝐳_2(θ_g)^†, and
𝐳_4(θ_g) = [ z_2,2(θ_g) ⋯ z_2,N(θ_g); ⋮ ⋱ ⋮; z_N,2(θ_g) ⋯ z_N,N(θ_g) ].
Let 𝐚_T(R_g) = [1, 𝐚̄_T(R_g)^T]^T with 𝐚̄_T(R_g) = [ e^jφ_2(R_g), e^jφ_3(R_g), ⋯, e^jφ_N(R_g) ]^T. The cost function f(R_g,θ_g) is formulated as
f(R_g,θ_g) = z_1(θ_g) + 𝐚̄_T^†(R_g)𝐳_2^†(θ_g) + 𝐳_2(θ_g)𝐚̄_T(R_g)
+ 𝐚̄_T^†(R_g)𝐳_4(θ_g)𝐚̄_T(R_g).
The partial derivative of f(R_g,θ_g) with respect to 𝐚̄_T(R_g) yields
∂ f(R_g,θ_g)/∂𝐚̄_T(R_g) = 2𝐳_2^†(θ_g) + 2𝐳_4(θ_g)𝐚̄_T(R_g).
Setting (<ref>) equal to 0, we have 𝐚̄_T(R_g) = -𝐳_4(θ_g)^-1𝐳_2^†(θ_g). Applying 𝐚̄_T(R_g) to (<ref>), the angle θ_g is then estimated by
[θ̂_g] =argmax_θ _g∈ [-90,90]1/|z_1(θ _g) -𝐳_2(θ _g) 𝐳_4(θ _g) ^-1𝐳_2^†(θ _g)|.
The targets' angle estimations are obtained by searching for the peaks of (<ref>), denoted by θ̂: (θ̂_1,⋯ ,θ̂_g,⋯ ,θ̂_G).
Although the targets' angle parameters have now been obtained through (<ref>), their corresponding distances are unknown. To handle this, the angle estimations are brought into (<ref>) one by one. Thereafter, the distance corresponding to θ̂_g is estimated as
[R̂_g] =argmax_R_g∈A_r1/| 𝐚_TR^†(R_g,θ̂_g) 𝐐^-1𝐚_TR(R_g,θ̂_g) | ,
where A_r=[r_g-Δ r/2,r_g+Δ r/2] ∪⋯∪ [r_G-Δ r/2,r_G+Δ r/2] denotes the distance search area.
In summary, in the proposed LCSSE algorithm, the paired angle and distance estimations of the targets are obtained by (<ref>) and (<ref>), which are recorded as (R̂_1,θ̂_1) ,⋯ , (R̂_G,θ̂_G). Finally, the velocity estimations for each paired target are calculated by (<ref>).
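A compact sketch of the LCSSE angle search is shown below (our own illustration with hypothetical helper names; a_R and a_Tth are assumed to return the receive and transmit angle steering vectors, and Q_inv is the inverse sample covariance matrix):

```python
import numpy as np

def lcsse_angle_spectrum(Q_inv, a_R, a_Tth, grid):
    """1-D LCSSE angle spectrum; its peaks give the angle estimates."""
    P = np.empty(len(grid))
    for k, th in enumerate(grid):
        V = np.kron(a_R(th)[:, None], np.diag(a_Tth(th)))   # a_R (x) diag(a_T), MN x N
        Z = V.conj().T @ Q_inv @ V                          # the N x N block matrix Z(theta)
        z1, z2, z4 = Z[0, 0], Z[0:1, 1:], Z[1:, 1:]
        denom = z1 - (z2 @ np.linalg.solve(z4, z2.conj().T))[0, 0]
        P[k] = 1.0 / abs(denom)
    return P
```

Each candidate angle thus costs one small N×N solve instead of a full 2-D scan over angle and range, which is where the complexity saving of LCSSE comes from.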
§.§ Frequency Offset Design Criterion for Resistance to Range Estimation Ambiguity
Recalling back to (<ref>), an unexpected observation is that the transmit range steering vector is a periodic function of distance with period c/2Δ f, i.e., 𝐚_T(R_g) =𝐚_T(R_g+𝔦c/2Δ f), where 𝔦 denotes a positive integer. This characteristic can lead to errors in the pairing of targets' angles and ranges. For example, suppose the positions of target 1 and target 2 are (R_1,θ _1) and (R_1+c/2Δ f,θ _2), respectively. Then the range of target 2 may be estimated to be R_1 by (<ref>); namely, target 2 may be misestimated as (R_1,θ _2).
To address this problem, an effective way is to design the frequency offset of each transmit antenna so that the phase does not flip periodically over the desired range. With this in mind, we propose the frequency offset design criterion for resistance to range estimation ambiguity.
Specifically, we design the frequency offset of the nth antenna as Δ f_n=ε _nΔ f. Note that to guarantee the orthogonality between the transmitted signals, the following condition Δ f_n+1-Δ f_n⩾Δ f should be satisfied. We split ε _n as
ε _n=𝔦 _n+𝔮 _n,
where 𝔦 _n and 𝔮 _n denote the integer and fractional parts of ε _n, respectively. Then, the nth element in 𝐚_T(R_g) is rewritten as
e^-j2πΔ f_n2R_g/c=e^-j2π𝔦 _nΔ f2R_g/ce^-j2π𝔮 _nΔ f2R_g/c.
For the e^-j2π𝔦 _nΔ f2R_g/c term, the distance period is r_𝔦 _n=c/2Δ f. On the other hand, the distance period of the e^-j2π𝔮 _nΔ f2R_g/c term is r_𝔮 _n=c/2Δ f𝔮 _n. Let the distance period of e^-j2πΔ f_n2R_g/c be r_n; then r_n should be a positive integer multiple of both r_𝔦 _n and r_𝔮 _n. Guided by this, we have
r_n=ϖ _nr_𝔦 _n=ς _nr_𝔮 _n,
where ϖ _n and ς _n denote positive integers to be determined. In other words, (<ref>) holds when ϖ _n/ς _n=1/𝔮 _n.
Therefore, the period of the transmit range steering vector 𝐚_T(R_g) in the range dimension is equal to the least common multiple (LCM) of the distance periods of all its elements, as
r =⌊ r_1,⋯ ,r_n,⋯ ,r_N ⌋ _LCM
=c/2Δ f×⌊ϖ _1,⋯ ,ϖ _n,⋯ ,ϖ _N ⌋ _LCM.
In other words, r and r_n should remain positive integer multiples, i.e., r/r_n=𝔦.
In practice, the maximum system sensing range is cT/2. Then the frequency offsets should be designed according to (<ref>)-(<ref>) such that r⩾cT/2 to ensure that no distance ambiguity occurs in the interest range.
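A rough numerical check of this design criterion can be scripted as below (Python sketch; the limit_denominator tolerance is our assumption, and the offset lists are the FODC and linear offsets quoted later in the simulations):

```python
from fractions import Fraction
from math import lcm

def range_period(eps, df, c=3e8):
    """Range period of a_T(R) for per-antenna offsets df_n = eps_n * df."""
    factors = []
    for e in eps:
        q = Fraction(e).limit_denominator(1000) % 1       # fractional part q_n
        factors.append(1 if q == 0 else q.denominator)    # smallest pi_n with pi_n*q_n integer
    return c / (2 * df) * lcm(*factors)                   # r = (c/2df) * LCM of pi_n

# FODC offsets -> 7500 m period, vs. 75 m for plain linear offsets
print(range_period([0, 1, 2, 3.17, 4.2, 5.2], df=2e6))
print(range_period([0, 1, 2, 3, 4, 5], df=2e6))
```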
§.§ Communication receiver
Let ( R̅_u,θ̅_u ) be the location of the communication user. Similar to (<ref>), the received signal of the uth antenna in the kth PRI can be expressed as
y_u^k( t ) = ∑_n=1^Nh_u,ns_n^k( t-τ _u,n)
≈ ∑_n=1^Nh_u,nϱ( t-kT-τ _u ) x̃_n^ke^j2π( f_c+Δ f_n^) t
× e^-j2πΔ f_n2R̅_u/ce^j2πf_c( n-1 ) d_1sinθ̅_u/c,
where τ _u,n=R̅_u-(n-1) d_1sinθ̅_u-( u-1 ) d_3sinθ̅_u/c and h_u,n∼𝒞𝒩 (0,σ _C^2) stand for the delay and the channel coefficient between the nth transmit antenna and the uth receive antenna. d_3 represents the adjacent spacing of the receiver. Note that the term e^-j2π f_c2R̅_u/c is absorbed into the term h_u,n.
The receiver structure of the communication user is shown in Fig. <ref>. Similar to the signal demodulation process of the sensing receiver, the output sampled signals of the nth channel of the uth receive antenna is
y̅_u,n^k = y_u^k( t ) e^-j2π f_cte^-j2πΔ f_nt*ϱ(t) +n_u,n
=x̃_n^kh_u,n+n_u,n,
where n_u,n∼𝒞𝒩 (0,σ _2^2) is the receive noise. Note that the constant term e^-j2π( Δ f_n^2R̅_u/c-f_c(n-1) d_1sinθ̅_u/c) is absorbed into h_u,n.
Inspecting (<ref>) reveals that the baseband signals from the N transmit antennas can be separated at the receiver. Leveraging this property, we can combine all demodulated outputs from the same transmit antenna to improve the system BER performance. Specifically, we stack the outputs of the nth channels of U receive antennas into a vector as
𝐲̅_n^k =[ y̅_1,n^k,⋯ ,y̅_u,n^k,⋯ ,y̅_U,n^k] ^T = x̃_n^k𝐡_n^+𝐧_u^ = c_i_n,kx_n^k𝐡_n^+𝐧_u^,
where 𝐡_n^=[h_1,n,⋯ ,h_u,n,⋯ ,h_U,n] ^T stands for the channel vector between the nth transmit antenna and the receiver. 𝐧_u^=[ n_1,n,⋯ ,n_u,n,⋯ ,n_U,n] ^T represents the receive noise vector.
Finally, the maximum likelihood decoder is used to estimate the index and constellation bits emitted by the nth antenna as
[ î_n,k,x̂_n^k] =argmin_i_n,k,x_n^k‖𝐲̅_n^k-c_i_n,kx_n^k𝐡_n^‖ ^2,
where î_n,k, x̂_n^k denote the estimations of i_n,k, x_n^k. Note that the transmitted index and constellation bits from all transmit antenna are sequentially estimated by (<ref>) and (<ref>).
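A minimal sketch of this per-antenna ML detector (Python/NumPy; the variable names are hypothetical):

```python
import numpy as np

def ml_detect(y_n, h_n, coeff_set, const_set):
    """Joint ML search over the index (complex coefficient) and QAM symbol.

    y_n: (U,) received vector, h_n: (U,) channel,
    coeff_set: (J,) complex coefficients, const_set: (L,) QAM symbols."""
    best, est = np.inf, (0, 0)
    for j, cj in enumerate(coeff_set):
        for l, xl in enumerate(const_set):
            metric = np.linalg.norm(y_n - cj * xl * h_n) ** 2
            if metric < best:
                best, est = metric, (j, l)
    return est  # (index estimate, constellation estimate)
```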
§ SYSTEM PERFORMANCE ANALYSIS
In this paper, the widely used BER and CRB metrics are considered to evaluate the system communication and sensing performance, respectively. In this section, closed-form expressions for the system BER and CRB are derived. Moreover, the system sensing complexity is analyzed.
§.§ System CRB Analysis
Within one CPI, the noiseless data matrix for a target located at (θ _g,R_g) can be rewritten as
Π =ξ _g{𝐚_R(θ _g) ⊗[ 𝐚_T(R_g) ⊙𝐚_T(θ _g) ] }ψ_g^T(ℱ _g)
=ξ _g𝐖,
where 𝐖=𝐚_TR(θ _g,R_g) ψ_g^T(ℱ _g) and 𝐚_TR(θ _g,R_g) is denoted in (<ref>).
For convenience, define the unknown parameter vector as
ρ= [Re{ξ _g } ,Im{ξ _g } ,R_g,θ _g,ℱ _g] ^T.
According to the CRB definition <cit.>, the estimation accuracy lower bound of ρ is given by the diagonal elements of 𝐅^-1. 𝐅∈ℂ ^5× 5 represents the Fisher information matrix of the received signal, whose (x,y)th element is given by <cit.>
F_x,y =2Re{Tr[ ∂Π^†/∂ρ _xΛ^-1∂Π/∂ρ _y] } =2Re{Tr[ ∂ (ξ _g𝐖) ^†/∂ρ _xΛ^-1∂ (ξ _g𝐖)/∂ρ _y] },
where Λ=σ _1^2I_N× M represents the noise covariance matrix. ρ _x/y denotes the x/yth element of ρ. Substituting (<ref>), (<ref>) into (<ref>), we have
∂ (ξ _g𝐖)/∂Re{ξ _g }=𝐖,
∂ (ξ _g𝐖)/∂Im{ξ _g }=j𝐖,
∂(ξ _g𝐖)/∂ R_g=ξ _g( 𝐚_R(θ _g) ⊗[ 𝐚̇_T(R_g) ⊙𝐚_T(θ _g)] ) ψ_g^T(ℱ _g),
∂ (ξ _g𝐖)/∂θ _g= ξ _g{𝐚̇_R(θ _g) ⊗[ 𝐚_T(R_g) ⊙𝐚_T(θ _g) ] +𝐚_R(θ _g) ⊗[ 𝐚_T(R_g) ⊙𝐚̇_T (θ _g) ] }ψ_g^T(ℱ _g),
∂ (ξ _g𝐖)/∂ℱ _g=ξ _g𝐚_R(θ _g) ⊗[ 𝐚_T(R_g) ⊙𝐚_T(θ _g) ] ψ̇_g^T(ℱ _g),
where 𝐚̇_T(R_g) =𝐄_T,R𝐚_T(R_g), 𝐚̇_R(θ _g) =𝐄_R,θ𝐚_R(θ _g), 𝐚̇_T(θ _g) =𝐄_T,θ𝐚_T(θ _g) and ψ̇_g (ℱ _g) =𝐄_ℱψ_g(ℱ _g) with
𝐄_T,R=-j(4π/c)Δ f diag{ 0,1,⋯ ,N-1 },
𝐄_R,θ=j(2π f_cd_2/c)cosθ _g diag{ 0,1,⋯ ,M-1 },
𝐄_T,θ=j(2π f_cd_1/c)cosθ _g diag{ 0,1,⋯ ,N-1 },
𝐄_ℱ=j2π T diag{ 0,1,⋯ ,K-1 }.
Then, the Fisher information matrix can be rewritten as (<ref>),
where ζ _R=Λ^-1/2∂𝐖/∂ R_g, ζ _θ=Λ^-1/2∂𝐖/∂θ _g, ζ _ℱ=Λ^-1/2∂𝐖/∂ℱ _g and ζ =Λ^-1/2𝐖.
Further, 𝐅^-1 is represented by
𝐅^-1=1/2[ 𝐅_11, 𝐅_12; 𝐅_21, 𝐅_22 ] ^-1=1/2[ ×, ×; ×, 𝐃^-1 ],
where 𝐅_11∈ℂ ^2× 2, 𝐅_12∈ℂ ^2× 3, 𝐅_21∈ℂ ^3× 2 and 𝐅_22∈ℂ ^3× 3 are the chunking matrices in 𝐅. Note that the diagonal elements of 𝐃^-1 contain the estimation information for angle, distance, and velocity. According to the chunked matrix inverse formula <cit.>, 𝐃 is calculated as
𝐃=𝐅_22-𝐅_21𝐅_11^-1𝐅_12=[ D_11, D_12, D_13; D_21, D_22, D_23; D_31, D_32, D_33 ],
where D_i,j denotes the (i,j)th element of 𝐃.
Finally, the CRBs of the range, angle, and Doppler frequency estimations are given by
CRB_R_g=det[ D_22, D_23; D_32, D_33 ] / (2 det(𝐃)),
CRB_θ _g=det[ D_11, D_13; D_31, D_33 ] / (2 det(𝐃)),
and
CRB_ℱ=det[ D_11, D_12; D_21, D_22 ] / (2 det(𝐃)),
respectively, where det(·) denotes the determinant operator.
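The CRB evaluation above reduces to a Schur complement followed by cofactor/determinant ratios; a compact sketch (Python/NumPy, assuming the 5×5 Fisher matrix 𝐅 has already been assembled) is:

```python
import numpy as np

def crbs_from_fisher(F):
    """F: 5x5 Fisher matrix ordered as [Re(xi), Im(xi), R_g, theta_g, F_g]."""
    F11, F12, F21, F22 = F[:2, :2], F[:2, 2:], F[2:, :2], F[2:, 2:]
    D = F22 - F21 @ np.linalg.solve(F11, F12)         # 3x3 Schur complement
    detD = np.linalg.det(D)
    # (D^{-1})_{ii} = det(minor of D) / det(D); the 1/2 comes from F^{-1} = (1/2)[...]
    minor = lambda i: np.linalg.det(np.delete(np.delete(D, i, 0), i, 1))
    return [minor(i) / (2 * detD) for i in range(3)]  # [CRB_R, CRB_theta, CRB_F]
```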
§.§ Complexity Analysis of System Sensing Methods
We analyze the complexity of the proposed SSMTE and LCSSE algorithms by counting the required multiplication operations. The complexity of computing 𝐐^-1 is 𝒪{ K(NM) ^2+(NM) ^3 }. For mathematical convenience, let s_r and s_θ denote the number of distance and angle search steps within a range bin, respectively. Then the angle-distance estimation complexity in (<ref>) is 𝒪{s_rs_θ( N^2M^2+NM+1 )}. For velocity estimation, the computational complexities of 𝐃̂ and 𝐄_ℱ̂ are 𝒪{ G^3+2G^2NM+GMNK } and 𝒪{ 2G^3+2G^2 (K-1) }, respectively. The complexity of computing the G target speeds in (<ref>) is 𝒪{ 4G }. Assuming that there are G'⩽ G targets located at different range bins, the complexity of the SSMTE method is
𝒪{ KN^2M^2+N^3M^3+s_rs_θG'(N^2M^2+NM+1) +3G^3+2G^2(K+NM-1) +GMNK+4G } .
For the LCSSE method, the computational complexity of 𝐙(θ _g) is 𝒪{ N^2M^2+NM }. The complexity of (<ref>) is 𝒪{ s_θ[(N-1) ^3+(N-1) ^2+(N-1) +1] }, while (<ref>) costs 𝒪{ s_rG' (N^2M^2+NM+1) }. Finally, the complexity of the LCSSE method is
𝒪{ KN^2M^2+N^3M^3+s_θ [(N-1) ^3+(N-1) ^2+N] +s_rGG' (N^2M^2+NM+1) +3G^3+2G^2(K+NM-1) +GMNK+4G }.
§.§ System BER Upper Bound Analysis
Referring back to Section <ref>, we know that at the BS side, the information bits carried by one antenna can be categorized into index bits μ _I=log _2J and constellation bits μ _C=log _2L. Therefore, the system average bit error rate (ABER) is formulated as
P_CCIM=∑_n=1^N(P_I,nμ _I+P_C,nμ _C)/[N(μ _I+μ _C )],
where P_I,n, P_C,n denote the average error probability (ABEP) of the index and constellation bits carried by the nth transmit antenna, respectively.
We first derive P_I,n. One can observe that there are C_μ _I^e events in which e∈[ 1,μ _I ] bits out of the μ _I index bits are in error. A misestimated index is equally likely to be any of the remaining J-1 complex coefficients. Thus, the ABEP of the index bits can be modeled as
P_I,n =P_IM/(2^μ _I-1) μ _I∑_e=1^μ _IeC_μ _I^e =2^μ _IP_IM/2 (2^μ _I-1),
where the last equality uses the identity ∑_e=1^μ _IeC_μ _I^e=μ _I2^μ _I-1.
where P_IM denotes the probability that the selected complex coefficient is incorrectly detected.
The P_IM can be derived by the union bounding technique. Specifically, recalling (<ref>), the conditional pairwise error probability (PEP) that i_n,k is erroneously detected as i_n,k^' given 𝐡_n^ can be formulated as
Pr(i_n,k→ i_n,k^'|𝐡_n^) =Pr( ‖𝐲̅_n^k-c_i_n,kx_n^k𝐡_n^‖ ^2>‖𝐲̅_n^k-c_i_n,k^'x_n'^k𝐡_n^‖ ^2 ) =Q( √(κ/2σ _2^2)),
where κ = ‖ c_i_n,k^'x_n'^k𝐡_n -c_i_n,k x_n^k𝐡_n‖ ^2.
Since the elements in 𝐡_n follow i.i.d. 𝒞𝒩 (0,σ _C^2), κ in (<ref>) can be rewritten as κ =∑_u=1^2Uϖ _u^2, where ϖ _u∼𝒩 (0,σ _κ^2) with
σ _κ^2=| c_i_n,k^' x_n'^k-c_i_n,k x_n^k|^2σ _C^2/2 if i_n,k^'≠ i_n,k^, and σ _κ^2=| x_n'^k-x_n^k|^2| c_i_n,k|^2σ _C^2/2 if i_n,k^'=i_n,k.
From (<ref>), one can observe that κ follows the chi-square distribution with 2U degrees of freedom (DoF), whose probability density function (PDF) is written as
f_κ( x ) =1/[2^UΓ( U ) ( σ _κ^2) ^U] x^U-1exp( -x/2σ _κ^2).
Averaging (<ref>) over κ gives
Pr(i_n,k→ i_n,k^')=∫_0^∞Q( √(x/2σ _2^2)) f_κ( x )dx = [ P( α) ] ^U∑_u=0^U-1C_U-1+u^u[ 1-P( α) ] ^u,
where
P(α )=1/2( 1-√(α/1+α)),
with α =σ _κ^2/2σ _2^2.
According to the union bound technique, the tight upper bound of P_IM can be expressed as <cit.>
P_IM⩽1/JL∑_i_n,k≠ i_n,k^'∑_x_n^k,x_n'^kPr(i_n,k→ i_n,k^').
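For numerical evaluation, the closed-form PEP average and the union bound can be coded directly (Python sketch; coeffs and consts denote the complex-coefficient and constellation sets, sigma2 stands for σ_2^2, and the nested loops follow the bound above):

```python
import numpy as np
from math import comb

def avg_pep(sigma_k2, sigma2, U):
    """Closed-form average of Q(sqrt(kappa/2sigma2)) over the 2U-DoF chi-square kappa."""
    alpha = sigma_k2 / (2 * sigma2)
    P = 0.5 * (1 - np.sqrt(alpha / (1 + alpha)))
    return P**U * sum(comb(U - 1 + u, u) * (1 - P)**u for u in range(U))

def p_im_upper(coeffs, consts, sigma_C2, sigma2, U):
    """Union bound on the index misdetection probability P_IM."""
    total = 0.0
    for j, cj in enumerate(coeffs):
        for jp, cjp in enumerate(coeffs):
            if jp == j:
                continue                  # only the i' != i pairs enter the bound
            for xl in consts:
                for xlp in consts:
                    sk2 = abs(cjp * xlp - cj * xl)**2 * sigma_C2 / 2
                    total += avg_pep(sk2, sigma2, U)
    return total / (len(coeffs) * len(consts))
```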
Next, we derive the ABEP of the constellation bits, i.e., P_C,n. It consists of two parts: in the first, the index bits are correctly detected but the QAM symbol is incorrectly detected; in the second, the index bits are incorrectly detected, which in turn corrupts the detection of the QAM symbol. With this in mind, we have
P_C,n=(J-1)/J· P_IM+( 1-P_IM) P_QAM,
where P_QAM denotes the ABEP of the constellation bits when the complex coefficient is correctly detected, which is derived in the sequel.
We consider an L-ary QAM symbol that can be split into two pulse amplitude modulation (PAM) symbols: a υ-ary PAM for the I-component and an ω-ary PAM for the Q-component, with L=υ×ω. The conditional probability that the qth bit of the I-component is in error can be expressed as
P_υ( q | γ _j,l) =2/υ∑_i=0^(1-2^-q) υ -1{ (-1) ^⌊i· 2^q-1/υ⌋( 2^q-1-⌊i· 2^q-1/υ+1/2⌋) Q( (2i+1)ε√(2γ _j,l)) },
where ε =√(3/(υ ^2+ω ^2-2)) denotes the minimum normalized distance between two constellation points. γ _j,l^= ‖ c_j^x_l^𝐡_n^‖ ^2/σ _2^2 denotes the instantaneous total received signal-to-noise ratio (SNR) on 𝐡_n when the complex coefficient c_j and the constellation symbol x_l are transmitted, the PDF of which is given by <cit.>
f_γ _j,l(x)=x^U-1/[2^UΓ( U ) ( | c_jx_l|^2σ _C^2/2σ _2^2) ^U]exp( -σ _2^2/| c_jx_l|^2σ _C^2x ).
By averaging (<ref>) over γ _j,l, we have
P_υ^j,l (q) =2/υ∑_i=0^ (1-2^-q) υ -1{ (-1) ^⌊i· 2^q-1/υ⌋( 2^q-1-⌊i· 2^q-1/υ+1/2⌋) [ P( α _j,l' ) ] ^U∑_u=0^U-1C_U-1+u^u[ 1-P (α _j,l') ] ^u },
where
P( α _j,l' ) =1/2( 1-√(α _j,l'/1+α _j,l')),
with α _j,l'=[ (2i+1)ε] ^2| c_j x_l|^2σ _C^2/σ _2^2.
Similarly, the error probability of the qth bit in the ω-ary PAM component can be expressed as
P_ω^j,l(q) =2/ω∑_i=0^(1-2^-q) ω -1{ (-1) ^⌊i· 2^q-1/ω⌋( 2^q-1-⌊i· 2^q-1/ω+1/2⌋) [ P (α _j,l') ] ^U∑_u=0^U-1C_U-1+u^u[ 1-P(α _j,l') ] ^u }.
Therefore, when the complex coefficient c_j is transmitted, the ABEP of the constellation symbol x_l is calculated as
P_j,l =1/log _2L[ ∑_q=1^log _2υP_υ^j,l(q)+∑_q=1^log _2ωP_ω^j,l(q)].
Further, the ABEP P_QAM is obtained as
P_QAM=1/JL∑_j=1^J∑_l=1^LP_j,l.
Substituting (<ref>), (<ref>), (<ref>) and (<ref>) into (<ref>), the system ABER can be expressed as
P_CCIM={2^μ _IP_IMμ _I/[2 (2^μ _I-1)]+[ 1/2P_IM+(1-P_IM) P_QAM] μ _C}/(μ _I+μ _C).
§ SIMULATION RESULTS
In this section, we perform Monte Carlo simulations to evaluate the proposed ISAC system performance and verify the analytical results. Unless otherwise specified, the main parameters used in the experimental study are set to f_c=10 GHz, d_1=d_2=c/f_c, T=60 μ s, T_W=20 μ s, Δ f=2 MHz, and σ _C^2=1. The SNRs of the sensing receiver and the communication receiver are denoted as 1/σ _1^2 and 1/σ _2^2, respectively.
§.§ Sensing Simulation
In this subsection, the root mean square error (RMSE) and hit rate are adopted to evaluate the system sensing performance. A hit is declared if the sum of the angle, distance and velocity estimation errors for the three targets in Fig. <ref> is less than 0.2 <cit.>. The RMSE is defined as
RMSE=1/G∑_g=1^G√(1/M∑_m=1^M ( ρ _g-ρ̂_g,m) ^2),
where M stands for the number of Monte Carlo trials. ρ _g denotes the true value of R_g, θ _g or v _g, and ρ̂_g,m represents the estimate of ρ _g in the mth trial.
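A small sketch of this metric (Python/NumPy; est[g, m] is assumed to hold the mth Monte Carlo estimate of true[g]):

```python
import numpy as np

def rmse(true, est):
    """true: (G,) true parameters; est: (G, M) Monte Carlo estimates."""
    per_target = np.sqrt(np.mean((est - true[:, None])**2, axis=1))
    return per_target.mean()   # average over the G targets, as in the definition
```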
The frequency offsets design criterion proposed in Section <ref> and widely used linear frequency offsets <cit.> are denoted as 'FODC' and 'LFO' in simulations, respectively. Note that the 'FODC' transmit frequency offsets are set as {{ 0,1,2,3.17,4.2,5.2 }×Δ f }MHz, while the 'LFO' transmit frequency offsets are set as {{0,1,2,3,4,5}×Δ f }MHz. The proposed sensing methods are compared with FDA-MIMO-based frequency offset permutation index modulation (FOPIM) scheme <cit.>.
To evaluate the proposed frequency offsets design criterion, in Fig. <ref>, we compare the target recovery performance of the SSMTE and LCSSE algorithms under FODC and LFO. We set up 3 targets, at { 10.55°,40.9 m,8.62 m/s}, { 10.55°,89.6 m,20.42 m/s}, and { 32.01°,115.9 m,36.5 m/s}, respectively. From Fig. <ref> (a) and Fig. <ref> (c), with LFO, one can observe that both the SSMTE and LCSSE methods suffer from range ambiguity, where the targets' distances are estimated into other range bins, leading to incorrect parameter estimation and pairing. Fig. <ref> (b) and Fig. <ref> (d) show that the targets can be correctly estimated when using FODC. The reason for this benefit is shown in Fig. <ref>.
Fig. <ref> gives the signal transmit-receive spatial spectrum with LFO and FODC. When adopting the linear frequency offset scheme, recalling (<ref>), the range period of the transmit range steering vector is c/2Δ f=75 m. Therefore, when estimating a certain target, other targets will form false targets with high peaks in the spatial spectrum, as shown in <ref>(a). This leads to target range estimation errors and target parameter pairing errors. In contrast, the proposed frequency offsets design criterion greatly increases the range period. Hence, targets in other range bins can hardly form peak values in the estimated range bin, reducing the probability of misestimation.
Fig. <ref> compares the hit rates of the proposed methods with FOPIM and MIMO schemes. Note that the frequency offset pool size for the FOPIM scheme is set to N <cit.>. Fig. <ref> indicates that the hit rates of the proposed sensing approaches with FODC are improved by increasing the snapshots number.
This is because more snapshots yield more accurate covariance matrix estimation results, thus improving the parameter recovery performance.
Under the FODC method, the hit rate of the LCSSE method is approximately equal to that of the SSMTE method.
The hit rates remain 0 for SSMTE with LFO, LCSSE with LFO and the FOPIM method, indicating their inability to estimate parameters for multiple targets simultaneously. This phenomenon arises because the distances of target 1 and target 3 differ by one period, causing range ambiguity and hindering parameter pairing in the FOPIM and LFO-related schemes.
In Fig. <ref>, we compare the multi-target RMSE performance among different schemes. We find that the SSMTE and LCSSE methods have similar range and angle estimation accuracies at middle to high SNRs. At very low SNRs, the LCSSE method is slightly better than the SSMTE method. This is because the angle and range errors of the SSMTE method are coupled. That is, in the low SNR region, the angle and range estimation errors are significant and affect each other. In contrast, in the LCSSE method, the target range estimation does not affect the angle estimation.
On the other hand, Fig. <ref>(a) shows that the angle estimation error of FOPIM stays around 0.45° after a brief drop, which is much higher than that of the proposed methods. This is attributed to the fact that the FOPIM method relies only on a simple receiver beamformer to estimate the angle, which has a very low angular resolution. There are two reasons why FOPIM's range estimation error in <ref>(b) remains high: 1) the large angle estimation error degrades the distance estimation accuracy; 2) the FOPIM method is unable to pair multi-target angles and distances, resulting in mis-paired distance estimates.
Fig. <ref> illustrates RMSEs and root CRBs for the proposed system with different numbers of transmit antennas when FODC is employed. Note that the frequency offsets are set as {{ 0,1,2,3.17,4.2,5.2, 6.2, 7.2 }×Δ f } MHz and {{ 0,1,2,3.17,4.2,5.2, 6.2, 7.2, 8.2, 9.2}×Δ f } MHz when N=8 and N=10, respectively. One can see that the estimation accuracy of target angle, range and velocity improves with the increasing number of transmit antennas. The SSMTE and LCSSE methods have similar accuracies for different N.
One can see that the angle and distance estimation performance is close to that of the CRB, but the velocity estimation differs significantly from the CRB. This is because in the proposed methods, the angle and distance estimation errors are substituted into the velocity estimation, which reduces the velocity estimation accuracy, while the velocity CRB is independent of the angle-distance estimation error. Nevertheless, at N=6, SNR=0dB, the velocity RMSE of the proposed method is 0.026m/s, which meets most civil sensing scenarios.
Fig. <ref> compares the computational complexity of the proposed SSMTE and LCSSE algorithms for different N. The complexity of both methods increases with the number of range bins (G') to be estimated. The complexity of the LCSSE is two orders of magnitude lower than that of the SSMTE method, thanks to its conversion of a 2-D angle-range joint search into two 1-D searches. Considering the sensing performance comparisons in Fig. <ref> to Fig. <ref>, we conclude that the LCSSE approach is the wiser choice to sense targets in the proposed system.
§.§ Communication Simulation
In this subsection, we investigate the communication performance of the proposed CCIM scheme. Note that “Ana" and “Sim" represent the BER theoretical upper bound and the Monte Carlo simulation results in the following figures, respectively.
Fig. <ref> compares the BER of the proposed CCIM method with the FOPIM <cit.> and traditional MIMO <cit.> methods for varying numbers of receive antennas. Note that the CCIM method carries 16 bits, and for fairness, the frequency offset pool size for FOPIM is set to 4, and the modulation order for both the FOPIM and MIMO methods is set to 8. From Fig. <ref>, we observe that the system BER improves with increasing U, which stems from the higher receive diversity gain. The simulations of the CCIM method match well with the theoretical results, which verifies the BER analysis. Moreover, MIMO shows the worst BER performance among the three schemes. This is because its transmit symbols are coupled to each other, resulting in a small judgment domain, which deteriorates the BER performance.
Another interesting finding is that the CCIM method outperforms the FOPIM method when the number of receive antennas is small (U=1,2). However, when U increases, the BER of the proposed CCIM method is worse than that of FOPIM. This can be explained as follows: Eq. (38) in <cit.> shows that the frequency offset permutation estimation error probability (P_perm) of the FOPIM scheme is governed by the frequency offset combination estimation error probability (P_comb). P_comb remains high at low U, resulting in a high overall index bit error probability. As U increases, P_comb decreases dramatically. Furthermore, comparing (42) in <cit.> and (<ref>) in this paper reveals that the judgment domain spacing of the FOPIM method is larger than that of CCIM, which results in a lower BER for FOPIM than for CCIM at larger U.
Fig. <ref> shows the BER comparison results with different numbers of transmit antennas. Note that the parameter configurations for the MIMO and FOPIM methods are the same as for the CCIM scheme. As N increases, the BER performance of the CCIM scheme gradually outperforms that of the FOPIM scheme. Moreover, the BERs of the FOPIM and MIMO approaches increase with increasing N, whereas the CCIM's BER remains unchanged as N increases. This is because as N increases, the FOPIM method suffers a higher error probability in estimating frequency offsets. On the other hand, (<ref>) shows that the bits carried by every transmit antenna are decoded independently in the CCIM method, with no dependence on N. Therefore, we conclude that CCIM can achieve higher communication rates without loss of BER performance by increasing the number of transmit antennas.
In Fig. <ref>, we study the BER of the CCIM scheme with the different size of the complex coefficient set. It is seen that the BER of the proposed CCIM approach rises as the size of the complex coefficient set becomes larger. For every doubling of J, the BER performance decreases about 4 dB. The reason for this phenomenon can be found in (<ref>), where the index bits misestimation probability P_IM increases with increasing J, leading to a deterioration in the system BER performance.
Fig. <ref> compares the bits per pulse among different ISAC schemes: the proposed CCIM, FOPIM <cit.>, FRaC <cit.>, JCRS <cit.>, and MAJoRCom <cit.>. In the simulation, the total bandwidth of FOPIM is set equal to that of CCIM, namely, the size of FOPIM's frequency offset pool is set to N. To be fair, we set J=N for CCIM to have the same index resource. JCRS has a waveform set size equal to J. MAJoRCom uses a separate frequency for each antenna, while FRaC activates N-2 antennas. Fig. <ref> shows that the FOPIM scheme carries more bits than the FRaC, JCRS and MAJoRCom schemes; the reason for this has been discussed in <cit.>. Moreover, Fig. <ref> depicts that the proposed CCIM method outperforms the FOPIM approach in terms of bits per pulse. This observation can be elaborated as follows: in Fig. <ref> the bits per pulse for CCIM and FOPIM are N× (⌊log _2N ⌋ +log _2L) and Nlog _2L+⌊log _2N! ⌋ +⌊log _2C_N^N⌋, respectively. Since N^N>N!, we see that CCIM carries more bits than the FOPIM method.
§ CONCLUSION
This paper investigated the FDA-MIMO-based ISAC system in a multi-target sensing scenario. Specifically, to improve the communication rate, a CCIM scheme was proposed at the transmitter, which carries extra bits by selecting complex coefficients from a complex coefficient vector. To estimate targets, the SSMTE method was proposed. By performing 2-D searches of the spatial spectrum over target-containing range bins, the angles and ranges of targets can be estimated. Then, the target velocities were estimated by the LS method. However, the 2-D search has high complexity. To address this, we designed the LCSSE method to reduce the complexity by converting the 2-D search into two 1-D searches.
On the other hand, the FDA-MIMO's range steering vector changes periodically with distance, resulting in range ambiguity. To address this issue, the FODC scheme was proposed, which adjusts the integer and fractional parts of each transmit frequency offset to enlarge the range periodicity, thereby mitigating range ambiguity in multi-target estimation. Besides, closed-form expressions for the CRB, complexity and BER upper bound were derived. Simulation results illustrated that the LCSSE method dramatically reduces the complexity of SSMTE with no degradation in sensing accuracy. Moreover, the proposed FDA-MIMO-based ISAC system outperforms the FOPIM-based ISAC system in terms of multi-target sensing performance.
IEEEtran
10
url@samestyle
gu2023integrated
J. Gu, G. Ding, H. Wang, and Y. Xu, “Integrated communications and jamming:
Toward dual-functional wireless networks under antagonistic environment,”
IEEE Communications Magazine, 2023.
ma2020joint
D. Ma, N. Shlezinger, T. Huang, Y. Liu, and Y. C. Eldar, “Joint
radar-communication strategies for autonomous vehicles: Combining two key
automotive technologies,” IEEE Signal Processing Magazine, vol. 37,
no. 4, pp. 85–97, 2020.
liu2022integrated
F. Liu, Y. Cui, C. Masouros, J. Xu, T. X. Han, Y. C. Eldar, and S. Buzzi,
“Integrated sensing and communications: Toward dual-functional wireless
networks for 6G and beyond,” IEEE Journal on Selected Areas in
Communications, vol. 40, no. 6, pp. 1728–1767, 2022.
zheng2019radar
L. Zheng, M. Lops, Y. C. Eldar, and X. Wang, “Radar and communication
coexistence: An overview: A review of recent methods,” IEEE Signal
Processing Magazine, vol. 36, no. 5, pp. 85–99, 2019.
cui2021integrating
Y. Cui, F. Liu, X. Jing, and J. Mu, “Integrating sensing and communications
for ubiquitous IoT: Applications, trends, and challenges,” IEEE
Network, vol. 35, no. 5, pp. 158–167, 2021.
zhang2021overview
J. A. Zhang, F. Liu, C. Masouros, R. W. Heath, Z. Feng, L. Zheng, and
A. Petropulu, “An overview of signal processing techniques for joint
communication and radar sensing,” IEEE Journal of Selected Topics in
Signal Processing, vol. 15, no. 6, pp. 1295–1315, 2021.
di2013spatial
M. Di Renzo, H. Haas, A. Ghrayeb, S. Sugiura, and L. Hanzo, “Spatial
modulation for generalized MIMO: Challenges, opportunities, and
implementation,” Proceedings of the IEEE, vol. 102, no. 1, pp.
56–103, 2013.
kumari2017ieee
P. Kumari, J. Choi, N. González-Prelcic, and R. W. Heath, “IEEE 802.11
ad-based radar: An approach to joint vehicular communication-radar system,”
IEEE Transactions on Vehicular Technology, vol. 67, no. 4, pp.
3012–3027, 2017.
liu2018mu
F. Liu, C. Masouros, A. Li, H. Sun, and L. Hanzo, “MU-MIMO communications
with MIMO radar: From co-existence to joint transmission,” IEEE
Transactions on Wireless Communications, vol. 17, no. 4, pp. 2755–2770,
2018.
mealey1963method
R. M. Mealey, “A method for calculating error probabilities in a radar
communication system,” IEEE Transactions on Space Electronics and
Telemetry, vol. 9, no. 2, pp. 37–42, 1963.
levanon1988radar
N. Levanon, Radar Principles. Wiley, New York, 1988.
senanayake2022frequency
R. Senanayake, P. J. Smith, T. Han, J. Evans, W. Moran, and R. Evans,
“Frequency permutations for joint radar and communications,” IEEE
Transactions on Wireless Communications, vol. 21, no. 11, pp. 9025–9040,
2022.
hassanien2015dual
A. Hassanien, M. G. Amin, Y. D. Zhang, and F. Ahmad, “Dual-function
radar-communications: Information embedding using sidelobe control and
waveform diversity,” IEEE Transactions on Signal Processing, vol. 64,
no. 8, pp. 2168–2181, 2015.
yu2022integrated
X. Yu, X. Yao, J. Yang, L. Zhang, L. Kong, and G. Cui, “Integrated waveform
design for MIMO radar and communication via spatio-spectral modulation,”
IEEE Transactions on Signal Processing, vol. 70, pp. 2293–2305, 2022.
wang2018dual
X. Wang, A. Hassanien, and M. G. Amin, “Dual-function MIMO radar
communications system design via sparse array optimization,” IEEE
Transactions on Aerospace and Electronic Systems, vol. 55, no. 3, pp.
1213–1226, 2018.
huang2020majorcom
T. Huang, N. Shlezinger, X. Xu, Y. Liu, and Y. C. Eldar, “MAJoRCom: A
dual-function radar communication system using index modulation,” IEEE
Transactions on Signal Processing, vol. 68, pp. 3423–3438, 2020.
ma2021frac
D. Ma, N. Shlezinger, T. Huang, Y. Liu, and Y. C. Eldar, “FRaC: FMCW-based
joint radar-communications system via index modulation,” IEEE Journal
of Selected Topics in Signal Processing, vol. 15, no. 6, pp. 1348–1364,
2021.
mizui1993vehicle
K. Mizui, M. Uchida, and M. Nakagawa, “Vehicle-to-vehicle communication and
ranging system using spread spectrum technique,” in IEEE 43rd
Vehicular Technology Conference. IEEE, 1993, pp. 335–338.
sturm2011waveform
C. Sturm and W. Wiesbeck, “Waveform design and signal processing aspects for
fusion of wireless communications and radar sensing,” Proceedings of
the IEEE, vol. 99, no. 7, pp. 1236–1259, 2011.
xu2023bandwidth
Z. Xu and A. Petropulu, “A bandwidth efficient dual-function radar
communication system based on a mimo radar using OFDM waveforms,”
IEEE Transactions on Signal Processing, vol. 71, pp. 401–416, 2023.
keskin2021mimo
M. F. Keskin, H. Wymeersch, and V. Koivunen, “MIMO-OFDM joint
radar-communications: Is ICI friend or foe?” IEEE Journal of
Selected Topics in Signal Processing, vol. 15, no. 6, pp. 1393–1408, 2021.
temiz2021optimized
M. Temiz, E. Alsusa, and M. W. Baidas, “Optimized precoders for massive MIMO
OFDM dual radar-communication systems,” IEEE Transactions on
Communications, vol. 69, no. 7, pp. 4781–4794, 2021.
johnston2022mimo
J. Johnston, L. Venturino, E. Grossi, M. Lops, and X. Wang, “MIMO OFDM
dual-function radar-communication under error rate and beampattern
constraints,” IEEE Journal on Selected Areas in Communications,
vol. 40, no. 6, pp. 1951–1964, 2022.
temiz2021dual
M. Temiz, E. Alsusa, and M. W. Baidas, “A dual-function massive MIMO uplink
OFDM communication and radar architecture,” IEEE Transactions on
Cognitive Communications and Networking, vol. 8, no. 2, pp. 750–762, 2021.
liu2018toward
F. Liu, L. Zhou, C. Masouros, A. Li, W. Luo, and A. Petropulu, “Toward
dual-functional radar-communication systems: Optimal waveform design,”
IEEE Transactions on Signal Processing, vol. 66, no. 16, pp.
4264–4279, 2018.
liu2022transmit
X. Liu, T. Huang, and Y. Liu, “Transmit design for joint MIMO radar and
multiuser communications with transmit covariance constraint,” IEEE
Journal on Selected Areas in Communications, vol. 40, no. 6, pp. 1932–1950,
2022.
liu2021cramer
F. Liu, Y.-F. Liu, A. Li, C. Masouros, and Y. C. Eldar, “Cramér-rao bound
optimization for joint radar-communication beamforming,” IEEE
Transactions on Signal Processing, vol. 70, pp. 240–253, 2021.
sammartino2013frequency
P. F. Sammartino, C. J. Baker, and H. D. Griffiths, “Frequency diverse MIMO
techniques for radar,” IEEE Transactions on Aerospace and Electronic
Systems, vol. 49, no. 1, pp. 201–222, 2013.
huang2022adaptive
B. Huang, J. Jian, A. Basit, R. Gui, and W.-Q. Wang, “Adaptive distributed
target detection for FDA-MIMO radar in gaussian clutter without training
data,” IEEE Transactions on Aerospace and Electronic Systems,
vol. 58, no. 4, pp. 2961–2972, 2022.
jia2023optimal
W. Jia, A. Jakobsson, and W.-Q. Wang, “Optimal frequency offset selection for
FDA-MIMO beampattern design in the range-angle plane,” IEEE Signal
Processing Letters, 2023.
lan2021single
L. Lan, M. Rosamilia, A. Aubry, A. De Maio, and G. Liao, “Single-snapshot
angle and incremental range estimation for FDA-MIMO radar,” IEEE
Transactions on Aerospace and Electronic Systems, vol. 57, no. 6, pp.
3705–3718, 2021.
xu2015joint
J. Xu, G. Liao, S. Zhu, L. Huang, and H. C. So, “Joint range and angle
estimation using MIMO radar with frequency diverse array,” IEEE
Transactions on Signal Processing, vol. 63, no. 13, pp. 3396–3410, 2015.
lan2020glrt
L. Lan, A. Marino, A. Aubry, A. De Maio, G. Liao, J. Xu, and Y. Zhang,
“GLRT-based adaptive target detection in FDA-MIMO radar,” IEEE
Transactions on Aerospace and Electronic Systems, vol. 57, no. 1, pp.
597–613, 2020.
gui2020low
R. Gui, W.-Q. Wang, and Z. Zheng, “Low-complexity GLRT for FDA radar
without training data,” Digital Signal Processing, vol. 107, p.
102861, 2020.
sun2024space
Y. Sun, W.-Q. Wang, and C. Jiang, “Space–time-range clutter suppression via
tensor-based STAP for airborne FDA-MIMO radar,” Signal
Processing, vol. 214, p. 109235, 2024.
lan2020suppression
L. Lan, J. Xu, G. Liao, Y. Zhang, F. Fioranelli, and H. C. So, “Suppression of
mainbeam deceptive jammer with FDA-MIMO radar,” IEEE Transactions on
Vehicular Technology, vol. 69, no. 10, pp. 11 584–11 598, 2020.
xu2015deceptive
J. Xu, G. Liao, S. Zhu, and H. C. So, “Deceptive jamming suppression with
frequency diverse MIMO radar,” Signal Processing, vol. 113, pp.
9–17, 2015.
cheng2021physical
Q. Cheng, S. Wang, V. Fusco, F. Wang, J. Zhu, and C. Gu, “Physical-layer
security for frequency diverse array-based directional modulation in
fluctuating two-ray fading channels,” IEEE Transactions on Wireless
Communications, vol. 20, no. 7, pp. 4190–4204, 2021.
jian2023physical
J. Jian, W.-Q. Wang, A. Basit, and B. Huang, “Physical layer security for
frequency diverse array-based dual-hop spatial modulation,” IEEE
Transactions on Wireless Communications, 2023.
qiu2020multi
B. Qiu, L. Wang, J. Xie, Z. Zhang, Y. Wang, and M. Tao, “Multi-beam index
modulation with cooperative legitimate users schemes based on frequency
diverse array,” IEEE Transactions on Vehicular Technology, vol. 69,
no. 10, pp. 11 028–11 041, 2020.
jian2023mimo
J. Jian, W.-Q. Wang, B. Huang, L. Zhang, M. A. Imran, and Q. Huang,
“MIMO-FDA communications with frequency offsets index modulation,”
IEEE Transactions on Wireless Communications, 2023.
nusenu2020space
S. Y. Nusenu, S. Huaizong, Y. Pan, and A. Basit, “Space-frequency increment
index modulation approach for fifth generation and beyond wireless
communication systems,” IEEE Transactions on Vehicular Technology,
vol. 69, no. 6, pp. 6286–6298, 2020.
nusenu2018time
S. Y. Nusenu, W.-Q. Wang, and A. Basit, “Time-modulated FD-MIMO array for
integrated radar and communication systems,” IEEE Antennas and
Wireless Propagation Letters, vol. 17, no. 6, pp. 1015–1019, 2018.
wu2023waveform
H. Wu, B. Jin, Z. Xu, X. Zhu, Z. Zhang, and Z. Lian, “Waveform design and
signal processing for integrated radar-communication system based on
frequency diversity array,” Digital Signal Processing, vol. 133, p.
103839, 2023.
zhou2021performance
X. Zhou, L. Tang, Y. Bai, and Y.-C. Liang, “Performance analysis and waveform
optimization of integrated FD-MIMO radar-communication systems,”
IEEE Transactions on Wireless Communications, vol. 20, no. 11, pp.
7490–7502, 2021.
li2023joint
M. Li and W.-Q. Wang, “Joint radar-communication system design based on
FDA-MIMO via frequency index modulation,” IEEE Access, 2023.
jian2023fda
J. Jian, Q. Huang, B. Huang, and W.-Q. Wang, “FDA-MIMO-based integrated
sensing and communication system with frequency offset permutation index
modulation,” arXiv preprint arXiv:2312.14468, 2023.
gui2018general
R. Gui, W.-Q. Wang, and H. Shao, “General receiver design for FDA radar,”
in 2018 IEEE Radar Conference (RadarConf18). IEEE, 2018, pp. 0280–0285.
richards2005fundamentals
M. A. Richards et al., Fundamentals of Radar Signal Processing. McGraw-Hill, New York, 2005, vol. 1.
kay1993fundamentals
S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory. Prentice-Hall, Inc., 1993.
simon2001digital
M. K. Simon and M.-S. Alouini, Digital Communication over Fading Channels. New York: Wiley, 2001.
li2019spatial
Q. Li, M. Wen, E. Basar, H. V. Poor, and F. Chen, “Spatial modulation-aided
cooperative NOMA: Performance analysis and comparative study,” IEEE
Journal of Selected Topics in Signal Processing, vol. 13, no. 3, pp.
715–728, 2019.
biglieri2007mimo
E. Biglieri, R. Calderbank, A. Constantinides, A. Goldsmith, A. Paulraj, and
H. V. Poor, MIMO Wireless Communications. Cambridge University Press, 2007.
http://arxiv.org/abs/2409.02661v1 | published 4 Sep 2024 | astro-ph.GA, astro-ph.SR
Accretion vs. Core-Filament Collision: Implications for Streamer Formation in Per-emb-2
National Astronomical Observatory of Japan, National Institutes of Natural Sciences, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
Department of Astronomical Science, SOKENDAI (The Graduate University for Advanced Studies), 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
Department of Astronomy, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
Astrophysical Data Sciences, CSMES, the American University of Paris, PL111, 2 bis, passage Landrieu, 75007 Paris, France
Université Paris-Saclay, Université Paris Cité, CEA, CNRS, AIM, 91191, Gif-sur-Yvette, France
[email protected]
Recent millimeter and submillimeter observations have unveiled
elongated and asymmetric structures around protostars. These structures, referred to as streamers, often exhibit coherent velocity gradients, seemingly indicating a directed gas flow towards the protostars. However, their origin and role in star formation remain uncertain.
The protostellar core Per-emb-2, located in Barnard 1, has a relatively large streamer with a length of 10^4 au, which is more prominent in emission from carbon-chain molecules. We aim to unveil the formation mechanism of this streamer.
We conducted mapping observations towards Per-emb-2 using the Nobeyama 45-m telescope. We targeted carbon chain molecular lines such as CCS, HC_3N, and HC_5N at 45 GHz.
Using astrodendro, we identified one protostellar and four starless cores, including three new detections, on the Herschel column density map. The starless and protostellar cores are more or less gravitationally bound.
We discovered strong CCS and HC_3N emissions extending from the north to the south, appearing to bridge the gap between the protostellar core and the starless core north of it. This bridge spans 3× 10^4 au with the velocities from 6.5 to 7.0 km s^-1.
The bridge has a velocity gradient opposite to that of the streamer.
Thus, the streamer is unlikely to be connected to this bridge, suggesting that the streamer does not have an accretion origin.
We propose that a collision between a spherical core and the filament has shaped the density structure in this region, consequently triggering star formation within the head-tail-shaped core.
In this core-filament collision (CFC) scenario, the collision appears to have fragmented the filament into two structures.
The streamer is a bow structure, while the bridge is a remnant of the shock-compressed filament.
Thus, we conclude that the Per-emb-2 streamer does not significantly contribute to the mass accumulation towards the protostar.
Fumitaka Nakamura^1,2,3, Quang Nguyen-Luong^4,5, Kousuke Ishihara^1,2, and Aoto Yoshino^1,3
September 9, 2024
§ INTRODUCTION
Stars form from dense cores in molecular clouds.
According to the standard scenario of star formation <cit.>, a nearly spherical dense core undergoes gravitational contraction, forming a nearly axi-symmetric system consisting of a protostar and a rotationally-supported disk.
However, recent observations have uncovered significant non-axisymmetric structures surrounding young protostellar systems, commonly referred to as streamers or spirals <cit.>.
Hereafter, we will use the term "streamer" to refer to such structures for the sake of simplicity. These structures often exhibit coherent velocity patterns, interpreted as infall towards the central protostar <cit.>. Understanding the role of streamers in mass loading towards central stars is crucial for unveiling the complexities of stellar mass accumulation process.
Numerical studies have proposed various mechanisms for streamer formation. For example, in many numerical simulations of cluster formation, intermediate-density structures connecting dense cores and sink particles, which resemble the streamers, are often observed, and these structures partly play a role in mass accretion onto the central protostellar system <cit.>.
On the other hand, <cit.> demonstrated that streamer-like structures could arise from local turbulent compression within magnetized parent cores, without the need for external accretion of ambient gas. In this case, the protostellar system does not have to have structures connected with the inter-core medium,
and streamers can be generated through the accretion process within the natal core.
Therefore, the accretion of ambient gas is likely to be a minor process for supplying the gas, and the stellar mass is roughly determined by the parent core mass.
<cit.> proposed that streamers could form through dense core collisions (DCCs) <cit.>. They estimated the frequencies of DCCs in nearby star-forming molecular clouds and found that a typical dense core experiences at least one collision in its lifetime, particularly in clustered environments.
In this DCC model, the compressed layer created between the two cores evolves into streamers and spirals, creating a rotating envelope around the protostellar system. In this case, the streamer contributes to supplying additional mass to the central protostar, but the mass accretion occurs only temporarily, depending on conditions such as the impact parameter and collision speed.
However, these mechanisms await further elucidation based on observational evidence.
To shed light on the streamer formation, we conducted observations towards Per-emb-2, a core with a streamer towards the central protostar <cit.>.
It is located in the southern part of Barnard 1 in the Perseus molecular cloud at a distance of 300 pc from the Sun <cit.>.
Gas within the streamer appears to exhibit a velocity gradient towards the central protostar [from 7.5 km s^-1 (far side) to 7.0 km s^-1 (protostar)], and the streamer is particularly prominent in emissions from chemically early-phase molecules like CCS and HC_3N as observed with the NOEMA interferometer.
Since the abundances of these molecules steeply decrease with time in high-density environments, <cit.> claim that the gas of the streamer is supplied from outside via accretion.
Very recently, <cit.> also claimed that the ambient gas may be provided from the northern part of the protostar.
We present the detailed information on observations in Sect. <ref>.
Section <ref> presents the analysis of the velocity and spatial distributions of molecular gas obtained from observations.
In Sect. <ref>, we propose our core-filament collision (CFC) model that explains the structure in the observed region.
§ OBSERVATIONS AND DATA
We carried out the on-the-fly (OTF) mapping observations towards the protostellar core Per-emb-2 using the Z45 receiver of the NRO 45m telescope <cit.> in the 2022–2023 season. Some details of the observations are presented in the Appendix.
The mapped area is determined such that the protostar is located at the centre of the 5' × 5' box
(Fig. <ref>a).
The target lines are CCS (J_N=4_3-3_2),
HC_3N (J=5-4), and HC_5N (J=17-16). The parameters are summarised in Table <ref>.
The telescope has a full-width at half-maximum (FWHM) beam size of 38″ at 43 GHz.
The standard chopper wheel method was used to convert the output signal into the antenna temperatures (T_A^*)
and corrected for the atmospheric attenuation.
The main beam efficiency of Z45 is η_ Z45≃ 0.73 at 45 GHz.
The main-beam temperature was calculated as T_ mb = T_A^*/η_ Z45.
We adopted a spheroidal function as a gridding convolution function to calculate the intensity at each grid point of the final cube data with a spatial grid size of 15″ and a frequency resolution of 3.81 kHz, which corresponds to ∼ 0.025 km s^-1 at 45 GHz.
The final effective angular resolution of the map was 49″, corresponding to 15000 au, or 0.07 pc, at the distance of 300 pc
<cit.>.
We downloaded the high resolution H_2 column density and dust temperature maps obtained by the Herschel Gould Belt Survey <cit.>.
The angular resolution of the maps is 18″.
§ RESULTS AND ANALYSIS
§.§ Dense core identification and their dynamical states
In Fig. <ref>a, we present the Herschel column density map of the observed area.
We applied astrodendro <cit.> to this image and identified five cores, four of which are fully covered by our observation box (see Appendix <ref> for more details). The boundaries of the identified cores are outlined in red. Their physical quantities are summarised in Table <ref>.
In the same area, <cit.> identified three cores using getsources: a protostellar core (HGBS 447) and two starless cores (HGBSs 442, 449).
Our core Nos. 1 and 3 correspond to HGBS 442 and 447, respectively. The cores Nos. 2, 4 and 5 are starless and newly detected. We could not identify HGBS 449, which has the lowest column density.
The central protostellar core (No. 3) has a mass of ∼ 10 M_⊙. According to <cit.>, the protostellar core has a systemic velocity of ∼ 7.0 km s^-1 and has a streamer with a length of 10^4 au <cit.>.
From the CCS cube data, we derived the core's CCS velocity dispersion (σ_ CCS)
and then converted it to the intrinsic velocity dispersion (σ_ tot) of gas with a mean molecular weight of 2.33 m_H to assess their dynamical state.
See the Appendix <ref> for the details of the analysis.
The cores Nos. 1, 2, 3, and 5 have α_ vir∼ 1, where the virial parameter α _ vir was computed under the assumption of centrally-condensed (∝ r^-2) sphere. The protostellar core has a significantly-small Bonnor-Ebert ratio with α_ BE∼ 0.18, but α_ vir is close to unity,
where the Bonnor-Ebert ratio is the ratio between the Bonnor-Ebert critical mass of the core and the observed mass.
The core No. 4 does not have significant CCS emission, and we could not measure the velocity dispersion, but the Bonnor-Ebert ratio of the core No. 4 (α_ BE∼ 1.3) suggests it is close to bound.
We also calculated the external pressures exerted on the core surfaces, and found that they play a minor role except for No. 5; here the core's external pressure (3 ρ_ ambσ_ tot, amb^2) was calculated from the ambient density ρ_ amb and the ambient-gas intrinsic velocity dispersion σ_ tot, amb measured in the branch just below the corresponding core. The branch is an intermediate structure identified by astrodendro.
The core No. 5 is dynamically compressed by the ambient pressure and may be difficult to form through spontaneous gravitational fragmentation, in which cores become self-gravitating.
The other cores are close to being in dynamical equilibrium.
For the protostellar core, the column density distribution is not symmetric, but exhibiting a head-tail shape. The centre of gravity determined from this distribution exhibits a significant offset from the position of the protostar itself (see Fig. <ref>).
This suggests that the streamer does not point directly towards the gravity centre of the protostellar core.
If its curved structure is generated by the angular momentum of the infalling gas, a large (∼ 10^3 au) rotating envelope should form around the protostar and connect to the streamers.
However, there is currently no observational evidence for such a large structure <cit.>.
Therefore, it is unlikely that the streamer is driven by gravitational infall from outside the core.
§.§ CCS/HC_3N bridge with a length of 3× 10^4 au (0.2 pc)
In Fig. <ref>, we present the velocity integrated maps of CCS, HC_3N, and HC_5N alongside with the Herschel column density map.
In the CCS integrated intensity map, we recognise two prominent CCS features: an elongated feature stretching from north to south, bridging the gap between the starless cores Nos. 1&2 and the protostellar core No. 3, and a compact emission feature positioned to the west of the protostellar core.
The distribution of HC_3N emission appears to resemble that of CCS, with strong concentrations at the southern periphery of the starless cores Nos. 1&2. Another significant peak is evident immediately above the protostar,
with a fainter peak located to the west, coinciding with a local maximum in CCS intensity.
HC_5N, exhibiting the weakest intensity among the three lines, has a peak at a similar position of the second peak of HC_3N, or the northern portion of the protostellar core.
Figs. <ref> & <ref> display the velocity channel maps of CCS, HC_5N, and HC_3N. The starless cores Nos. 1&2 appear associated with relatively strong compact emission around a velocity of 6.5 km s^-1.
The CCS emission within the velocity range of 6.5–6.7 km s^-1 exhibits a fragmented filamentary structure, extending along a line from the northeast to the southwest which coincides with the large-scale filament seen in the H_2 column density image
(see Fig. <ref>). An elongated structure is discernible in the southeast corner of the CCS map, though the peak is faint in the H_2 map (see Fig. <ref>).
At a velocity of 6.6–7.0 km s^-1, the elongated structure observed in CCS gradually connects with the central protostellar core, forming a bridge. A similar bridging structure is also observed in the corresponding velocity channel for HC_5N.
Overall, these emissions demonstrate a weak velocity gradient spanning from 6.5 km s^-1 to 7.0 km s^-1 towards the protostar.
§.§ Mismatch in velocity structure between bridge and streamer
Based on the observations detailed in the previous section, it becomes evident that the protostellar core No. 3 and the northern starless cores Nos.1&2 exhibit a connection characterized by a bridge structure.
In Fig. <ref>, we present the CCS position-velocity plot tracing the 0.2 pc bridge structure.
The protostellar core No. 3 exhibits a velocity of ∼ 7 km s^-1, indicating a velocity disparity between the two cores, bridged by an intermediate velocity region.
The velocity gradient along the bridge is estimated to be
≈ 0.5 km s^-1/0.2 pc ∼ 2.5 km s^-1 pc^-1.
The existence of the intermediate-velocity gas bridging the two structures with different velocities suggests a dynamical interaction between the cores, a similar feature to the larger-scale cloud-cloud collisions <cit.>. The enhanced CCS emission observed could potentially stem from the colliding region, where CCS formation may be accelerated due to increased density. Both CCS and HC_3N exhibit critical densities around 10^4 cm^-3, implying that while the shocked layer may possess larger density, the column density might not be sufficiently high to be recognised in the 2D Herschel image.
The streamer seemingly follows the ∼ 10^22 cm^-2 contour of the protostellar core and
is spatially overlapped with the distribution of the CCS emission (see Fig. <ref>), but the velocities are very different.
The streamer has much steeper velocity gradient in the opposite direction to that of the bridge (see the right panel of Fig. <ref>).
Therefore, we conclude that the bridge is not a structure connected to the streamer.
§ DISCUSSION
§.§ Accreting streamer or not?
<cit.> claimed that the streamer connects to the northern region through the bridge based on the integrated intensity map. They suggested that the streamer is an accreting flow. However, as mentioned above, the gas in the bridge is likely to be a different component from the streamer since they have different velocities: the bridge has velocities from 6.5 (north) to 7.0 km s^-1 (south) with a velocity gradient of 2.5 km s^-1 pc^-1 (hereafter, we define this direction as positive), whereas the streamer has velocities from 7.5 (north) to 7.0 km s^-1 (south) with a velocity gradient of –0.5 km s^-1/0.05 pc = –10 km s^-1 pc^-1.
There is no evidence showing that the streamer connects to the bridge in the velocity distribution.
In addition, the centre of gravity of the protostellar core has a significant offset from the protostar of ∼ 12(∼ 7000 au). If the streamer originates from the inflow from the ambient gas driven by the gravity, it is likely to point towards the core's gravity centre (see Fig. <ref>).
Or, if the offset is due to rotation, this object must have a large rotating envelope with a size of ∼ 10^3 au. Such a structure is not detected towards the protostar <cit.>. This fact also implies that the streamer is not accretion origin, contrasting with the interpretation of <cit.>.
The northern cores (Nos. 1&2) are close to gravitationally bound, and therefore they may eventually evolve into protostars. Their merging might trigger star formation.
The protostellar core also contains appreciable mass, a part of which may eventually accrete onto the central protostar before the ambient gas falls in.
<cit.> proposed the DCC scenario to reproduce the density structure in Per-emb-2 in numerical simulations. According to this model, when two cores collide with each other, the shock compressed layer is formed in between, connected to the protostar. The gas is inflowing towards the protostar. However,
if this were the case, both the bridge and the streamer should have velocity gradients in the same direction <cit.>.
The velocity/density structure appears more complicated than what the DCC model predicts.
Therefore, the formation mechanism of the streamer is neither ambient gas inflow nor a collision between two round cores.
§.§ A core-filament collision (CFC) model
Given the existence of the large-scale filament and the positive velocity gradient of the bridge towards the protostar, we propose a core-filament collision (CFC) model to explain the configuration of Per-emb-2; Fig. <ref> depicts a schematic representation of this CFC model.
We propose that (1) a spherical core, with a relative speed of ∼ 1–1.5 km s^-1, approached from the west side of the filament and collided with it. (2) The collision initiated star formation on the far side of the eastern extremity of the core, where the initial shock compression occurred, leading to the formation of a protostellar system.
The arc-like streamer may be the bow structure formed by the CFC, and therefore the velocity gradient of the streamer is not due to the gravitational accretion.
If its dynamics is not significantly affected by the stellar gravity, the flow points outwards along the bow (outflow), instead of infalling.
The bow typically has an almost linear velocity structure along it because of its curvature. Such a feature is consistent with the observation.
When the stellar gravity is sufficiently strong to attract the gas towards the star, the flow may be pointing towards the protostar.
It would be difficult to judge which direction (outflow or inflow) is reasonable from the current observational data.
Future numerical experiments would be needed to verify this scenario more quantitatively.
The head-tail morphology observed in the No. 3 core agrees well with the anticipated appearance resulting from this collision.
The CCS arc (streamer) structure <cit.> appears to well trace the edge of the protostellar core ( ∼ 1-2× 10^22 cm^-2, see Fig. <ref>).
Furthermore, the collision led to the division of the filament into two distinct structures (Nos. 1&2 and 5).
The shock-compressed layers in the northern and southern regions attained densities of ∼ 10^4 cm^-3, resulting in relatively strong CCS/HC_3N emissions.
The northern shock-compressed layer became interconnected with the central protostellar system. There is a possibility that it could evolve into an accreting streamer in the future. However, the total accretion mass via the bridge (a future streamer) may be small, since the protostellar core envelope still contains enough material (∼ 10 M_⊙).
Therefore, the accretion along the streamers, if it exists, plays only a minor role in the mass accumulation towards the central protostar.
The total velocity difference is about 1 (= 7.5–6.5)
km s^-1 and the filament width is about 0.1 pc, therefore it takes 0.1 pc / 1 km s^-1 ∼ 10^5 yr for the core to cross the filament.
This is comparable to the typical lifetime of Class 0 protostar and
consistent with the triggered star formation scenario proposed by CFC.
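As a quick arithmetic check (1 pc ≈ 3.086×10^13 km), t_ cross≈ 0.1 pc / 1 km s^-1 ≈ 3.1×10^12 s ≈ 1.0×10^5 yr.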
Furthermore, if the bridge is the remnant of the shock-compressed layer, its density increased from n_ amb∼ 10^3 cm^-3 to ∼ 10^4 cm^-3 by the isothermal shock with a Mach number of ∼ 6 (≈ 1.5 km s^-1 divided by the sound speed).
This condition in the shocked area, or streamer, is quite similar to the condition assumed by <cit.> who started the chemical evolution calculation assuming a steady state with a uniform isothermal gas of 10^4 cm^-3 and T=10 K.
According to Fig. 13 of <cit.>,
the CCS/HC_3N becomes most abundant in ∼ 10^5 yr in such a density range, and declines steeply after 0.5 Myr <cit.>.
These considerations on the timescales well match the CFC scenario.
In our CFC or DCC scenarios, shocks are likely to increase the gas temperature somewhat, although the molecular gas is highly radiative, and the gas temperature is therefore kept low even after shock compression.
In such a situation, the streamers formed by the collision model may have emission of SO or CH_3OH whose sublimation temperatures are about 50 and 80 K, respectively.
These lines could exhibit broader widths as a result of shock interactions. The above consideration should be validated through future chemical evolution calculations using e.g., the Paris-Durham shock code <cit.>.
This work was financially supported by JSPS KAKENHI Grant Numbers JP23H01218 (F.N.).
Part of this work was supported by the NAOJ Visiting Research Grant (Q.N.L.).
We thank the NRO staff for both operating the 45 m and helping us with the data reduction.
We thank the anonymous referee for valuable comments that helped improve the paper.
§ OBSERVATIONS AND TARGET LINES
The OTF observations were carried out in October 2022 (2 nights) and in March 2023 (1 daytime). Z45 is a dual-linear-polarization receiver, and we summed the horizontal and vertical polarization components to improve the signal-to-noise ratios.
The scan interval of the OTF observations was set to 8″, about a fifth of the beam size. The pointing observations were made every hour with SiO maser lines of NML-Tau (KL Tau). The pointing errors were within 3″–5″.
We used the SAM45 digital spectrometer as a backend. SAM45 is a highly flexible FX-type digital spectrometer with a finest frequency resolution of 3.81 kHz. At the 3.81 kHz resolution, the bandwidth of SAM45 was 15.625 MHz.
It took about 30 min to obtain a single map. We summed 6 maps to obtain a final map.
The total observation time was about 6 hours including the overhead of pointing observations. The on-source time was 3 hours.
During the observations, the typical system noise temperatures were about 150–200 K.
The rest frequencies and transitions of the target lines are summarised in Table <ref>. See <cit.> for the rest frequency of CCS.
§ CORE IDENTIFICATION AND PHYSICAL QUANTITIES
§.§ Core identification
We applied a dendrogram analysis <cit.> to the high-resolution Herschel column density map with an angular resolution of 18″.
The analysis has three parameters for the hierarchical structure identification: the threshold value, the minimum step for structure identification, and the minimum pixel number.
We adopt a threshold of 3σ, a minimum step of 1.5σ, and a minimum pixel number of beam size/pixel size, where σ (= 6.81 × 10^20 cm^-2) is the rms noise level.
The analysis identifies three kinds of structures: leaves, branches, and trunks.
The minimum structure, a “leaf”, is defined as a dense core.
To estimate the ambient gas pressure around the cores, we adopted the physical quantities averaged in a branch immediately below the corresponding leaf.
We also derive the core mass by subtracting the background component which is defined as the trunk.
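A minimal sketch of this identification step is given below, assuming the dendrogram analysis is performed with a package such as astrodendro (the package choice, file name, and beam/pixel values are our illustrative assumptions, not specifications from the original):

# Hypothetical sketch of the dendrogram-based core identification.
# Assumes the astrodendro package; file name and beam_pix are illustrative.
import numpy as np
from astropy.io import fits
from astrodendro import Dendrogram

sigma = 6.81e20          # rms noise level of the column density map [cm^-2]
beam_pix = 6             # beam size / pixel size (illustrative value)

data = fits.getdata("herschel_column_density.fits")

d = Dendrogram.compute(
    data,
    min_value=3.0 * sigma,   # detection threshold
    min_delta=1.5 * sigma,   # minimum step between structures
    min_npix=beam_pix,       # minimum number of pixels
)

# Leaves are the dense cores; the trunk gives the background component.
for leaf in d.leaves:
    mask = leaf.get_mask()
    print(f"core: {mask.sum()} px, peak N(H2) = {data[mask].max():.2e} cm^-2")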
§.§ Dynamical states of the cores
We assess the dynamical states of the cores applying the virial theorem.
The virial equation for a spherical core is given by
(1/2) ∂^2 I/∂ t^2 = U + W + S ,
where I, U, W, and S are the moment of inertia, internal kinetic energy, gravitational energy, and surface pressure term, respectively, and the magnetic field terms are neglected <cit.>.
These individual terms are given as
U = 3 M_core σ_tot^2 ,
W = -(3/5) a G M_core^2/R_core ,
S = -4π R_core^3 P_ex ,
where G, M_ core, R_ core, and P_ ex are the gravitational constant, core mass, core radius, and the external pressure, respectively.
a is a dimensionless parameter of order unity which measures the effects of a non-uniform or nonspherical mass distribution
<cit.>. For a uniform sphere and a centrally condensed sphere with ρ∝ r^-2, a = 1 and 5/3, respectively. Here, we adopt a=5/3 since the cores tend to be more or less centrally-condensed.
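As a quick consistency check of a = 5/3 (our own short derivation): for ρ ∝ r^-2 the enclosed mass grows linearly, M(r) = M_core r/R_core, so
W = -∫_0^R_core G M(r)/r dm = -(G M_core^2/R_core^2) ∫_0^R_core dr = -G M_core^2/R_core ,
and comparing with W = -(3/5) a G M_core^2/R_core indeed gives a = 5/3.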
σ_ tot is the 1D intrinsic velocity dispersion of the molecule of mean mass,
σ_tot = √( σ_CCS^2 + k_B T ( 1/(μ m_H) - 1/m_CCS ) ) ,
where k_B is the Boltzmann constant, σ_CCS is the velocity dispersion measured by CCS, μ (=2.33) is the mean molecular weight, m_H is the mass of a hydrogen atom, and m_CCS is the mass of a CCS molecule.
The CCS velocity dispersions towards the cores are computed by the Gaussian fitting of the averaged spectra
(see Fig. <ref>).
It is worth noting that CCS emission may preferentially trace the outer envelopes of cores, given its critical density of 10^4–10^5 cm^-3. In contrast, the core densities calculated from the Herschel image range from 2× 10^4 to 3× 10^6 cm^-3, as shown in Table <ref>. Despite this, in the core analysis presented here, we adopt the CCS line widths as representative of the cores' properties.
The external pressure is calculated as
P_ex = 3 ρ_amb σ_tot,amb^2 ,
where ρ_ amb and σ_ tot, amb are the density and 1D intrinsic velocity dispersion of ambient gas, respectively, and they are derived from the values averaged in the branch immediately below the corresponding leaf.
The ambient density was computed under the assumption that the branch is spherically symmetric.
The core virial parameter is calculated as
α_vir = U/|W| = 3σ_tot^2 R_core/(G M_core) .
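A minimal sketch of the energetics defined by the equations above is given below (the input values are illustrative placeholders, not the measured core properties; constants are in cgs units):

# Sketch of the core energetics U, W, S and the virial parameter
# defined above. All input values are illustrative, not measured ones.
import numpy as np

G    = 6.674e-8    # gravitational constant [cm^3 g^-1 s^-2]
k_B  = 1.381e-16   # Boltzmann constant [erg K^-1]
m_H  = 1.673e-24   # hydrogen-atom mass [g]
pc   = 3.086e18    # parsec [cm]
Msun = 1.989e33    # solar mass [g]

mu    = 2.33           # mean molecular weight
m_CCS = 56.0 * m_H     # CCS molecular mass (~56 u)
T     = 10.0           # gas temperature [K]

def sigma_tot(sigma_ccs_kms):
    """1D intrinsic velocity dispersion of the mean-mass particle [cm/s]."""
    s = sigma_ccs_kms * 1.0e5
    return np.sqrt(s**2 + k_B * T * (1.0 / (mu * m_H) - 1.0 / m_CCS))

def virial_terms(M_msun, R_pc, sigma_ccs_kms, n_amb, sigma_amb_kms, a=5.0/3.0):
    M, R = M_msun * Msun, R_pc * pc
    st = sigma_tot(sigma_ccs_kms)
    U = 3.0 * M * st**2                              # internal kinetic term
    W = -(3.0 / 5.0) * a * G * M**2 / R              # gravity term
    P_ex = 3.0 * (mu * m_H * n_amb) * sigma_tot(sigma_amb_kms)**2
    S = -4.0 * np.pi * R**3 * P_ex                   # surface-pressure term
    return U, W, S, U / abs(W)

U, W, S, alpha = virial_terms(10.0, 0.05, 0.3, 1.0e3, 0.4)
print(f"U+W+S = {U + W + S:.2e} erg, alpha_vir = {alpha:.2f}")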
Table <ref> summarises the velocity dispersions of the cores
and their energy ratios.
Fig. <ref> shows the energy ratios W/U vs. S/U, where W, U, and S are the gravity term, internal kinetic and thermal energy term, and the surface term, respectively.
In this plot, the cores below the black dashed line satisfy the condition S+W+U < 0, for which the cores collapse.
The core's self-gravity is important for the area above the dotted line (|S/U| < |W/U|), whereas the surface pressure is dominant in the area below the dotted line.
Cores 1, 2, and 3 are located in the gravity-dominated area; they are close to dynamical equilibrium and self-gravitating.
For core 5, the external pressure plays a crucial role in the core's dynamical stability, and the core is thus dynamically unstable due to the external pressure. Such a core is difficult to create through the gravitational fragmentation of the filament. The formation of an unstable, non-self-gravitating core (core No. 5) appears to agree with the collision scenario.
§ CHANNEL MAPS
Fig. <ref> shows the channel maps of HC_3N (J=5-4) and HC_5N (J=17-16). HC_3N has stronger emission but several overlapping hyperfine components; adopting the rest frequency of the J=5‒4, F=5‒4 component as the centre frequency, we simply integrated all the emission to construct the channel maps.
The bridge is prominent in HC_3N (J=5-4), and a structure similar to that of CCS was detected.
For HC_5N (J=17-16), the strong emission is seen just at the southern edge of cores No. 1 and 2.
§ CENTRE OF GRAVITY OF THE PROTOSTELLAR CORE W.R.T THE LARGE FILAMENT
The circle in Fig. <ref> indicates the position of the protostar. The black square indicates the
column density weighted position of the protostellar core. If the gas were distributed only on the plane of sky, the position would coincide with the centre of gravity. In other words, the protostar was not formed at the core centre but near the core envelope.
This implies that the core formation did not occur spontaneously.
If the curved shape of the streamer identified by <cit.> is due to the angular momentum of infalling gas, the rotational motion must be significant on the scale of 10^3 au, leading to the formation of a large (∼ 10^3 au) rotating envelope or disk. However, such a large circumstellar structure was not detected in the observations by <cit.>. Thus, at this scale, the rotational motion would still be minor and unlikely to create the elongated, curved structure seen in Fig. <ref>. For example, <cit.>'s core collision simulation showed that the streamer formed by the infalling gas from the second core is smoothly connected to a central, rotating disk-like envelope. Such a structure is a plausible outcome of the infalling gas scenario. Since the rotating disk is small for this protostar, the curved structure, if formed by the infalling gas, must almost point directly to the protostar.
In Fig. <ref> we have also indicated the position of the filament shown in Fig. <ref>.
|
http://arxiv.org/abs/2409.02279v1 | 20240903202551 | Structure of odd-mass Ne, Na, and Mg nuclei | [
"Z. H. Sun",
"T. R. Djärv",
"G. Hagen",
"G. R. Jansen",
"T. Papenbrock"
] | nucl-th | [
"nucl-th"
] |
Physics Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA
Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803, USA
National Center for Computational Sciences, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA
Physics Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA
Physics Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA
Department of Physics and Astronomy, University of Tennessee, Knoxville, Tennessee 37996, USA
National Center for Computational Sciences, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA
Physics Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA
Department of Physics and Astronomy, University of Tennessee, Knoxville, Tennessee 37996, USA
Physics Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA
§ ABSTRACT
The island of inversion is a region of neutron-rich nuclei that are deformed in their ground states. In this region, less is known about the energy levels of odd-mass nuclei, how they evolve with increasing neutron numbers, and how they can be organized into rotational bands. We perform ab initio coupled-cluster calculations of spectra in odd-mass Ne, Na, and Mg nuclei based on an interaction of chiral effective field theory.
Our results confirm some tentative spin and parity assignments, predict the structure of nuclei near the neutron dripline, and inform us about rotational bands in this region of the nuclear table.
Structure of odd-mass Ne, Na, and Mg nuclei
T. Papenbrock
September 9, 2024
===========================================
§ INTRODUCTION
Neutron-rich nuclei at and beyond the “magic” neutron number N=20 are deformed <cit.>; this region is known as the “island of inversion” <cit.>. Since the discovery of this region, considerable knowledge and understanding have been gained about the even-even nuclei within it <cit.>, see, e.g., Ref. <cit.> for a recent review. However, a search in the NuDat <cit.> database of nuclear data reveals that much less is understood about odd-mass neon and magnesium nuclei. We do not even know how to sort the measured levels into rotational bands. This is a particular gap in our understanding because such results would tell us how shell structure evolves as neutrons are added.
Accurately describing these nuclei from the fundamental interaction has been challenging due to the interplay of deformation and continuum effects. The evolution and competition of sd and pf shell physics play a key role in determining the ground state spin of the odd-mass nuclei at N=20. The recent observation of ^28O <cit.>
suggests that this nucleus is not doubly magic and that the island of inversion may extend beyond the two-neutron halo ^29F <cit.> into the oxygen isotopes. The experimental study of ^31Ne <cit.>
also suggested that the p-wave continuum creates a halo in this nucleus, i.e. the unpaired neutron occupies the p_3/2 instead of the f_7/2 orbital (which is lower in energy in the conventional shell model).
In this work, we present ab initio computations of odd-mass nuclei in the island of inversion. As we do not include continuum effects, our focus is on the nuclei and states that are sufficiently well-bound and below the neutron separation energy.
We build on the recent ab initio computations of rotational bands <cit.>, although we follow an approach that is conceptually much simpler. Instead of working with wave functions of good angular momentum, as done in the spherical (no-core or symmetry-adapted) shell models, we start from symmetry-breaking mean-field states and perform angular-momentum projections at the end of the computation <cit.>. The idea for computing odd-mass nuclei consists of putting the odd nucleon into a single-particle orbital with spin/parity K^π, following the Nilsson model <cit.>. Applying angular-momentum projection yields a rotational band with a head that has the nuclear spin/parity I^π=K^π. As the spherical shell model <cit.> guided coupled-cluster computations for doubly-magic nuclei <cit.>, the Nilsson model becomes our guide for deformed nuclei. Single-reference methods such as coupled-cluster theory then start from axially symmetric mean-field states and include the short-range (dynamical) correlations that yield the bulk of the binding energy <cit.>. The angular-momentum projection then includes long-range (static) correlations and yields the states of the rotational band. For even-even nuclei, this approach has propelled ab initio computations of deformed nuclei into the mass A≈ 80 region <cit.>.
This paper is organized as follows.
In Sect. <ref> we present the methods, i.e. the Hamiltonian and details about the angular-momentum projection. Section <ref> presents a brief summary of the Nuclear Tensor Contraction Library (NTCL) that allows us to perform ab initio computations at leadership-class computing facilities. We present our results in Sect. <ref> and a summary in Sect. <ref>.
§ METHOD
Our single-particle basis consists of states from the spherical harmonic oscillator with spacing ħω and maximum energy (N_ max+3/2)ħω. For the neon, sodium, and magnesium nuclei, we use N_ max=8 and ħω=14 MeV. While this is not sufficient to obtain converged ground-state energies, such model spaces are large enough to accurately capture rotational bands <cit.>.
We want to work in the normal-ordered two-body approximation <cit.> to avoid dealing with residual three-nucleon forces in the coupled-cluster method. This is only valid if the resulting normal-ordered two-body Hamiltonian is a scalar under rotations. However, a deformed reference state breaks rotational invariance of the normal-ordered Hamiltonian,
which means that one cannot simply perform a symmetry-breaking Hartree-Fock calculation and then employ the normal-ordered two-body approximation. Instead, we follow Frosini et al. <cit.> and perform a spherical Hartree-Fock calculation, where the employed density matrix is a scalar under rotation, by using a fractional filling of the valence shells. We perform the Hartree-Fock computation using such spherical density matrices and this yields a spherical single-particle basis. The Hamiltonian is then normal-ordered and truncated at the two-body level. This completes the first step.
We then use this Hamiltonian, back transformed to the particle vacuum, in the spherical harmonic oscillator basis and employ a symmetry-breaking density matrix that reflects the expected occupation of Nilsson orbitals. This means that we fill pairs of nucleons in time-reversed single-particle states and place the odd nucleon such that its angular momentum projection J_z (taken to be positive) and parity determine the quantum numbers K^π. We use the laboratory z axis as the symmetry axis of the deformed nucleus. Instead of being guided by the Nilsson diagram, one can also add a mass-quadrupole constraint to the Hamiltonian and map out the Hartree-Fock energy as a function of the quadrupole moment. This is particularly useful for nuclei where neutrons fill the traditional N=20 shell. We used both approaches to obtain reference states of interest. These procedures are, of course, also well known from mean-field computations <cit.>.
The result of these procedures is an axially symmetric reference state, with spin/parity K^π, that can be written as
|Φ⟩≡∏_i=1^A â_i^† |0⟩ .
Here, â_p^† creates a nucleon in the state labelled by p, i.e. |p⟩ = â_p^†|0⟩ and the vacuum is |0⟩. The single-particle states have good angular-momentum projection J_z, good parity, and isospin. Starting with Eq. (<ref>), we follow the convention that subscripts i,j,k,… label single-particle states occupied in the reference state and a,b,c,… label unoccupied states. We use labels p,q,r,… when no distinction is made.
Coupled-cluster theory <cit.> writes the ground-state as
|Ψ⟩ = e^T̂ |Φ⟩ .
Here, the cluster operator
T̂ =T̂_1 + T̂_2 + T̂_3 + …
consists of one-particle–one-hole (1p-1h), two-particle–two-hole (2p-2h) excitations
T̂_1 ≡∑_ia t_i^a â_a^†â_i ,
T̂_2 ≡1/4∑_ijab t_ij^abâ_a^†â_b^†â_j â_i ,
and of excitations with higher rank up to and including A-particle–A-hole (Ap-Ah). For most of the paper, we will limit ourselves to including up to 2p-2h excitations. This is the coupled-cluster singles and doubles (CCSD) approximation, and one has to solve a set of nonlinear equations to determine the amplitudes t_i^a and t_ij^ab for a given Hamiltonian <cit.>.
As the reference state is axially symmetric, |Ψ⟩ breaks rotational invariance; it would take up to Ap-Ah excitations to restore the symmetry within this formalism, and that is computationally not attractive. However, the CCSD approximation yields about 90% of the correlation energy and, in particular, includes short-range two-body correlations <cit.>. This yields the bulk of the nuclear binding energy <cit.>. In contrast, symmetry restoration includes long-range correlations, yields small energy gains for the ground state and reproduces the small spacings within a rotational band <cit.>.
For the angular momentum projection, we build on the work by Qiu et al. <cit.> and follow Refs. <cit.>.
As coupled-cluster theory is bi-variational <cit.>, we use the energy functional
E_J,K≡⟨Ψ|P̂_J,KĤ|Ψ⟩/⟨Ψ|P̂_J,K|Ψ⟩
to compute the energy E_J,K of the state with total angular momentum J and axial projection K. Here, ⟨Ψ|≡⟨Φ| (1 + Λ)e^-T̂ is the left ground-state, and P̂_J,K denotes the operator
P̂_J,K = (2J+1)/2 ∫_0^π dβ sinβ d^J_KK(β) R̂(β)
that projects onto angular momentum J with axial projection K (and z-axis projection K). The rotation operator is
R̂(β) = e^-iβĴ_y, and we employed the Wigner d^J_KK(β) function <cit.>.
It is convenient to rewrite the energy (<ref>) in terms of the Hamiltonian and norm kernels
H(β) ≡⟨Ψ|R̂(β) Ĥ|Ψ⟩ ,
N(β) ≡⟨Ψ|R̂(β) |Ψ⟩ .
as
E_J,K = ∫_0^π dβ sinβ d^J_KK(β) H(β) / ∫_0^π dβ sinβ d^J_KK(β) N(β) .
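A minimal sketch of how E_J,K can be evaluated numerically once the kernels are available on a grid in β is given below (the kernel functions are toy stand-ins, not the computed coupled-cluster kernels; the Wigner d function is implemented from its explicit sum formula):

# Sketch: angular-momentum projection E_{J,K} by quadrature over beta,
# using the explicit Wigner small-d formula. Toy kernels stand in for
# the coupled-cluster Hamiltonian and norm kernels H(beta), N(beta).
import numpy as np
from math import factorial, cos, sin

def wigner_small_d(j, mp, m, beta):
    """d^j_{mp,m}(beta); j, mp, m may be half-integers (j+m etc. integer)."""
    pref = np.sqrt(factorial(round(j + mp)) * factorial(round(j - mp))
                   * factorial(round(j + m)) * factorial(round(j - m)))
    s_min = max(0, round(m - mp))
    s_max = min(round(j + m), round(j - mp))
    total = 0.0
    for s in range(s_min, s_max + 1):
        num = (-1.0) ** (round(mp - m) + s)
        den = (factorial(round(j + m) - s) * factorial(s)
               * factorial(round(mp - m) + s) * factorial(round(j - mp) - s))
        total += (num / den
                  * cos(beta / 2) ** (round(2 * j + m - mp) - 2 * s)
                  * sin(beta / 2) ** (round(mp - m) + 2 * s))
    return pref * total

def project(Hk, Nk, J, K, npts=64):
    """E_{J,K} = int sin(b) d^J_KK(b) H(b) db / int sin(b) d^J_KK(b) N(b) db."""
    x, w = np.polynomial.legendre.leggauss(npts)   # nodes on [-1, 1]
    beta = 0.5 * np.pi * (x + 1.0)                 # map to [0, pi]
    w = 0.5 * np.pi * w
    d = np.array([wigner_small_d(J, K, K, b) for b in beta])
    num = np.sum(w * np.sin(beta) * d * Hk(beta))
    den = np.sum(w * np.sin(beta) * d * Nk(beta))
    return num / den

# Toy kernels mimicking a deformed K = 3/2 reference state.
Hk = lambda b: -50.0 * np.exp(-2.0 * (1.0 - np.cos(b)))
Nk = lambda b: np.exp(-2.0 * (1.0 - np.cos(b)))
for J in (1.5, 2.5, 3.5):
    print(f"J = {J}: E = {project(Hk, Nk, J, 1.5):.3f}")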
To evaluate the kernels (<ref>) one inserts the identity R̂(β)R̂^-1(β) and uses Thouless theorem <cit.> to rewrite
⟨Φ|R̂(β) = ⟨Φ|R̂(β)|Φ⟩⟨Φ| e^-V̂(β) .
Here V̂(β) is a 1p-1h de-excitation operator. After inserting the identity e^-V̂e^V̂ and performing the associated basis transformation, one needs to compute e^V̂e^T̂|Φ⟩. It is currently not known how to do that efficiently without approximations. The disentangled method proposed by Qiu et al. <cit.> can be used for this purpose, but it does not preserve symmetries in the
energy and norm kernels <cit.>. Instead, we follow Ref. <cit.> and expand
e^λV̂ e^T̂ = e^{W_0(λ) + Ŵ_1(λ) + Ŵ_2(λ) + …} .
Here, the right-hand side consists of the function W_0 and the np-nh excitation operators Ŵ_n, with n=1,…,A. To keep matters computationally tractable, we approximate the right-hand side by only keeping W_0, the 1p-1h operators Ŵ_1, and the 2p-2h operators Ŵ_2 while discarding higher-ranking particle-hole excitation operators. Taking the derivative of Eq. (<ref>) with respect to λ yields a differential equation which we solve by integrating from λ=0 (where W_0=0 and Ŵ_i=T̂_i) to λ=1.
We note that the application of Thouless theorem <cit.> is limited to non-vanishing vacuum kernels. Thus, it is necessary that the states |Φ⟩ and R̂(β)|Φ⟩ have a finite overlap ⟨Φ|R̂(β)|Φ⟩≠ 0. In odd-mass nuclei, this overlap vanishes at β=π where the unpaired orbital becomes its time-reversed partner. The expression V(β) in Eq. (<ref>) then becomes singular. In the vicinity of the point β=π (and whenever overlaps are becoming exceedingly small in magnitude), we use a singular value decomposition <cit.> when computing a matrix inverse that enters the construction of V(β) <cit.> and avoid the calculation at β=π.
Figure <ref> shows the norm kernel 𝒩(β) and the Hamiltonian kernel ℋ(β) of the K^π=3/2^- reference state for the nucleus ^9Be using the NNLO_opt interaction <cit.>. The kernels fulfill N(2π-β) = - N(β) and H(2π-β) = - H(β), and we only show the nontrivial part. The energy of the unprojected state is H(0), and we have N(0)=1.
§ NUCLEAR TENSOR CONTRACTION LIBRARY
We calculated the results presented in this work using the NTCL <cit.>. This domain-specific, architecture-independent Fortran library runs efficiently at scale on Frontier <cit.>, the DOE flagship supercomputer located at the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory. Frontier is an HPE Cray EX supercomputer with a theoretical peak double-precision performance of approximately two exaflops, consisting of 9408 AMD compute nodes, each with one 64-core AMD “Optimized 3rd Gen EPYC” CPU, 512 GB of DDR4 memory, four AMD MI250X GPUs, and 512 GB of high-bandwidth memory (HBM2E) directly on the GPUs. NTCL makes the hardware on Frontier transparent to the user by presenting a hardware-independent application programming interface to the user, where we have implemented the core computationally expensive operations in hardware-dependent plugins selected when compiling the library.
For the calculations presented in this work, NTCL offloads matrix multiplications to the GPUs by intercepting calls to the *gemm matrix-multiplication subroutines from the BLAS <cit.> library and replacing them with calls to the rocBLAS library appropriate for Frontier. Since the performance of the projected coupled-cluster code is mainly dependent on efficient tensor contractions that are written as a combination of tensor permutations and matrix multiplication, using NTCL in this way allows us to use Frontier at scale with minimal changes to the projected coupled-cluster code.
To replace BLAS *gemm calls with rocBLAS *gemm calls, NTCL has an interface, ntcl_gemm, that is designed to have exactly the same signature as BLAS *gemm. The simplest way to use it is to insert use :: algorithms_api, only : dgemm=>ntcl_gemm at the top of each Fortran module. NTCL has internal mechanisms to select which matrix-multiplication routines to use and to transfer data from RAM to GPU memory.
NTCL utilizes the factory pattern to decide what routines to use for a given system. Specifically for matrix multiplication, we have written an abstract class matrix_multiplication that gives a simple-to-use but general interface for matrix multiplication. For each hardware architecture supported by NTCL, we write a separate class that extends matrix_multiplication as a plugin for that hardware. The correct hardware implementation is then selected by calling a factory class that knows which plugins are available for the system at hand. For example, we have implemented a specific extension of the matrix_multiplication abstract class that uses rocBLAS and is activated for systems with AMD GPUs. This rocBLAS plugin has been tested and optimized for Frontier.
The matrix data sent to the NTCL-gemm interface is stored in RAM and needs to be copied to the GPU before the rocBLAS *gemm routines can be called. NTCL has an internal memory management system that can seamlessly handle heterogeneous memory
architectures, i.e., systems with more than one memory pool, most commonly RAM and GPU memory. This is done by once again utilizing the factory pattern; we have an abstract class representing a general memory pool, we have extensions for each type of memory pool, and a factory class is used to select a specific memory pool. These memory pool classes can then be used to easily transfer data from one type of memory to another, if necessary.
In addition to the NTCL-gemm interface, NTCL supports general tensor contractions. In this case, NTCL provides a tensor class that represents a general dense tensor that can either be stored in RAM or GPU memory, which allows the program to keep all the tensors in GPU memory throughout the calculation and only copy them back to RAM when the calculations are done. While this functionality is easy to use, significant work would still be required to translate the existing code. The gemm interface provides a stepping stone that allows you to quickly use GPUs for matrix multiplications, but using the tensor classes to store data in GPU memory is crucial for optimal performance.
In Fig. <ref>, we have plotted the execution time of an n× n matrix-multiplication, performed using the NTCL-gemm interface (green stars), the NTCL tensor class (purple plusses), OpenBLAS (red circles), and rocBLAS (yellow triangles). While the NTCL-gemm interface is significantly faster than running the pure CPU OpenBLAS dgemm, both the NTCL tensor class and rocBLAS versions are even faster still. This is because the matrix data is already in GPU memory before the matrix multiplication occurs in the latter two cases. The execution time of the NTCL-gemm interface is dominated by data transfer from RAM to GPU memory and back again. However, this benchmark illustrates that even when data is transferred back and forth between GPU memory and RAM, there is still a significant gain over the regular version.
§ RESULTS
§.§ Benchmarks and comparisons
We start with benchmark computations of rotational bands in ^9Be. To compare with previous no-core shell model computations <cit.>, we use the nucleon-nucleon potential NNLO_ opt <cit.>. To quantify success, we take uncertainty estimates from the computations of even neon and magnesium isotopes in Ref. <cit.>, where the excitation energies of the 2^+ and 4^+ states were assigned uncertainties of 20% and 15%, respectively. Thus, computed moments of inertia in even-even nuclei have an uncertainty of about 20%, and we will use that when judging agreement with data in what follows without showing uncertainty bands. Regarding energy differences between band heads, we will assume that theoretical uncertainties are about 1 MeV. This estimate comes from the energy difference for band-head references computed with N_ max=8 and 12 in Ref. <cit.>.
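For orientation (a standard rigid-rotor expression, added here for reference): within a band built on a K ≠ 1/2 head, the level energies follow approximately
E_J ≈ E_K + (ħ^2/2Θ) [J(J+1) − K(K+1)] ,
so the quoted 20% uncertainties in the computed 2^+ and 4^+ excitation energies translate directly into a comparable uncertainty in the extracted moment of inertia Θ (for K = 1/2 bands an additional decoupling term appears).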
Figure <ref> shows the angular-momentum-projected results from coupled-cluster theory (computed in model spaces with N_ max =6 and 8), compared to the experimental value and to computations using the no-core shell model.
We see that the K^π=3/2^- ground-state band is accurate when compared to data and the no-core shell model benchmark. Here, the reference state is computed by starting with the odd neutron in the J_z=3/2 state of the p_3/2 shell. For the K=1/2^- band, the odd neutron is in the J_z=1/2 state of the p_1/2 shell, and for the K=1/2^+ band, the odd neutron is in the J_z=1/2 state of the d_5/2 shell. The head of the K=1/2^- band is at an accurate excitation energy when compared to data, but the band's moment of inertia is too large. The no-core shell model results are more accurate. It could be that we are in a multi-reference situation for the K=1/2^- band because one could also consider making a hole in the J_z=1/2 state of the occupied p_3/2 shell.
The current implementation of the projected coupled-cluster method neglects such a potential mixing of different configurations.
Our calculations accurately reproduce the K=1/2^+ band in ^9Be. Overall the coupled-cluster computations are in good-to-fair agreement with benchmarks from the no-core shell model and with data.
In what follows, we use the interaction 1.8/2.0(EM) of Ref. <cit.> that is accurate for binding energies and spectra <cit.>. This interaction consists of nucleon-nucleon and three-nucleon forces. The two-nucleon force is from Ref. <cit.>, evolved with the similarity renormalization group <cit.> to a cutoff of 1.8 fm^-1. The three-nucleon force consists of the leading contributions from chiral effective field theory <cit.>. Its cutoff is 2.0 fm^-1 and the low-energy constants c_D and c_E were adjusted to reproduce properties of nuclei with mass numbers A=3,4.
We start with the nucleus ^21Ne, for which we can compare and contrast our approach to that of Lin et al. <cit.>. They used the projected generator coordinate method (PGCM) in an ab initio setting <cit.>. Their reference states resulted from Hartree-Fock-Bogoliubov computations of the neighboring even-even nuclei ^20,22Ne. The ^21Ne nucleus was then computed as a quasi-particle excitation of these references. Allowing for different quadrupole deformations of the reference state and projecting onto good particle numbers and angular momentum yielded rotational bands. This approach only captures long-range correlations and does not reproduce binding energies. They found a binding energy of about 119 MeV, compared to our 159 MeV and the experimental 161 MeV. The binding energy and spectrum for ^21Ne remained practically unchanged if ^20Ne or ^22Ne was used as a reference. In Fig. <ref>, we compare their results from the ^20Ne reference to ours and to data. Our approach is more accurate when comparing the ground-state band to data and also regarding the excitation energy of the K^π=1/2^+ band head. However, the PGCM computation yields a more accurate moment of inertia for the 1/2^+ band. We note that the calculations also reveal a J^π=7/2^+ state in the K^π=1/2^+ band; the energy is 6.6 MeV for CCSD and 10.1 MeV for the PGCM; however, the data tables <cit.> did not place a 7/2^+ level into this band.
§.§ Results for Na nuclei
We computed all sodium isotopes by placing the unpaired proton at [d_5/2, j_z=3/2] but allowed the neutrons to select different configurations through the self-consistent mean field.
In a broad range of quadrupole deformations (β_2), the Nilsson diagram suggests a single prolate configuration for protons with Z=11, see Fig. <ref>.
The f_7/2 intruder state can become dominant (by forming a K^π=1/2^- state) only at larger prolate deformations.
We assume that all neutrons are paired in time-reversed orbitals and that the calculated odd-mass sodium isotopes have a K^π=3/2^+ band. Although neutrons do not contribute to the spin and parity of the band head, different configurations of neutrons can yield K^π=3/2^+ bands with different deformations. In this paper, we focus on the lowest bands with a given spin and parity and neglect any mixing of different deformations.
Figure <ref> shows the calculation of the ground state bands of sodium isotopes. For neutron numbers N=10 and 12 the Nilsson model suggests that two and four neutrons occupy the d_5/2 for ^21Na and ^23Na, respectively. Our calculations reproduce the K^π = 3/2^+ bands and agree well with data. We also find that the odd-mass sodium isotopes beyond N=20 exhibit a similar rotational structure, consistent with the even-even neighbors <cit.>. We predict that ^37Na should have a similar band structure as ^35Na.
In ^25,27,29Na our computations failed to reproduce the near degeneracy of the 5/2^+ and 3/2^+ states, and instead produced a K^π = 3/2^+ rotational band similar to those of the isotopes shown in Fig. <ref>. It seems that ^25,27,29Na exhibit a more complicated structure. First, the data show a 5/2^+ state that lies within less than a hundred keV of the 3/2^+ state, implying that these nuclei may not be perfect rotors. Second, the low-lying 1/2^+ states in ^25,27Na suggest either a possible K^π = 1/2^+ band or that the neutrons are not paired and may yield a non-zero contribution to the spin.
Finally, the complicated structure of ^25,27,29Na could also be a result of quasi-degenerate neutron configurations due to the level crossing of N=14 and 16, see Figure <ref>. This possibility is also reflected in the Hartree-Fock calculations of these nuclei. Let us take ^25Na as an example. The normal filling for neutrons is a completely filled d_5/2 shell, with energies E_3/2^+=-138.5 MeV and E_5/2^+=-137.9 MeV from filling the odd proton into different orbitals. At a larger deformation, the neutron s_1/2 becomes energetically favorable and a robust local minimum is formed in the potential energy surface, yielding energies E_3/2^+=-139.7 MeV and E_5/2^+=-139.0 MeV. Thus, configuration mixing seems possible. We also investigated the possible excitation of the proton configuration to obtain a K^π = 1/2^+ band. The resulting energies are E_1/2^+=-138.5 MeV and E_3/2^+=-137.7 MeV. The proximity of the energies of three different bands suggests that a multi-configuration approach is called for. Doubting the accuracy of our single-reference approach, we do not show results for ^25,27,29Na.
We note that other theoretical approaches are also challenged to accurately compute ^25,27,29Na. The shell-model calculations by <cit.> compare results from several contemporary interactions with data and with the highly optimized USD <cit.> interaction. Only the latter is able to accurately describe ^25,27,29Na. Thus, there might also be a deficiency in the interaction we employ.
§.§ Results for Ne and Mg nuclei
Our calculations of the odd-mass neon and magnesium isotopes follow a similar procedure to that of sodium, but the spin and parity of the nuclei are now determined by the configurations of the unpaired neutron. The Nilsson diagram suggests that the protons have a simplified single configuration for Z=10 and Z=12 (see Figure <ref>), i.e. one fills two and four protons in the d_5/2 for neon and magnesium, respectively. Multiple configurations are possible for the neutrons because of intruders from the pf shell. In the even isotopes, the intruder states start to be involved from N=14, where a second 0^+ state gets closer to the ground state and becomes dominant at N=20, resulting in the shape coexistence of ^30Ne and ^32Mg <cit.>. The intruder states may persist until N=22 and disappear thereafter, giving nuclei beyond N=20 a good single-reference character. One then expects rotational bands with similar moments of inertia.
Figures <ref> and <ref> show our results for the odd-mass neon and magnesium nuclei, respectively. We obtain similar band structures for nuclei with the same number of neutrons. This is expected since the spin and parities of the band heads are determined by the odd neutron. Our calculation starts at N=9 where the last neutron fills the [d_5/2,j_z=1/2]. This yields the K^π=1/2^+ bands in ^19Ne and ^21Mg, and our results agree with the data. The excited K^π=1/2^- state is obtained by exciting a neutron from p_1/2 to d_5/2; this could also be seen as going beyond a level crossing in the Nilsson diagram by increasing the quadruple deformation. We also find a K^π=5/2^+ band in ^21Mg by exciting the last neutron to the J_z=5/2 of the d_5/2 shell, corresponding to an oblate deformed configuration.
At neutron number N=11, one can place the odd neutron in either [d_5/2,j_z=3/2] or [d_5/2,j_z=1/2] to get the K^π=3/2^+ and K^π=1/2^+ band heads, respectively. Our calculations accurately reproduce the data for ^21Ne and ^23Mg. A negative parity band with K^π=1/2^- can be obtained by filling the last neutron in the [f_7/2,j_z=1/2]. However, the resulting band is too high in energy for ^23Mg (and not shown in Fig. <ref> for ^21Ne). We see a K^π=5/2^+ band close to the K^π=1/2^+ band in ^21Ne.
For the N=13 nuclei, we start by filling the odd neutron
in [d_5/2,j_z=5/2]. This yields the K^π=5/2^+ ground state bands for ^23Ne and ^25Mg and is in agreement with the data.
According to the Nilsson diagram, the [s_1/2,j_z=1/2] state could be filled at a larger deformation, leading to the K^π=1/2^+ bands. Our calculations reproduce the moments of inertia for the K^π=5/2^+ and K^π=1/2^+ bands in both nuclei. However, our calculated K^π=5/2^+ band for ^25Mg is too high in energy.
For even larger deformations, the Nilsson diagram indicates that the intruder state [f_7/2,j_z=1/2] is favored in energy, and the angular-momentum projection yields the corresponding negative-parity band. Our calculation is accurate for the moments of inertia, but the band head again is too high in energy for ^25Mg. We note here that the National Nuclear Data Center <cit.> groups states into bands for ^25Mg. However, the 1/2^- state is assigned to a different band than we suggest in Fig. <ref>.
As the neutron number is further increased, the neutron separation energies decrease and one slowly approaches the neutron dripline. Separation energies are shown as purple dashed horizontal lines in Figs. <ref> and <ref> for neutron numbers N=15, 17, and 19. As our calculations do not use a Gamow basis, the calculations of these nuclei become less reliable. Nevertheless, there is a good-to-fair agreement with data on low-lying states in these nuclei, and one can easily imagine that continuum effects will lower the energy of band heads that are closer to the neutron separation energy.
At neutron number N=15 the unpaired neutron fills [s_1/2,j_z=1/2], yielding a K^π=1/2^+ band in both ^25Ne and ^27Mg. The K^π=5/2^+ band arises from the level crossing of s_1/2 and [d_5/2,j_z=± 5/2]. The Nilsson model also suggests a level crossing of [f_7/2,j_z=± 1/2] and [d_5/2,j_z=± 5/2], which results in a negative-parity band. However, this is not true at the drip line because the nearby continuum favors the p-wave (with its small centrifugal barrier) over the f-wave. Our calculations do not include continuum effects, and the states near and above the threshold are not accurate. The calculated negative-parity band in ^25Ne is above the threshold. In ^27Mg we find a negative-parity band below the threshold, and it agrees with data.
The N=17 nuclei mainly have two configurations available, namely the normal K^π=1/2^+ band, obtained by placing the odd neutron in [d_3/2,j_z=1/2], and the intruder f orbital, yielding a K^π=1/2^- band. The data suggest that the 3/2^- state is sufficiently bound, and our calculations without continuum are reliable here. We reproduced the K^π=1/2^- band in both ^27Ne and ^29Mg.
For N=19, the Nilsson diagram suggests a series of level crossings between d_3/2 and f_7/2. Our calculation reproduces the correct band heads K^π=3/2^+ and K^π=1/2^- in both ^29Ne and ^31Mg. We also find a K^π=1/2^+ band in ^31Mg, corresponding to the larger deformed ^30Mg plus one neutron at [d_3/2,j_z=1/2].
While neglecting continuum effects has prevented us from making quantitatively accurate predictions for neon and magnesium nuclei beyond N=20, studying rotational bands below the threshold is still interesting. Figure <ref> shows our calculated bands for ^33Mg and ^35Mg, compared with the available data.
We obtained the two lowest bands for both nuclei. For ^33Mg, pairs of neutrons occupy the [f_7/2,j_z=±1/2] and [d_3/2,j_z=±1/2] to form a deformed N=20 shell. The unpaired neutron can be in [f_7/2,j_z=±3/2] or [d_3/2,j_z=±3/2], yielding the K^π=3/2^- and K^π=3/2^+ bands, respectively.
Our calculations show that both bands exhibit a rigid-rotor pattern. However, our calculations failed to reproduce the correct spin of the ground-state band, presumably because they lack continuum effects.
§ SUMMARY
We presented ab initio computations of odd-mass nuclei in the island of inversion using projected coupled-cluster theory. We computed the band heads of interest by placing the unpaired nucleon in different single-particle orbits, guided by the Nilsson diagram. Our calculation of ^9Be meets benchmarks from the no-core shell model and data. We investigated the low-lying spectra of stable odd-mass Ne, Na, and Mg nuclei and found overall good agreement with data where band heads can be approximated well by single-reference states. This also allowed us to put states into rotational bands and to predict a few spin/parity assignments.
We thank Mark Caprio and Jiangming Yao for sharing their results with us for benchmarking.
This work was supported by the U.S. Department of Energy, Office of
Science, Office of Nuclear Physics, under Award No. DE-FG02-96ER40963, by SciDAC-5 (NUCLEI collaboration), and by the Quantum Science Center, a National Quantum Information Science Research Center of the U.S. Department of Energy. Computer time was provided by the Innovative and
Novel Computational Impact on Theory and Experiment (INCITE)
programme. This research used resources of the Oak Ridge Leadership
Computing Facility located at Oak Ridge National Laboratory, which is
supported by the Office of Science of the Department of Energy under
contract No. DE-AC05-00OR22725.
[thibault1975] C. Thibault, R. Klapisch, C. Rigaud, A. M. Poskanzer, R. Prieels, L. Lessard, and W. Reisdorf, Direct measurement of the masses of ^11Li and ^26-32Na with an on-line mass spectrometer, Phys. Rev. C 12, 644–657 (1975).
[campi1975] X. Campi, H. Flocard, A. K. Kerman, and S. Koonin, Shape transition in the neutron rich sodium isotopes, Nucl. Phys. A 251, 193–205 (1975).
[detraz1979] C. Détraz, D. Guillemaud, G. Huber, R. Klapisch, M. Langevin, F. Naulin, C. Thibault, L. C. Carraz, and F. Touchard, Beta decay of ^27-32Na and their descendants, Phys. Rev. C 19, 164–176 (1979).
[poves1987] A. Poves and J. Retamosa, The onset of deformation at the N = 20 neutron shell closure far from stability, Phys. Lett. B 184, 311–315 (1987).
[warburton1990] E. K. Warburton, J. A. Becker, and B. A. Brown, Mass systematics for A = 29–44 nuclei: The deformed A ∼ 32 region, Phys. Rev. C 41, 1147–1166 (1990).
[baumann2007] T. Baumann et al., Discovery of ^40Mg and ^42Al suggests neutron drip-line slant towards heavier isotopes, Nature 449, 1022–1024 (2007).
[schwerdtfeger2009] W. Schwerdtfeger et al., Shape coexistence near neutron number N = 20: First identification of the E0 decay from the deformed first excited J^π = 0^+ state in ^30Mg, Phys. Rev. Lett. 103, 012501 (2009).
[doornenbal2009] P. Doornenbal et al., Spectroscopy of ^32Ne and the “island of inversion”, Phys. Rev. Lett. 103, 032501 (2009).
[doornenbal2016] P. Doornenbal et al., Mapping the deformation in the “island of inversion”: Inelastic scattering of ^30Ne and ^36Mg at intermediate energies, Phys. Rev. C 93, 044306 (2016).
[wimmer2010] K. Wimmer et al., Discovery of the shape coexisting 0^+ state in ^32Mg by a two neutron transfer reaction, Phys. Rev. Lett. 105, 252501 (2010).
[ahn2019] D. S. Ahn et al., Location of the neutron dripline at fluorine and neon, Phys. Rev. Lett. 123, 212501 (2019).
[crawford2019] H. L. Crawford et al., First spectroscopy of the near drip-line nucleus ^40Mg, Phys. Rev. Lett. 122, 052501 (2019).
[tsunoda2020] N. Tsunoda, T. Otsuka, K. Takayanagi, N. Shimizu, T. Suzuki, Y. Utsuno, S. Yoshida, and H. Ueno, The impact of nuclear shape on the emergence of the neutron dripline, Nature 587, 66–71 (2020).
[ahn2022] D. S. Ahn et al., Discovery of ^39Na, Phys. Rev. Lett. 129, 212502 (2022).
[gray2023] T. J. Gray et al., Microsecond isomer at the N = 20 island of shape inversion observed at FRIB, Phys. Rev. Lett. 130, 242501 (2023).
[madurga2023] M. Madurga et al., New isomeric transition in ^36Mg: Bridging the N = 20 and N = 28 islands of inversion, arXiv:2301.12002 (2023).
[otsuka2020] T. Otsuka, A. Gade, O. Sorlin, T. Suzuki, and Y. Utsuno, Evolution of shell structure in exotic nuclei, Rev. Mod. Phys. 92, 015002 (2020).
[nndc] National Nuclear Data Center, NuDat database, https://www.nndc.bnl.gov/nudat3/, accessed 2023-11-29.
[kondo2023] Y. Kondo et al., First observation of ^28O, Nature 620, 965–970 (2023).
[revel2020] A. Revel et al. (SAMURAI21 collaboration), Extending the southern shore of the island of inversion to ^28F, Phys. Rev. Lett. 124, 152502 (2020).
[Gaudefroy2012] L. Gaudefroy, W. Mittig, N. A. Orr, S. Varet, M. Chartier, … (entry truncated in the source).
P. Roussel-Chomaz, author
J. P. Ebran, author
B. Fernández-Domínguez, author G. Frémont, author
P. Gangnant, author
A. Gillibert, author
S. Grévy, author J. F. Libin, author V. A. Maslov, author S. Paschalis, author B. Pietras,
author Yu.-E. Penionzhkevich,
author C. Spitaels, and author A. C. C. Villari, title title Direct mass measurements of
^19B, ^22C, ^29F, ^31Ne,
^34Na and other light exotic nuclei, 10.1103/PhysRevLett.109.202503 journal journal
Phys. Rev. Lett. volume 109, pages
202503 (year 2012)NoStop
[Bagchi et al.(2020)Bagchi,
Kanungo, Tanaka, Geissel,
Doornenbal, Horiuchi, Hagen,
Suzuki, Tsunoda, Ahn,
Baba, Behr, Browne,
Chen, Cortés, Estradé,
Fukuda, Holl, Itahashi,
Iwasa, Jansen, Jiang,
Kaur, Macchiavelli, Matsumoto, Momiyama, Murray, Nakamura, Novario, Ong, Otsuka, Papenbrock, Paschalis,
Prochazka, Scheidenberger, Schrock, Shimizu, Steppenbeck,
Sakurai, Suzuki, Suzuki,
Takechi, Takeda, Takeuchi,
Taniuchi, Wimmer, and Yoshida]Bagchi2020
author author S. Bagchi, author R. Kanungo,
author Y. K. Tanaka, author H. Geissel, author
P. Doornenbal, author
W. Horiuchi, author
G. Hagen, author T. Suzuki, author N. Tsunoda, author D. S. Ahn, author H. Baba, author K. Behr, author F. Browne, author
S. Chen, author M. L. Cortés, author A. Estradé, author N. Fukuda, author M. Holl, author K. Itahashi, author N. Iwasa,
author G. R. Jansen, author W. G. Jiang, author
S. Kaur, author A. O. Macchiavelli, author S. Y. Matsumoto, author S. Momiyama, author I. Murray, author T. Nakamura, author S. J. Novario, author H. J. Ong, author T. Otsuka, author T. Papenbrock,
author S. Paschalis, author A. Prochazka, author
C. Scheidenberger, author
P. Schrock, author Y. Shimizu, author D. Steppenbeck, author H. Sakurai, author D. Suzuki, author H. Suzuki, author M. Takechi, author H. Takeda, author S. Takeuchi, author R. Taniuchi, author K. Wimmer, and author K. Yoshida, title title Two-neutron halo is unveiled in ^29F, 10.1103/PhysRevLett.124.222504 journal journal Phys. Rev. Lett. volume 124, pages 222504 (year 2020)NoStop
[Nakamura et al.(2009)Nakamura, Kobayashi, Kondo, Satou, Aoi, Baba, Deguchi,
Fukuda, Gibelin, Inabe,
Ishihara, Kameda, Kawada,
Kubo, Kusaka, Mengoni,
Motobayashi, Ohnishi, Ohtake,
Orr, Otsu, Otsuka,
Saito, Sakurai, Shimoura,
Sumikama, Takeda, Takeshita,
Takechi, Takeuchi, Tanaka,
Tanaka, Tanaka, Togano,
Utsuno, Yoneda, Yoshida, and Yoshida]nakamura2009
author author T. Nakamura, author N. Kobayashi,
author Y. Kondo, author Y. Satou, author
N. Aoi, author H. Baba, author S. Deguchi, author N. Fukuda,
author J. Gibelin, author N. Inabe, author
M. Ishihara, author
D. Kameda, author Y. Kawada, author T. Kubo, author K. Kusaka, author A. Mengoni,
author T. Motobayashi, author T. Ohnishi, author
M. Ohtake, author N. A. Orr, author H. Otsu, author T. Otsuka, author A. Saito,
author H. Sakurai, author S. Shimoura, author
T. Sumikama, author
H. Takeda, author E. Takeshita, author M. Takechi, author S. Takeuchi, author K. Tanaka, author K. N. Tanaka, author N. Tanaka, author Y. Togano,
author Y. Utsuno, author K. Yoneda, author
A. Yoshida, and author
K. Yoshida, title title Halo structure of the island of inversion nucleus
^31Ne, 10.1103/PhysRevLett.103.262501
journal journal Phys. Rev. Lett. volume 103, pages 262501 (year
2009)NoStop
[Caprio et al.(2013)Caprio,
Maris, and Vary]caprio2013
author author M. A. Caprio, author P. Maris, and author J. P. Vary, title title Emergence of rotational
bands in ab initio no-core configuration interaction calculations of light
nuclei, 10.1016/j.physletb.2012.12.064 journal journal Phys. Lett. B volume
719, pages 179 – 184 (year 2013)NoStop
[Wiringa et al.(2013)Wiringa, Pastore, Pieper, and Miller]wiringa2013
author author R. B. Wiringa, author S. Pastore,
author Steven C. Pieper, and author Gerald A. Miller, title title Charge-symmetry breaking
forces and isospin mixing in ^8be, 10.1103/PhysRevC.88.044333 journal journal Phys.
Rev. C volume 88, pages 044333
(year 2013)NoStop
[Dytrych et al.(2013)Dytrych, Launey, Draayer, Maris, Vary, Saule, Catalyurek, Sosonkina, Langr, and Caprio]dytrych2013
author author T. Dytrych, author K. D. Launey,
author J. P. Draayer, author P. Maris, author
J. P. Vary, author
E. Saule, author U. Catalyurek, author M. Sosonkina, author D. Langr, and author M. A. Caprio, title title
Collective modes in light nuclei from first principles, 10.1103/PhysRevLett.111.252501 journal journal
Phys. Rev. Lett. volume 111, pages
252501 (year 2013)NoStop
[Caprio et al.(2015)Caprio,
Maris, Vary, and Smith]caprio2015
author author M. A. Caprio, author P. Maris,
author J. P. Vary, and author R. Smith, title title Collective rotation from ab initio
theory, 10.1142/S0218301315410025 journal
journal Int. J. Mod. Phys. E volume
24, pages 1541002 (year 2015)NoStop
[Maris et al.(2015)Maris,
Caprio, and Vary]maris2015
author author P. Maris, author M. A. Caprio, and author J. P. Vary, title title Emergence of rotational
bands in ab initio no-core configuration interaction calculations of
the be isotopes, 10.1103/PhysRevC.91.014310 journal journal Phys. Rev. C volume
91, pages 014310 (year 2015)NoStop
[Dytrych et al.(2020)Dytrych, Launey, Draayer, Rowe, Wood, Rosensteel, Bahri, Langr, and Baker]dytrych2020
author author T. Dytrych, author K. D. Launey,
author J. P. Draayer, author D. J. Rowe, author
J. L. Wood, author
G. Rosensteel, author
C. Bahri, author D. Langr, and author R. B. Baker, title title
Physics of nuclei: Key role of an emergent symmetry, 10.1103/PhysRevLett.124.042501 journal journal
Phys. Rev. Lett. volume 124, pages
042501 (year 2020)NoStop
[Miyagi et al.(2020)Miyagi,
Stroberg, Holt, and Shimizu]miyagi2020
author author T. Miyagi, author S. R. Stroberg, author J. D. Holt,
and author N. Shimizu, title title Ab initio multishell
valence-space hamiltonians and the island of inversion, 10.1103/PhysRevC.102.034320 journal journal
Phys. Rev. C volume 102, pages
034320 (year 2020)NoStop
[Frosini et al.(2022)Frosini, Duguet, Ebran, Bally, Mongelli, Rodríguez,
Roth, and Somà]Frosini:2021sxj
author author Mikael Frosini, author Thomas Duguet, author Jean-Paul Ebran, author Benjamin Bally,
author Tobias Mongelli, author Tomás R. Rodríguez,
author Robert Roth, and author Vittorio Somà, title title Multi-reference many-body
perturbation theory for nuclei: II. Ab initio study of neon isotopes via PGCM
and IM-NCSM calculations, 10.1140/epja/s10050-022-00693-y journal journal
Eur. Phys. J. A volume 58, pages 63
(year 2022), http://arxiv.org/abs/2111.00797
arXiv:2111.00797 [nucl-th] NoStop
[Hagen et al.(2022)Hagen,
Novario, Sun, Papenbrock,
Jansen, Lietz, Duguet, and Tichai]hagen2022
author author G. Hagen, author S. J. Novario,
author Z. H. Sun, author T. Papenbrock, author
G. R. Jansen, author
J. G. Lietz, author
T. Duguet, and author
A. Tichai, title title Angular-momentum projection in coupled-cluster theory:
Structure of ^34Mg, 10.1103/PhysRevC.105.064311 journal journal
Phys. Rev. C volume 105, pages
064311 (year 2022)NoStop
[Sun et al.(2024)Sun,
Ekström, Forssén, Hagen, Jansen, and Papenbrock]sun2024
author author Z. H. Sun, author A. Ekström, author C. Forssén, author G. Hagen, author G. R. Jansen, and author T. Papenbrock, title title
Multiscale physics of atomic nuclei from first principles, 10.48550/arXiv.2404.00058 journal journal arXiv e-prints , pages arXiv:2404.00058 (year 2024)NoStop
[Nilsson(1955)]nilsson1955
author author S. G. Nilsson, title title Binding states
of individual nucleons in strongly deformed nuclei, http://publ.royalacademy.dk/books/75/441 journal journal K. Dan. Vidensk. Selsk. Mat. Fys. Medd. volume 29, pages no.16 (year
1955)NoStop
[Mayer and Jensen(1955)]mayer1955
author author M. G. Mayer and author J. H. D. Jensen, @noop title Elementary Theory of
Nuclear Shell Structure (publisher John Wiley & Sons, address New York, year 1955)NoStop
[Hagen et al.(2014)Hagen,
Papenbrock, Hjorth-Jensen, and Dean]hagen2014
author author G. Hagen, author T. Papenbrock,
author M. Hjorth-Jensen, and author D. J. Dean, title title Coupled-cluster computations of atomic
nuclei, 10.1088/0034-4885/77/9/096302 journal journal Rep. Prog. Phys. volume 77, pages 096302 (year
2014)NoStop
[Hagen et al.(2016)Hagen,
Jansen, and Papenbrock]hagen2016b
author author G. Hagen, author G. R. Jansen, and author T. Papenbrock, title title Structure of
^78Ni from first-principles computations, 10.1103/PhysRevLett.117.172501 journal journal
Phys. Rev. Lett. volume 117, pages
172501 (year 2016)NoStop
[Morris et al.(2018)Morris,
Simonis, Stroberg, Stumpf,
Hagen, Holt, Jansen,
Papenbrock, Roth, and Schwenk]morris2018
author author T. D. Morris, author J. Simonis,
author S. R. Stroberg, author C. Stumpf, author
G. Hagen, author J. D. Holt, author G. R. Jansen, author T. Papenbrock, author R. Roth, and author A. Schwenk, title title Structure of the lightest
tin isotopes, 10.1103/PhysRevLett.120.152503 journal journal Phys. Rev. Lett. volume 120, pages 152503 (year
2018)NoStop
[Hu et al.(2022)Hu,
Jiang, Miyagi, Sun,
Ekström, Forssén, Hagen, Holt, Papenbrock, Stroberg, and Vernon]hu2022
author author Baishan Hu, author Weiguang Jiang,
author Takayuki Miyagi, author Zhonghao Sun, author
Andreas Ekström, author
Christian Forssén, author
Gaute Hagen, author
Jason D. Holt, author
Thomas Papenbrock, author
S. Ragnar Stroberg, and author Ian Vernon, title
title Ab initio predictions link the neutron skin of
^208Pb to nuclear forces, 10.1038/s41567-022-01715-8 journal journal
Nature Physics volume 18, pages
1196–1200 (year 2022)NoStop
[Hu et al.(2024)Hu,
Sun, Hagen, and Papenbrock]hu2024
author author B. S. Hu, author Z. H. Sun,
author G. Hagen, and author T. Papenbrock, title
title Ab initio computations of strongly deformed
nuclei near ^80Zr, 10.1103/PhysRevC.110.L011302 journal journal
Phys. Rev. C volume 110, pages
L011302 (year 2024)NoStop
[Hu et al.(2024)Hu,
Sun, Hagen, Jansen, and Papenbrock]hu2024b
author author B. S. Hu, author Z. H. Sun,
author G. Hagen, author G. R. Jansen, and author T. Papenbrock, title title Ab initio computations from ^78Ni
towards ^70Ca along neutron number N=50, 10.48550/arXiv.2408.07856 journal journal arXiv
e-prints , pages arXiv:2408.07856 (year
2024)NoStop
[Hagen et al.(2007)Hagen,
Papenbrock, Dean, Schwenk,
Nogga, Włoch, and Piecuch]hagen2007a
author author G. Hagen, author T. Papenbrock,
author D. J. Dean, author A. Schwenk, author
A. Nogga, author M. Włoch, and author P. Piecuch, title title Coupled-cluster theory for three-body Hamiltonians, 10.1103/PhysRevC.76.034302 journal journal Phys. Rev. C volume 76, pages 034302 (year 2007)NoStop
[Roth et al.(2012)Roth,
Binder, Vobig, Calci,
Langhammer, and Navrátil]roth2012
author author Robert Roth, author Sven Binder,
author Klaus Vobig, author Angelo Calci, author
Joachim Langhammer, and author Petr Navrátil, title title Medium-Mass Nuclei with Normal-Ordered
Chiral NN+3N Interactions, 10.1103/PhysRevLett.109.052501 journal journal
Phys. Rev. Lett. volume 109, pages
052501 (year 2012)NoStop
[Frosini et al.(2021)Frosini, Duguet, Bally, Beaujeault-Taudière, Ebran, and Somà]Frosini:2021tuj
author author M. Frosini, author T. Duguet,
author B. Bally, author Y. Beaujeault-Taudière, author J. P. Ebran, and author V. Somà, title
title In-medium k-body reduction of n-body
operators: A flexible symmetry-conserving approach based on the sole one-body
density matrix, 10.1140/epja/s10050-021-00458-z
journal journal Eur. Phys. J. A volume 57, pages 151 (year
2021)NoStop
[Bender et al.(2003)Bender,
Heenen, and Reinhard]bender2003
author author Michael Bender, author Paul-Henri Heenen, and author Paul-Gerhard Reinhard, title title
Self-consistent mean-field models for nuclear structure, 10.1103/RevModPhys.75.121 journal journal Rev.
Mod. Phys. volume 75, pages 121–180
(year 2003)NoStop
[Kümmel et al.(1978)Kümmel, Lührmann, and Zabolitzky]kuemmel1978
author author H. Kümmel, author K. H. Lührmann, and author J. G. Zabolitzky, title title
Many-fermion theory in expS- (or coupled cluster) form, 10.1016/0370-1573(78)90081-9 journal journal Phys. Rep. volume 36, pages
1 – 63 (year 1978)NoStop
[Bartlett and Musiał(2007)]bartlett2007
author author Rodney J. Bartlett and author Monika Musiał, title title
Coupled-cluster theory in quantum chemistry, 10.1103/RevModPhys.79.291 journal journal Rev.
Mod. Phys. volume 79, pages 291–352
(year 2007)NoStop
[Shavitt and Bartlett(2009)]shavittbartlett2009
author author I. Shavitt and author R. J. Bartlett, @noop title Many-body Methods in
Chemistry and Physics (publisher Cambridge University
Press, address Cambridge UK, year
2009)NoStop
[Coester and Kümmel(1960)]coester1960
author author F. Coester and author H. Kümmel, title title Short-range
correlations in nuclear wave functions, 10.1016/0029-5582(60)90140-1 journal journal
Nuclear Physics volume 17, pages 477
– 485 (year 1960)NoStop
[Qiu et al.(2017)Qiu,
Henderson, Zhao, and Scuseria]qiu2017
author author Yiheng Qiu, author Thomas M. Henderson, author Jinmo Zhao,
and author Gustavo E. Scuseria, title title Projected
coupled cluster theory, 10.1063/1.4991020 journal journal J. Chem. Phys. volume
147, pages 064111 (year 2017)NoStop
[Bally and Duguet(2018)]bally2018
author author B. Bally and author T. Duguet, title title Norm overlap between
many-body states: Uncorrelated overlap between arbitrary bogoliubov product
states, 10.1103/PhysRevC.97.024304 journal
journal Phys. Rev. C volume 97, pages 024304 (year 2018)NoStop
[Arponen(1982)]arponen1982
author author J. Arponen, title title The method of
stationary cluster amplitudes and the phase transition in the lipkin
pseudospin model, 10.1088/0305-4616/8/8/004 journal journal Journal of Physics G: Nuclear Physics volume 8, pages L129 (year
1982)NoStop
[Arponen(1983)]arponen1983
author author Jouko Arponen, title title Variational
principles and linked-cluster exp S expansions for static and dynamic
many-body problems, 10.1016/0003-4916(83)90284-1
journal journal Ann. Phys. volume 151, pages 311 – 382 (year
1983)NoStop
[Varshalovich et al.(1988)Varshalovich, Moskalev, and Khersonskii]varshalovich1988
author author D. A. Varshalovich, author A. N. Moskalev, and author V. K. Khersonskii, @noop title Quantum theory of
angular momentum (publisher World Scientific, address Singapore, year 1988)NoStop
[Thouless(1960)]thouless1960
author author D. J. Thouless, title title Stability
conditions and nuclear rotations in the Hartree-Fock theory, 10.1016/0029-5582(60)90048-1 journal journal Nuclear Physics volume 21, pages 225–232 (year 1960)NoStop
[Rodriguez-Laguna et al.(2020)Rodriguez-Laguna, Robledo, and Dukelsky]robledo2020
author author Javier Rodriguez-Laguna, author Luis Miguel Robledo, and author Jorge Dukelsky, title title
Efficient computation of matrix elements of generic slater determinants, 10.1103/PhysRevA.101.012105 journal journal Phys. Rev. A volume 101, pages 012105 (year 2020)NoStop
[Ekström et al.(2013)Ekström, Baardsen, Forssén,
Hagen, Hjorth-Jensen, Jansen,
Machleidt, Nazarewicz, Papenbrock, Sarich, and Wild]ekstrom2013
author author A. Ekström, author G. Baardsen,
author C. Forssén, author G. Hagen, author
M. Hjorth-Jensen, author
G. R. Jansen, author
R. Machleidt, author
W. Nazarewicz, author
T. Papenbrock, author
J. Sarich, and author
S. M. Wild, title title Optimized chiral nucleon-nucleon interaction at
next-to-next-to-leading order, 10.1103/PhysRevLett.110.192502 journal journal
Phys. Rev. Lett. volume 110, pages
192502 (year 2013)NoStop
[ntc()]ntcl
@noop title NTCL – Nuclear Tensor
Contraction Library, howpublished
<https://gitlab.com/ntcl/ntcl>, note accessed:
2024-05-21NoStop
[fro()]frontier
@noop title The Frontier User Guide, howpublished
<https://docs.olcf.ornl.gov/systems/frontier_user_guide.html>, note accessed: 2024-05-21NoStop
[bla()]blas
@noop title BLAS (Basic Linear Algebra
Subprograms), howpublished
<https://www.netlib.org/blas/>, note accessed:
2024-05-21NoStop
[Hebeler et al.(2011)Hebeler, Bogner, Furnstahl, Nogga, and Schwenk]hebeler2011
author author K. Hebeler, author S. K. Bogner,
author R. J. Furnstahl, author A. Nogga, and author
A. Schwenk, title title Improved nuclear matter calculations from chiral
low-momentum interactions, 10.1103/PhysRevC.83.031301
journal journal Phys. Rev. C volume 83, pages 031301 (year
2011)NoStop
[Hagen et al.(2016)Hagen, Hjorth-Jensen, Jansen, and Papenbrock]hagen2016
author author G. Hagen, author M. Hjorth-Jensen, author G. R. Jansen, and author T. Papenbrock, title title Emergent
properties of nuclei from ab initio coupled-cluster calculations, 10.1088/0031-8949/91/6/063006 journal journal Phys. Scr. volume 91, pages
063006 (year 2016)NoStop
[Entem and Machleidt(2003)]entem2003
author author D. R. Entem and author R. Machleidt, title title Accurate
charge-dependent nucleon-nucleon potential at fourth order of chiral
perturbation theory, 10.1103/PhysRevC.68.041001
journal journal Phys. Rev. C volume 68, pages 041001 (year
2003)NoStop
[Bogner et al.(2007)Bogner,
Furnstahl, and Perry]bogner2007
author author S. K. Bogner, author R. J. Furnstahl, and author R. J. Perry, title title Similarity
renormalization group for nucleon-nucleon interactions, 10.1103/PhysRevC.75.061001 journal journal Phys.
Rev. C volume 75, pages 061001
(year 2007)NoStop
[Epelbaum et al.(2002)Epelbaum, Nogga, Glöckle, Kamada, Meißner, and Witała]epelbaum2002
author author E. Epelbaum, author A. Nogga,
author W. Glöckle, author H. Kamada, author
Ulf-G. Meißner, and author H. Witała, title
title Three-nucleon forces from chiral effective field
theory, 10.1103/PhysRevC.66.064001 journal
journal Phys. Rev. C volume 66, pages 064001 (year 2002)NoStop
[Lin et al.(2024)Lin,
Zhou, Yao, and Hergert]lin2024
author author Wei Lin, author Enfu Zhou,
author Jiangming Yao, and author Heiko Hergert, title title Quantum-Number Projected
Generator Coordinate Method for ^21Ne with a Chiral
Two-Nucleon-Plus-Three-Nucleon Interaction, 10.3390/sym16040409 journal journal Symmetry volume 16, pages 409 (year
2024)NoStop
[Bally and Bender(2021)]bally2021
author author Benjamin Bally and author Michael Bender, title title
Projection on particle number and angular momentum: Example of triaxial
bogoliubov quasiparticle states, 10.1103/PhysRevC.103.024315 journal journal
Phys. Rev. C volume 103, pages
024315 (year 2021)NoStop
[Bengtsson and Ragnarsson(1985)]BENGTSSON198514
author author Tord Bengtsson and author Ingemar Ragnarsson, title title Rotational
bands and particle-hole excitations at very high spin, https://doi.org/10.1016/0375-9474(85)90541-X journal journal Nuclear Physics A volume 436, pages 14–82 (year 1985)NoStop
[Sahoo et al.(2023)Sahoo,
Srivastava, and Suzuki]sahoo2023
author author Subhrajit Sahoo, author Praveen C. Srivastava, and author Toshio Suzuki, title title
Study of structure and radii for ^20-31Na isotopes using microscopic
interactions, 10.1016/j.nuclphysa.2023.122618
journal journal Nucl. Phys. A volume 1032, pages 122618 (year
2023)NoStop
[Warburton and Brown(1992)]warburton1992
author author E. K. Warburton and author B. A. Brown, title title Effective
interactions for the 0p1s0d nuclear shell-model space, 10.1103/PhysRevC.46.923 journal journal Phys.
Rev. C volume 46, pages 923–944
(year 1992)NoStop
[ens(2023)]ensdf
10.18139/NNDC.ENSDF/1845010 title
Evaluated nuclear structure data file (ensdf), (year
2023)NoStop
|
http://arxiv.org/abs/2409.02916v1 | 20240904175338 | Pseudospectral method for solving PDEs using Matrix Product States | [
"Jorge Gidi",
"Paula García-Molina",
"Luca Tagliacozzo",
"Juan José García-Ripoll"
] | quant-ph | [
"quant-ph",
"cs.NA",
"math.NA"
] |
[email protected]
Institute of Fundamental Physics IFF-CSIC, Calle Serrano 113b, Madrid 28006, Spain
Millennium Institute for Research in Optics and Departamento de Física, Facultad de Ciencias Físicas y Matemáticas, Universidad de Concepción, Casilla 160-C, Concepción, Chile
[email protected]
Institute of Fundamental Physics IFF-CSIC, Calle Serrano 113b, Madrid 28006, Spain
Institute of Fundamental Physics IFF-CSIC, Calle Serrano 113b, Madrid 28006, Spain
Institute of Fundamental Physics IFF-CSIC, Calle Serrano 113b, Madrid 28006, Spain
§ ABSTRACT
This research focuses on solving time-dependent partial differential equations (PDEs), in particular the time-dependent Schrödinger equation, using matrix product states (MPS). We propose an extension of Hermite Distributed Approximating Functionals (HDAF) to MPS, a highly accurate pseudospectral method for approximating functions of derivatives. Integrating HDAF into an MPS finite-precision algebra, we test four types of quantum-inspired algorithms for time evolution: explicit Runge-Kutta methods, the Crank-Nicolson method, the explicitly restarted Arnoldi iteration, and the split-step method.
The benchmark problem is the expansion of a particle in a quantum quench, characterized by a rapid increase in space requirements, where HDAF surpasses traditional finite difference methods in accuracy with a comparable cost.
Moreover, the efficient HDAF approximation to the free propagator avoids the need for Fourier transforms in split-step methods, significantly enhancing their performance with an improved balance in cost and accuracy.
Both approaches exhibit similar error scaling and run times compared to FFT vector methods; however, MPS offer an exponential advantage in memory, overcoming vector limitations to enable larger discretizations and expansions. Finally, the MPS HDAF split-step method successfully reproduces the physical behavior of a particle expansion in a double-well potential, demonstrating viability for actual research scenarios.
Pseudospectral method for solving PDEs using Matrix Product States
Juan José García-Ripoll
Received ????; accepted ????
==================================================================
§ INTRODUCTION
Solving time-dependent partial differential equations (PDEs) is crucial across most fields in science and engineering.
In the quantum domain, the challenge is additionally plagued by exponential costs <cit.>, both in the number of components of the system and in the unbounded domain of certain problems. An example of the latter is a particle expansion in a potential well <cit.>.
The simulation of large quantum systems rapidly becomes intractable by traditional computational approaches due to their exponential scaling in complexity. Quantum computers have been proposed as a promising tool to solve PDEs more efficiently <cit.>, making use of the exponential compression that the amplitude encoding provides <cit.> and tools such as the Quantum Fourier Transform (QFT) <cit.>. However, there are still no scalable and fault-tolerant quantum computers where those algorithms may be applied <cit.>, and it is also uncertain whether quantum computers are required at all, especially for bandwidth-limited functions with limited entanglement <cit.>.
A current challenge is to develop alternative, quantum-inspired algorithms that profit from some of the exponential compressions available in quantum algorithms and encodings while enabling the execution in classical computers.
This challenge involves developing three tools: (i) an efficient encoding of the PDE solution, (ii) a corresponding encoding of the PDE operator itself, and (iii) an algorithm for time evolution.
In this work, we address these three objectives in novel ways.
We use the representation of matrix product states (MPS) for bandwidth-limited functions <cit.>, also known in the mathematical community as quantized tensor trains (QTT) <cit.>. Within this formulation, we develop an innovative encoding of differential operators, PDEs, and free-evolution propagators based on Hermite Distributed Approximating Functionals (HDAF) <cit.>, to yield accurate matrix product operators (MPOs) with a low bond dimension. We present three families of time evolution algorithms using these operators: Global explicit and implicit methods, Arnoldi-based methods, and split-step methods. These new techniques are benchmarked against each other and alternative quantum-inspired methods based on finite differences <cit.>. They are also compared to state-of-the-art spectral split-step methods in the standard vector representation <cit.>.
As a benchmark, we study the expansion of a particle in a potential well. This computationally demanding scenario is highly relevant in optomechanics research <cit.>. As the domain size increases dramatically, conventional computational methods suffer considerable strain. In this context, the qubit encoding in MPS/QTT may provide exponential data compression. However, the particle's acceleration induces chirping of the wavefunction, potentially increasing the bond dimension required beyond any computational benefit. For the limited cases where this problem can be analytically solved <cit.>, it constitutes an effective testbed to stress and validate numerical solvers.
When correctly tuned, the HDAF MPOs for differential operators are much more accurate than those based on finite differences and have a comparable cost. Also, the time evolution methods built upon these operators inherit an improvement in accuracy and efficiency.
Our best-performing time evolution method is the HDAF split-step, which leverages the efficient HDAF representation of the free propagator on a coordinate basis to avoid using Fourier Transforms.
For the particle expansion problem, quantum-inspired methods converge with subexponential time scaling, which is competitive with a vector implementation. Moreover, in the absence of chirping, our results are favorable to the HDAF split-step method using MPS, in contrast to vectors using the Fast Fourier Transform (FFT).
Quantum-inspired methods have been used to tackle quantum numerical analysis problems before, such as static <cit.> and time-dependent PDEs <cit.>, and they have even permeated to other fields, such as kinetic plasma simulation <cit.>. Time evolution problems usually rely on Fourier techniques with Trotter expansions <cit.> or Chebyshev propagation schemes <cit.>. As an alternative, this work extends the HDAF approach to MPS/QTT, accurately and efficiently encoding differential operators, such as arbitrary functions of derivatives. Applied to the free-propagator, it enables Trotter expansions without Fourier transforms.
The work is structured as follows.
Section <ref> presents the expansion problems to validate the numerical methods.
Section <ref> introduces the MPS finite-precision algebra framework for quantum-inspired numerical algorithms and the HDAF machinery to approximate derivatives and functions of derivatives as MPOs, including metaheuristics to tune these approximations.
Section <ref> reviews the quantum time evolution problem and presents the numerical methods to address it. These techniques are contrasted via a one-step study on the one-dimensional quantum quench problem to determine the most convenient option.
Then, Section <ref> examines the best-performing technique for a long-time evolution on the complete range of expansion for the harmonic and non-harmonic problems.
Finally, Section <ref> summarizes the conclusions of this study.
§ PARTICLE EXPANSION
A quantum quench is a fundamental process where a system is driven out of equilibrium by a sudden change in its Hamiltonian <cit.>. In this section, we introduce the problem of particle expansion due to the sudden relaxation of an initial harmonic potential.
The particle expansion process presents several characteristics relevant to our study. First, it is a problem of interest in many areas, including many-body physics <cit.> and quantum optomechanics <cit.>. Second, it stresses traditional and MPS-based numerical methods. Vector representations become too expensive for large expansions as they require the storage of a huge spatial domain. This makes the problem an interesting playground for MPS simulation. Nonetheless, the particle acceleration induces a chirping of the wavefunction, thus incrementing the bond dimension of the solution and putting a strain on MPS-based simulations as well. A third feature of this process is that when the quench varies from one harmonic potential to another, a closed-form analytical solution is known <cit.>. This is a critical feature allowing us to benchmark not only the speed but also the accuracy of our methods.
The Schrödinger equation describes the evolution of a particle's wavefunction
i∂_t ψ(x,t) = (-ħ^2/2m∂_x^2 + V(x,t))ψ(x,t).
The election of the potential V(x,t) determines the physical behavior of the system. The benchmark problem we introduce is the sudden change of a harmonic potential from frequency ω_0 to ω_H at time t=0,
V(x,t) =
1/2ω_0^2x^2, t ≤ 0,
1/2ω_H^2x^2, t> 0.
Assuming the wavefunction at time t = 0 is relaxed to the ground state of the previous Hamiltonian,
ψ(x, t=0) = (ω_0/π)^1/4exp(-1/2ω_0x^2),
its evolution is prescribed according to
ψ(x,t) = (ω(t)/π)^1/4exp(-[ω(t)/2 + iβ(t)] x^2),
ω(t) = ω_H(ω_H/ω_0cos^2(ω_H t)+ω_0/ω_Hsin^2(ω_H t))^-1,
β(t) = ω(t)/4(ω_H/ω_0-ω_0/ω_H)sin(2ω_H t).
The solution is a complex Gaussian with width σ(t)=1/√(ω(t)), modulated in time with period π/ω_H. For ω_H < ω_0, the system undergoes an expansion during the first half of the period. The smallest width is σ_min = 1 / √(ω_0) at time t=0, and the largest width σ_max = √(ω_0) / ω_H at time t = 0.5π / ω_H.
The expansion is quantified by σ_max / σ_min = ω_0 / ω_H. That is, the frequency ratio ω_0/ω_H is also the expansion ratio, dictating the total amplification of the wavefunction's spatial extent.
A larger expansion ratio requires more points to represent the solution accurately, challenging common computational approaches.
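For concreteness, the closed-form solution (<ref>) is straightforward to evaluate numerically. The following sketch, in plain NumPy with natural units ħ = m = 1 and illustrative frequencies ω_0 = 10, ω_H = 1 of our own choosing, evaluates ω(t) and β(t) and checks the norm and the predicted maximal width:

```python
import numpy as np

def quench_state(x, t, w0=10.0, wH=1.0):
    """Analytic wavefunction for the quench w0 -> wH (hbar = m = 1)."""
    w = wH / ((wH / w0) * np.cos(wH * t)**2 + (w0 / wH) * np.sin(wH * t)**2)
    beta = 0.25 * w * (wH / w0 - w0 / wH) * np.sin(2 * wH * t)
    return (w / np.pi)**0.25 * np.exp(-(0.5 * w + 1j * beta) * x**2)

x = np.linspace(-40, 40, 2**12)
dx = x[1] - x[0]
psi = quench_state(x, t=np.pi / 2)        # half period: maximal expansion
rho = np.abs(psi)**2
print("norm     :", np.sum(rho) * dx)                    # ~ 1
print("rms width:", np.sqrt(np.sum(x**2 * rho) * dx))    # ~ sigma_max / sqrt(2)
```

A grid of 2^12 points on [-40, 40] comfortably resolves both σ_min and σ_max for this expansion ratio of 10; larger ratios quickly outgrow such uniform grids, which motivates the MPS encoding discussed below.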
Another problem featuring similar expansion dynamics occurs when the initially harmonic potential changes to a wider trap with a non-harmonic component. This scenario holds high interest for experimental settings in optomechanics research <cit.>. However,
analytical solutions are not usually known, while the intricacies that complicate numerical methods still hold.
One such case happens for a double-well potential,
V(x,t) =
1/2ω_0^2x^2, t ≤ 0,
1/2ω_H^2 x^2 + u exp(-x^2/2σ^2), t> 0.
Qualitatively, the harmonic expansion behavior dominates the wavefunction's evolution for a sufficiently small value of u, with the Gaussian bump at x=0 acting as a perturbation. This allows us to use the results of the harmonic quench analysis as a basis for the study of the double-well potential. The Gaussian component of the potential, with u > 0, is expected to induce a symmetric separation of the expanding particle. This drives the wavefunction's evolution away from the Gaussian form (<ref>) into a state with a two-peaked probability density. We choose this problem as a test of the feasibility of the proposed numerical methods in a setting of actual research interest.
§ MPS ENCODING AND DIFFERENTIAL OPERATORS
MPS originally arose in the domain of physics to study quantum many-body problems. Dolgov rediscovered this formalism in the field of applied mathematics under the name of quantized tensor trains (QTTs) <cit.>, a subclass of the broader class of tensor trains (TTs) <cit.>. A continuous alternative class of TTs is the functional tensor trains (FTTs) <cit.>, which hold a network of univariate matrix-valued functions instead of rank-three tensors.
The exponential memory compression of MPS/QTT can bypass the curse of dimensionality in the representation of functions. This motivated the development of similar encodings from a quantum-inspired perspective <cit.>. Indeed, MPS/QTT constitute efficient ansätze for representing functions with fastly decaying Fourier coefficients <cit.>.
MPS and TTs have been successfully applied to a variety of numerical analysis problems such as high-dimensional nonlinear PDEs <cit.>, the Hamilton Jacobi Bellman equations <cit.>, the Schrödinger equation <cit.>, and stochastic problems <cit.>. Combining Fourier techniques with Trotter expansions <cit.>, or Chebyshev propagation schemes <cit.> allows for quantum dynamics simulations. Other approaches rely on one-step implicit time integration using an ALS-type solver or global space-time formulation to solve multi-dimensional parabolic problems <cit.>. Quantum-inspired proposals are successful at solving different PDEs, such as Schrödinger equations <cit.>, turbulence problems <cit.>, Hamiltonian PDEs <cit.>, and the Vlasov-Poisson system <cit.>.
Combining this MPS-based representation of functions and the analogous encoding of operators as matrix product operators (MPO), along with a basic set of operations and efficient truncation algorithms, leads to a finite precision algebra <cit.> that operates similarly to standard matrix-vector operations. This algebra is the basis for developing time evolution quantum-inspired algorithms in section <ref>.
This section presents an extension of the Hermite Distributed Approximating Functionals (HDAF) to reconstruct functions of differential operators within this finite precision MPS-MPO framework. The HDAF performs this reconstruction as a linear combination of Hermite polynomials weighted by a Gaussian filter, approximating these operators with tunable pseudospectral precision at a limited cost. The section also covers a general review of the HDAF and the metaheuristics behind the use of this technique.
§.§ Quantum-inspired numerical analysis
The amplitude encoding of functions <cit.> represents a function f(x) with x∈ [a,b) in an n-qubit quantum register as a normalized quantum state
|f^(n)⟩ = 1/𝒩_f^1/2∑_s=0^2^n-1 f(x_s^(n))|s⟩, x_s^(n) = a + sΔ x ^(n),
where 𝒩_f is the normalization constant and the index s={ 0,…, 2^n-1} maps each coordinate x_s to its corresponding quantum state |s⟩ <cit.>. The amplitude encoding may be exponentially compressed in a quantum register by mapping the indices s to the states of n qubits. This leads to a binary encoding of the coordinates s=∑_i=1^n2^n-is_i, with s_i ∈{ 0, 1 }.
This function representation in a quantum register creates a many-body wavefunction that, for bandwidth-limited functions, admits an efficient MPS representation <cit.>
|ψ⟩ =∑_{ s_i }∑_{α_i } (A_α_1^s_1A_α_1,α_2^s_2… A_α_n-1^s_n)|s_1⟩
⊗|s_2⟩⊗ ... ⊗|s_n⟩,
where each qubit is mapped to a site of the MPS. The physical indices s_i correspond to the values of the energy levels of the qubits, and the α_i = {1,…,χ_i} are the bond indices, with χ_i the bond dimension.
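To make the encoding concrete, the following minimal sketch, ours rather than an optimized implementation, samples a function on 2^n uniform points and splits the resulting vector into MPS cores by successive truncated SVDs, one qubit per site:

```python
import numpy as np

def mps_from_function(f, a, b, n, tol=1e-12):
    """Sample f on 2**n uniform points; split into MPS cores by truncated SVDs."""
    x = a + (b - a) * np.arange(2**n) / 2**n
    v = f(x).astype(complex)
    v /= np.linalg.norm(v)
    cores, chi = [], 1
    for _ in range(n - 1):
        m = v.reshape(2 * chi, -1)           # split off the next (binary) qubit
        u, s, vh = np.linalg.svd(m, full_matrices=False)
        keep = max(1, int(np.sum(s > tol * s[0])))
        cores.append(u[:, :keep].reshape(chi, 2, keep))
        v = (s[:keep, None] * vh[:keep]).ravel()
        chi = keep
    cores.append(v.reshape(chi, 2, 1))
    return cores

cores = mps_from_function(lambda x: np.exp(-x**2), -10, 10, n=14)
print([c.shape[2] for c in cores])   # bond dimensions stay small for a Gaussian
```

For a Gaussian, the printed bond dimensions saturate at a small value, consistent with its rapidly decaying Fourier coefficients.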
Given this efficient representation of functions, a relevant challenge is to find a comparable representation for operators that act on these functions, from PDE operators that describe the action of derivatives and potentials to evolution operators that model the dynamics of the quantum state.
Potentials are diagonal operators and can be derived from the MPS representation of their corresponding functions either exactly or via interpolation techniques like the Chebyshev approximation <cit.> or the TT-cross interpolation <cit.>. Thus, the real challenge arises from
finding suitable, efficient representations of derivatives and functions thereof.
The finite difference method is one of the most widespread techniques for approximating derivatives. This method uses the order p Taylor expansion of the function, with an error O(Δ x^m), where m depends on the terms combined for the approximation. Thus, the grid size limits the accuracy of the finite difference method. The most common implementations are the centered finite difference formulas
∂ f(x)/∂ x = f(x + Δ x) - f(x - Δ x)/2 Δ x + O(Δ x^2),
∂^2 f(x)/∂ x^2 = f(x + Δ x) - 2 f(x) + f(x - Δ x)/Δ x^2 + O(Δ x^2),
whose truncation error scales quadratically with the discretization. The weak noise suppression of the centered finite difference formula can be enhanced to construct smooth noise-robust differentiators <cit.>.
The finite difference MPO uses a linear combination of displacement operators Σ^±<cit.>,
|∂_xf^(n)⟩ ≃1/2Δx(Σ̂^+-Σ̂^-)|f^(n)⟩ + O(Δ x^2),
|∂^2_xf^(n)⟩ ≃1/Δx^2(Σ̂^+-2𝕀+Σ̂^-)|f^(n)⟩ + O(Δ x^2).
This MPO representation of the finite difference operators has a fixed bond dimension χ=3 independent of the number of sites.
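The displacement structure can be mirrored with dense matrices for testing. A small sketch, assuming periodic boundary conditions and helper names of our own:

```python
import numpy as np

n = 10
N = 2**n
x = np.linspace(-8, 8, N, endpoint=False)
dx = x[1] - x[0]

# Dense analogues of the displacement MPOs: (S_plus f)_i = f_{i+1} (periodic)
S_plus = np.roll(np.eye(N), 1, axis=1)
S_minus = S_plus.T

D2 = (S_plus - 2 * np.eye(N) + S_minus) / dx**2

f = np.exp(-x**2 / 2)
f2_exact = (x**2 - 1) * f
print("max FD error:", np.max(np.abs(D2 @ f - f2_exact)))  # O(dx^2)
```

Halving Δ x reduces the error roughly fourfold, as expected from the O(Δ x^2) truncation term, until round-off dominates.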
In addition to the truncation error, the round-off error affects the finite-difference scheme. The trade-off between truncation and round-off error determines the optimum step size Δ x: the truncation error decreases with smaller Δ x, but values that are too small result in numerical errors due to round-off. Let us consider the second-derivative approximation, since this appears in the propagator of the time evolution.
The round-off error occurs when the difference between two consecutive points |f(x)-f(x±Δ x)| of the discretized function is of the order of the machine precision δ. This error is proportional to δ/Δ x^2 and can be amplified by the small denominator Δ x^2. Increasing the step size while maintaining the number of points for the discretization can correct this error.
§.§ Hermite Distributed Approximating Functionals
This work aims to overcome finite difference formulas' precision and speed limitations. In this regard, it is well known that spectral methods, such as Fourier techniques, can provide exponential speedup and precision guarantees for sufficiently smooth functions. Such methods have been derived in the domain of MPS/QTT methods <cit.> with some ad-hoc heuristics to improve the creation of operators. Hermite Distributed Approximating Functionals (HDAF) provide a powerful yet lesser-known numerical analysis technique. In the following, we will show how these methods work and how they can be adapted to engineer MPOs of arbitrary differential operators with relatively low costs. Our work differs from previous studies, in which HDAFs were applied in matrix form to each site of a tensor train <cit.>.
The Distributed Approximating Functionals (DAF) are well-tempered approximations to the Dirac delta distribution.
The first DAF, developed before the class name was coined, was the Hermite DAF <cit.>,
δ_M(x; σ) = exp(-x^2/2σ^2)/√(2π)σ∑_m=0^M/2(-1/4)^mH_2m(x/√(2)σ)/m!,
where H_n(x) is the n-th Hermite polynomial. It has two free parameters: The even integer M and the positive real σ are the order of the highest polynomial and the width of the approximation to the delta distribution, respectively.
The kernel defined in Eq. (<ref>) is a nascent delta function that operates as the identity for polynomial functions of degree M or lower,
f(x) ≈∫ dx' δ_M(x - x'; σ) f(x'),
approaching the Dirac distribution in the limit σ/M → 0. However, unlike the exact delta distribution, δ_M(x; σ) is generally a bandwidth-limited, infinitely smooth function amenable to quadrature methods and differentiation.
The method's well-tempered property arises from the absence of special points in the reconstruction. Exact reproduction at grid points is not required; therefore, it does not constitute an interpolation scheme.
Moreover, a fundamental property of Eq. (<ref>) is that the approximation converges uniformly to f(x) <cit.>, and the approximation error usually resembles the function f(x) albeit several orders of magnitude smaller <cit.>. This implies that the error is smaller when the function approaches zero, which is relevant and desirable for our wavefunctions applications.
Equation (<ref>) is customarily discretized on a uniform grid with spacing Δ x, using midpoint integration to render it as a matrix-vector product, f(x_i) = ∑_jK_ijf(x_j), where K is a symmetric Toeplitz matrix with components
K_ij = Δ x δ_M(Δ x |i - j|; σ).
Since δ_M(x; σ) has an exponentially decaying envelope, two essential properties arise: (i) the reconstruction matrix K can be made highly sparse and concentrated around its main diagonal, with the number of diagonals controlled by σ / Δ x, and (ii) a minimal number of diagonals (i.e. quadrature nodes) are required to accurately discretize the integral in Eq. (<ref>). The composite midpoint rule converges especially fast for periodic or peaked functions with vanishing derivatives at the integration limits <cit.>. Moreover, in this context, the midpoint rule surpasses the accuracy of higher-order Newton-Cotes quadrature rules using the same number of grid points <cit.>.
The MPO corresponding to the K matrix can be constructed as a weighted combination of the displacement operators Σ^± as
K̂ = Δ x δ_M(0; σ) 𝕀
+ ∑_i=1^2^n - 1Δ x δ_M(iΔ x; σ) (Σ̂^+i + Σ̂^-i),
where the symmetry of δ_M has been used. While in principle the sum ranges over the whole grid, in practice one only needs to sum until δ_M has vanished according to a prescribed tolerance, as detailed in Sec. <ref>.
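The discretized kernel is simple to assemble and test in dense form. The sketch below, with helper names of our own and the physicists' Hermite polynomials generated by their three-term recurrence, builds the Toeplitz matrix K of Eq. (<ref>) and reconstructs a bandwidth-limited function:

```python
import numpy as np
from math import factorial

def hermite(n, u):
    """Physicists' Hermite polynomial H_n(u) by the three-term recurrence."""
    h0, h1 = np.ones_like(u), 2 * u
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2 * u * h1 - 2 * k * h0
    return h1

def hdaf_delta(x, M, sigma):
    """HDAF approximation to the Dirac delta, delta_M(x; sigma)."""
    u = x / (np.sqrt(2) * sigma)
    s = sum((-0.25)**m * hermite(2 * m, u) / factorial(m)
            for m in range(M // 2 + 1))
    return np.exp(-u**2) / (np.sqrt(2 * np.pi) * sigma) * s

N, L, M = 512, 20.0, 40
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
sigma = 3 * dx

# Symmetric Toeplitz reconstruction matrix K_ij = dx * delta_M(dx |i - j|)
i = np.arange(N)
K = dx * hdaf_delta(dx * np.abs(np.subtract.outer(i, i)), M, sigma)

f = np.exp(-x**2) * np.cos(2 * x)
print("reconstruction error:", np.max(np.abs(K @ f - f)))
```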
§.§ HDAF differentiation
The HDAF formalism opens a path to estimate a function's derivative of any order, as well as functions D[∂/∂ x] of such derivatives. From Eq. (<ref>) it follows that
D[∂/∂ x] f(x) ≈∫ dx' D[∂/∂ x] δ_M(x - x') f(x').
An analytical expression for D[∂/∂ x] δ_M(x - x') is usually easy to find. The typical procedure involves using the Rodrigues formula for the Hermite polynomials,
H_n(x) = (-1)^nexp(x^2)∂^n/∂ x^nexp(-x^2),
to rewrite Eq. (<ref>) as
δ_M(x; σ) = 1/√(2π)σ∑_m=0^M/21/m!(-σ^2/4)^m
×∂^2m/∂ x^2mexp(-x^2/2σ^2).
Then, the differential operator is applied over δ_M(x - x'; σ) acting on the exponential, and the Rodrigues formula (<ref>) is used to recover an expression in terms of Hermite polynomials, without explicit derivatives. For instance, the l-th derivative leads to
δ_M^(l)(x; σ) = (-1/√(2)σ)^lexp(-x^2/2σ^2)/√(2π)σ
×∑_m=0^M/2(-1/4)^mH_2m + l(x/√(2)σ)/m!.
Analog to the reconstruction (<ref>), the differentiating MPO for the l-th derivative in the HDAF formalism is
K̂^(l) = Δ x δ_M^(l)(0; σ) 𝕀
+ ∑_i=1^2^n - 1Δ x δ_M^(l)(iΔ x; σ) (Σ̂^+i + (-1)^lΣ̂^-i),
where the symmetry or antisymmetry of δ_M^(l) has been used for l even or odd, respectively.
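A dense-matrix test of the differentiating kernel for l = 2, under the same conventions as the previous sketch, illustrates the pseudospectral accuracy reported in Fig. <ref>; here σ = 3Δ x anticipates the width heuristic discussed below:

```python
import numpy as np
from math import factorial

def hermite(n, u):
    h0, h1 = np.ones_like(u), 2 * u
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2 * u * h1 - 2 * k * h0
    return h1

def hdaf_deriv_kernel(x, l, M, sigma):
    """delta_M^(l)(x; sigma): l-th derivative of the HDAF kernel."""
    u = x / (np.sqrt(2) * sigma)
    s = sum((-0.25)**m * hermite(2 * m + l, u) / factorial(m)
            for m in range(M // 2 + 1))
    return ((-1 / (np.sqrt(2) * sigma))**l * np.exp(-u**2)
            / (np.sqrt(2 * np.pi) * sigma) * s)

N, M = 512, 40
x = np.linspace(-10, 10, N, endpoint=False)
dx = x[1] - x[0]
sigma = 3 * dx

i = np.arange(N)
K2 = dx * hdaf_deriv_kernel(dx * np.subtract.outer(i, i), 2, M, sigma)

f = np.exp(-x**2 / 2)
print("HDAF 2nd-derivative error:", np.max(np.abs(K2 @ f - (x**2 - 1) * f)))
```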
Attention must be paid to the fact that the l-th derivative has a prefactor σ^-(l+1) with σ = O(Δ x). Since these operators will be used within a finite-precision numerical framework, round-off errors can significantly impact the accuracy of the differentiation. These errors arise from the limited significant digits that computers may represent, inducing small approximation errors amplified by large weighting prefactors. In particular, these deviations dominate when Δ x is too small, thus limiting how dense the numerical grid can be.
The round-off error amplification problem is expected in numerical differentiation techniques but can be mitigated.
In the particular MPS-MPO framework, differentiating operators can be discretized to accommodate a certain number of qubits and then be adapted to a denser grid, where identities are added as new sites to account for each additional qubit. This approach is equivalent to nearest-neighbor interpolation. It keeps the round-off errors constant and does not introduce extra complexity to the MPO.
Figure <ref> compares the differentiation accuracy of HDAF and finite differences in approximating the second derivative of a Gaussian function. For the HDAF, the convergence in the number of qubits is faster for larger values of M, as expected. However, both HDAF and finite differences suffer from relevant round-off errors for many qubits. Once the optimal accuracy is achieved for a certain number of qubits, the accuracy of the operators can be retained while acting on a finer grid; round-off errors are effectively kept constant by following the procedure above.
§.§ HDAF free propagator
The HDAF scheme is very powerful, as it cannot only approximate derivatives but also general functions of those derivatives. The first and one of the most relevant applications of this idea, posed in Ref. <cit.>, is the
banded approximation of the free propagator.
From Eq. (<ref>), taking the differential operator D[∂/∂ x] to be the free propagator,
T(τ) = exp(iτ/2∂^2/∂ x^2),
the kernel for the approximation is T(τ)δ_M(x - x'; σ).
This quantity is readily computed from Eq. (<ref>) since T(τ) commutes with the derivatives and it spreads a Gaussian function to another,
T(τ)exp(-(x - x')^2/2σ^2) = [σ/σ_τ]exp(-(x-x')^2/2σ_τ^2),
mapping the original variance σ^2 to σ_τ^2 = σ^2 + iτ. Then, using Eq. (<ref>) yields the free propagator kernel
δ_M(x - x'; σ, τ) = T(τ) δ_M(x - x'; σ)
= exp(-(x-x')^2/2σ_τ^2)/√(2π)σ_τ
×∑_m=0^M/2(-σ^2/4σ_τ^2)^mH_2m((x - x')/√(2)σ_τ)/m!.
The MPO for the free propagator in the HDAF formalism is
K̂_τ = Δ x δ_M(0; σ, τ) 𝕀
+ ∑_i=1^2^n - 1Δ x δ_M(iΔ x; σ, τ) (Σ̂^+i + Σ̂^-i).
The kernel (<ref>) becomes complex and highly oscillatory as time increases. Also, its width increases as a fundamental consequence of the free propagator. However, the spreading in the HDAF formalism is the minimum possible since it is inherited from the Gaussian generator of the Hermite polynomials <cit.>.
The HDAF approximation for the propagator has been used in many applications of split-step integration methods within the traditional vector framework. In that framework, K̂_τ is represented as a matrix acting on a discretized function. A central contribution in this work is to realize that the same matrix can be more efficiently represented as an MPO using the displacement operators Σ̂^± and additional simplification steps that significantly reduce the effective bond dimension of the operator. In this scenario, the MPO HDAF propagator is a competitive alternative to using MPO Fourier-based techniques <cit.>, directly representing the evolution operator in the coordinate representation.
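As a self-contained check of the propagator kernel, the following sketch, ours, applies the dense K̂_τ to a normalized Gaussian and compares with the exact free evolution; it assumes the convention T(τ) = exp(iτ∂_x^2/2), so that σ_τ^2 = σ^2 + iτ, and the simplified width rule σ = max(3Δ x, √(τ)):

```python
import numpy as np
from math import factorial

def hermite(n, u):
    h0, h1 = np.ones_like(u), 2 * u
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2 * u * h1 - 2 * k * h0
    return h1

def hdaf_propagator_kernel(x, tau, M, sigma):
    """Freely propagated HDAF kernel delta_M(x; sigma, tau)."""
    st2 = sigma**2 + 1j * tau             # sigma_tau^2 = sigma^2 + i tau
    u = x / np.sqrt(2 * st2)
    c = -sigma**2 / (4 * st2)
    s = sum(c**m * hermite(2 * m, u) / factorial(m) for m in range(M // 2 + 1))
    return np.exp(-u**2) / np.sqrt(2 * np.pi * st2) * s

N, L, M, tau = 1024, 40.0, 40, 0.5
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
sigma = max(3 * dx, np.sqrt(tau))

i = np.arange(N)
K_tau = dx * hdaf_propagator_kernel(dx * np.subtract.outer(i, i), tau, M, sigma)

s0 = 1.0                                  # initial Gaussian width
psi0 = (np.pi * s0**2)**-0.25 * np.exp(-x**2 / (2 * s0**2))
w2 = s0**2 + 1j * tau                     # exact spreading under T(tau)
psi_exact = (np.pi * s0**2)**-0.25 * s0 / np.sqrt(w2) * np.exp(-x**2 / (2 * w2))
print("free-propagation error:", np.max(np.abs(K_tau @ psi0 - psi_exact)))
```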
§.§ HDAF metaheuristics
§.§.§ Free parameter selection
Identical reconstruction.
The formulation of the HDAF operator (<ref>) as a discrete matrix has 2 sources of error: (i) The assumption that the function f(x) can be expressed as a polynomial of degree M under the extent of the Gaussian envelope of the HDAF, and (ii) the discretization of the convolution integral to a finite sum employing the midpoint rule. While the error (i) vanishes in the limit M/σ→∞, it is clear from (ii) that it is not possible to increase M or decrease σ indefinitely. A more oscillatory integrand will require a larger value of σ / Δ x for the Gaussian envelope to cover enough nodes and achieve satisfactory integration accuracy.
In general, for a fixed M, there is a value of σ/Δ x that makes the reconstruction optimal, and the larger is M, the better the maximum accuracy that can be achieved. The rationale behind this optimal relationship between M and σ/Δ x is that a perfect reconstruction happens when the M zeroes of the HDAF match with the zeroes on the grid, and only the term in the origin contributes <cit.>. One possible approach, therefore, is to set the discrete HDAF to be 1 at the origin <cit.>,
K_ii = Δ x δ_M(0; σ) = 1,
yielding,
σ_M = Δ x/√(2π)∑_m=0^M/2(-1/4)^mH_2m(0)/m!,
which makes the HDAF approximately vanish at integer multiples of Δ x <cit.>.
In practice, a lower bound to σ/Δ x is prescribed to ensure convergence of the midpoint rule when M is small. For the context of double floating-point precision, we heuristically set
σ / Δ x ≥ 3
⇒σ_min = 3Δ x,
and choose the value of σ according to
σ = max( σ_M, σ_min).
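In code, this choice reads as follows; the closed form H_{2m}(0) = (-1)^m (2m)!/m! turns σ_M into a rapidly computable sum, accumulated here through the term ratio (2m+1)/(2m+2) to avoid large factorials:

```python
import numpy as np

def sigma_hdaf(M, dx):
    """Width rule sigma = max(sigma_M, 3*dx); sigma_M from the K_ii = 1 rule."""
    term, s = 1.0, 1.0        # term_m = (-1/4)^m H_{2m}(0) / m! = C(2m, m) / 4^m
    for m in range(M // 2):
        term *= (2 * m + 1) / (2 * m + 2)
        s += term
    return max(dx * s / np.sqrt(2 * np.pi), 3 * dx)

for M in (8, 20, 40, 80, 120):
    print(M, sigma_hdaf(M, 1.0))
```

Since σ_M/Δ x grows roughly like √(M)/π, the floor σ_min = 3Δ x dominates until M is fairly large.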
Differentiation.
Assuming that the l-th derivative of the function f(x) is accurately described by an HDAF of order M, i.e., that it also belongs to the DAF class <cit.>, the approximation to the derivative can be thought of as a reconstruction of f^(l)(x) instead of f(x),
= ∫ dx' δ_M(x - x'; σ) f^(l)(x').
In this spirit, the optimal value of σ is again computed using Eq. (<ref>), which does not depend upon the function to reconstruct, provided M is fixed.
Free evolution.
Since the action of the free propagator is to spread the original wavefunction, the width of the HDAF will not be a problem for the midpoint integration. For efficiency purposes, the choice of σ is made such that the new width of the freely-propagated HDAF is the smallest possible <cit.>. This value follows from equation (<ref>). The leading Gaussian,
exp(-x^2/2(σ^2 + iτ)) =
exp(-x^2/2 w^2)
exp(ix^2/2w^2τ/σ^2),
has an effective variance w^2 = (σ^2 + τ^2/σ^2) with an optimal value σ = √(τ) that minimizes its spatial extent. Then, the value of σ is chosen
σ = max( σ_M, σ_min, √(τ)),
where σ_M and σ_min are the same values (<ref>) and (<ref>) used for identical HDAF reconstruction.
§.§.§ Self-consistent error estimation
The HDAF filter (<ref>) can be analyzed in Fourier space. From equation (<ref>), the kernel spectrum has the analytical form
δ_M(k; σ) = exp(-k^2σ^2/2)∑_m=0^M/21/m!(k^2σ^2/2)^m.
The summation is a truncated series expansion of exp(k^2σ^2/2) up to order M/2. This expression is the basis to prove that the HDAF filter approaches a true Dirac delta distribution in the limits of an infinitely broad filter or an infinitely large polynomial basis,
lim_σ→ 0δ_M(k; σ) = lim_M→∞δ_M(k; σ) = 1 ∀ k∈ℝ,
thus identically preserving the function to reconstruct.
From equation (<ref>), one can also note that the Fourier expression of the HDAF is symmetric, bounded, and monotonically decreasing in k∈ℝ^+, with
1 = δ_M(0; σ) ≥δ_M(k; σ) ≥lim_k→∞δ_M(k; σ) = 0,
therefore acting as a low-pass filter. Moreover, it has been shown that δ_M(k, σ) is an almost-ideal low-pass filter with transition frequency
k^⋆ = √(M + 1) / σ,
and a transition region width that scales as O(M^-1/2σ^-1) <cit.>.
This behavior is depicted in Figure <ref> for varying M. There is a region of frequencies below k^⋆, the so-called DAF-plateau, such that δ_M(k; σ) ≈ 1. There follows a transition region centered around k^⋆ where the value of the filter smoothly vanishes, and then it indefinitely approximates to zero. As M is larger, the DAF plateau extends closer to k^⋆. Note that choosing σ from equation (<ref>) fixes the transition frequency k^⋆≈π/Δ x, which is the maximum frequency representable on a discrete uniform grid.
The HDAF reconstruction will be accurate for bandwidth-limited functions whose spectra lie within the DAF plateau, and any function with higher-frequency contributions will be smoothed. This suggests that this formalism is a good fit to use along with matrix product states since both techniques are especially suitable to represent bandwidth-limited functions <cit.>.
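Evaluating Eq. (<ref>) directly illustrates the almost-ideal low-pass behavior; the sketch below, ours, prints the filter around the transition frequency k^⋆:

```python
import numpy as np
from math import factorial

def hdaf_filter(k, M, sigma):
    """Fourier response of the HDAF kernel: a near-ideal low-pass filter."""
    z = (k * sigma)**2 / 2
    return np.exp(-z) * sum(z**m / factorial(m) for m in range(M // 2 + 1))

M, sigma = 40, 1.0
k_star = np.sqrt(M + 1) / sigma
for r in (0.5, 0.9, 1.0, 1.1, 1.5):
    print(f"k = {r:.1f} k*: filter = {hdaf_filter(r * k_star, M, sigma):.3e}")
```

The response drops from essentially 1 below k^⋆ to nearly 0 above it, with the transition sharpening as M grows.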
§.§.§ Evaluation of the HDAF coefficients.
All the HDAF operators presented here are generated as a combination of displacements and coefficients
K^(l)_τ = ∑_k=-2^n+1^2^n-1Δ x δ_M^(l)(kΔ x; σ, τ)(Σ^+)^k.
Moreover, the coefficients fulfill the general form
Δ xδ_M^(l)(x; σ, τ) = d_l∑_m=0^M/2h_m, l(x / √(2(σ^2 + iτ))),
with the definitions
d_l = (-1)^lΔ x/√(2(σ^2 + iτ))^l+1√(π),
h_n, l(x) = H_2n + l(x)exp(-x^2)c^n/n!,
c = -1/4σ^2/(σ^2 + iτ).
From the properties of Hermite polynomials, it follows that h_n, l(x) obeys the double recurrence relation
h_n+1, l(x) = 2c/n + 1[ x h_n, l+1(x) - (2n + l + 1)h_n, l(x) ],
h_n+1, l+1(x) = 2x h_n+1, l(x) - 2c(2 + l/(n+1))h_n, l+1(x),
that makes the calculation of (<ref>) efficient and accurate, starting from the initial values h_0, l(x) = H_l(x)exp(-x^2) and h_0, l+1(x) = H_l+1(x)exp(-x^2).
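A direct transcription of this double recurrence, with helper names of our own, evaluates the sum in Eq. (<ref>) in O(M) operations per point; the short loop at the end prints the decay of the coefficients of Σ̂^+k away from the diagonal:

```python
import numpy as np

def hermite_times_gauss(n, u):
    """H_n(u) * exp(-u^2) via the standard three-term recurrence."""
    g = np.exp(-u**2)
    h0, h1 = g, 2 * u * g
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2 * u * h1 - 2 * k * h0
    return h1

def hdaf_sum(x, l, M, sigma, tau=0.0):
    """sum_{n=0}^{M/2} h_{n,l} at u = x / sqrt(2(sigma^2 + i tau))."""
    st2 = sigma**2 + 1j * tau
    u = np.asarray(x, dtype=complex) / np.sqrt(2 * st2)
    c = -sigma**2 / (4 * st2)
    h_l = hermite_times_gauss(l, u)          # h_{0,l}
    h_l1 = hermite_times_gauss(l + 1, u)     # h_{0,l+1}
    total = h_l.copy()
    for n in range(M // 2):
        h_l = 2 * c / (n + 1) * (u * h_l1 - (2 * n + l + 1) * h_l)
        h_l1 = 2 * u * h_l - 2 * c * (2 + l / (n + 1)) * h_l1
        total = total + h_l
    return total

# The coefficient of Sigma^{+k} in K^{(l)}_tau is d_l * hdaf_sum(k*dx, ...):
dx, M, l, sigma = 0.05, 40, 2, 0.15
d_l = (-1)**l * dx / ((np.sqrt(2) * sigma)**(l + 1) * np.sqrt(np.pi))
for k in (0, 10, 20, 40, 60):
    print(k, abs(d_l * hdaf_sum(k * dx, l, M, sigma)))
```

The decay sets in around the classical turning point of the highest Hermite function, consistent with the cutoff W estimated below.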
§.§.§ Effective summation bounds.
It was previously mentioned that equations (<ref>), (<ref>), (<ref>) and (<ref>) formally sum over all the grid points of x. In practice, however, the fast decay of the HDAF narrows the sum to a small subset of points around the origin. This subset can be further restricted since the filters are either symmetric or antisymmetric.
Only the highest power on the argument of δ_M^(l)(x; σ, τ) will contribute significantly to the value of the HDAF for x ≫ 0.
Let W be the smallest positive integer such that the coefficients contribute at most a predefined error tolerance ε,
|Δ x δ_M^(l)(WΔ x; σ, τ)| ≤ε.
This integer can be estimated tightly by replacing the sum in (<ref>) with the highest power term on WΔ x from the polynomial on h_M/2, l(WΔ x / √(2(σ^2 + iτ))), but it leads to a transcendental equation for W. Instead, we approximate the sum by its last term as a whole and use the following upper bound for the Hermite polynomials with complex argument <cit.>,
| H_n(z) | ≤√(2)^n√(n!)exp(√(2n)|z|), z∈ℂ, n∈ℕ.
While this bound tends to overestimate the polynomial, the Gaussian envelope will dominate fast enough for this not to become a significant problem. Then, we find W by setting
|Δ xδ_M^(l)| ≲Δ x√((M+l)!)/√(2π)√(|σ^2 + iτ|)^l+1(M/2)!|σ^2/2 (σ^2 + iτ)|^M/2
×exp( WΔ x√(M + l/|σ^2 + iτ|))
×exp(- W^2Δ x^2/2(σ^2 + τ^2/σ^2))
= ε,
which reduces to the quadratic equation,
W^2Δ x^2/2(σ^2 + τ^2/σ^2) - WΔ x√(M + l/|σ^2 + iτ|) + lnε/η = 0,
with
η = Δ x√((M+l)!)/√(2π)√(|σ^2 + iτ|)^l+1(M/2)!|σ^2/2 (σ^2 + iτ)|^M/2.
The HDAF MPOs (<ref>) are obtained by summing over indices -W to W, where W is the closest integer from above to the solution of (<ref>). This sum contains 2W + 1 weighted displacement operators Σ̂^± k, where only W + 1 coefficients must be explicitly computed due to the symmetry δ_M^(l)(-x) = (-1)^lδ_M^(l)(x). Despite the number of summands, the resulting MPO will, in practice, be relatively simple, with a small bond dimension, for a reasonable choice of the HDAF parameters.
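The quadratic bound translates into a few lines of code; the function below, ours, returns the half-bandwidth W in grid points for given HDAF parameters and tolerance:

```python
import numpy as np
from math import factorial, ceil, log

def hdaf_cutoff(dx, M, l, sigma, tau=0.0, eps=1e-12):
    """Smallest W with the bound |dx * delta_M^(l)(W dx; sigma, tau)| <= eps."""
    s2 = abs(sigma**2 + 1j * tau)
    eta = (dx * np.sqrt(factorial(M + l))
           / (np.sqrt(2 * np.pi) * s2**((l + 1) / 2) * factorial(M // 2))
           * (sigma**2 / (2 * s2))**(M / 2))
    if eta <= eps:
        return 0
    w2 = sigma**2 + tau**2 / sigma**2        # width of the Gaussian envelope
    a = dx**2 / (2 * w2)
    b = -dx * np.sqrt((M + l) / s2)
    c = log(eps / eta)
    return ceil((-b + np.sqrt(b**2 - 4 * a * c)) / (2 * a))

print(hdaf_cutoff(dx=0.05, M=40, l=2, sigma=0.15))  # half-width in points (~50)
```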
§ TIME EVOLUTION ALGORITHMS
Our goal is to solve the time-dependent Schrödinger equation (<ref>) with H=D(-∂_x^2)+V(x). The formal solution for this problem can be expressed as the repeated action of a possibly time-dependent unitary operator U(t) on an initial state ψ(x,t=0),
ψ(x,t)=U(t)ψ(x,0)=e^-iHtψ(x,0).
In the particular framework of problems we are interested in, the state ψ(x,t) will be encoded using MPS/QTT, and we will use MPOs and finite-precision MPS algebra tools to approximate the unitary operator U(t) for brief time periods. More explicitly, the studies below will either use an MPO structure to encode the Hamiltonian H or create an MPO that directly approximates U(t). In both cases, the representations based on QTT/MPS provide access to exponentially dense grids with 2^n points in space, which may prove advantageous compared with vector representations.
To address the time evolution problem, the PDE operators described in Section <ref>
require global evolution schemes independent of the locality of interactions. MPS algorithms present many suitable alternatives, such as the time-dependent variational principle (TDVP) <cit.>, and
Taylor, Padé, and Arnoldi approximations of the evolution operator <cit.>. This section presents a selection of time-evolution methods with an MPO-MPS implementation: explicit (Euler, improved Euler, and fourth-order Runge-Kutta) and implicit (Crank-Nicolson) Runge-Kutta methods, the restarted Arnoldi iteration, and the split-step method. The Runge-Kutta and Arnoldi methods admit both a finite difference and an HDAF approximation of the differential operator. Additionally, the HDAF approximation of the free propagator enables the split-step method.
§.§ Runge-Kutta methods
Runge-Kutta methods approximate the time evolution by a Taylor expansion of the state with a local error, i.e., one-step error, that scales algebraically with the expansion order m as O(Δ t^m+1). Let us describe some of the most representative variations.
1. Euler method. The simplest, first-order method reads
ψ_0 = ψ(x,t_0),
ψ_k+1 = ψ_k - iΔ t H ψ_k, k=0,1,…,N-1.
2. Improved Euler or Heun method. This method improves on the Euler method with a second-order error given by
ψ_k+1 = ψ_k - i Δ t/2[v_1 + H (ψ_k-iΔ t v_1)],
with v_1 = H ψ_k.
3. Fourth-order Runge-Kutta method. Finally, the well-known fourth-order scheme
ψ_k+1 =ψ_k + iΔ t/6(v_1+2v_2+2v_3+v_4),
v_1 = - H ψ_k,
v_2 = - H(ψ_k+iΔ t/2v_1),
v_3 = - H(ψ_k+iΔ t/2v_2),
v_4 = - H(ψ_k+iΔ t v_3).
This is one of the most commonly used methods for solving PDEs due to its balance of accuracy, stability, and simplicity; a one-step sketch of this and the following method is given after this list.
4. Crank-Nicolson method. Implicit methods can increase numerical stability. The Crank-Nicolson algorithm is a second-order implicit method based on the trapezoidal rule that combines the Euler method and its backward version evaluated on the k and k+1 iterations, respectively. Thus, the state at the k+1 iteration is approximated as
(𝕀+iΔ t/2H)ψ_k+1=(𝕀-iΔ t/2H)ψ_k.
In a matrix-vector implementation, the resulting system of equations may be solved by direct matrix inversion. Other approaches, such as conjugate gradient descent, can be extended to an MPO-MPS framework.
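As an illustration of both the explicit and the implicit schemes, the following Python sketches implement one RK4 step in the sign convention above and one matrix-free Crank-Nicolson step; apply_H stands in for the MPO-MPS contraction, simplify for the finite-precision truncation, and conjugate gradient on the normal equations for the extended gradient-based solver (all interfaces are our assumptions):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def rk4_step(psi, apply_H, dt, simplify=lambda s: s):
    """One fourth-order Runge-Kutta step for i d(psi)/dt = H psi."""
    v1 = -apply_H(psi)
    v2 = -apply_H(simplify(psi + 0.5j * dt * v1))
    v3 = -apply_H(simplify(psi + 0.5j * dt * v2))
    v4 = -apply_H(simplify(psi + 1j * dt * v3))
    return simplify(psi + (1j * dt / 6) * (v1 + 2 * v2 + 2 * v3 + v4))

def crank_nicolson_step(psi, apply_H, dt):
    """One Crank-Nicolson step: solve (I + i dt/2 H) psi' = (I - i dt/2 H) psi
    via conjugate gradient on the normal equations (H assumed Hermitian)."""
    n = psi.size
    A = lambda v: v + 0.5j * dt * apply_H(v)       # (I + i dt/2 H)
    Ah = lambda v: v - 0.5j * dt * apply_H(v)      # its adjoint
    rhs = Ah(psi)                                  # (I - i dt/2 H) psi
    AhA = LinearOperator((n, n), matvec=lambda v: Ah(A(v)), dtype=complex)
    sol, info = cg(AhA, Ah(rhs), x0=psi)           # A^H A is Hermitian PD
    assert info == 0
    return sol
```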
§.§ Restarted Arnoldi iteration
Another alternative is to use Krylov subspace methods; more concretely, the restarted Arnoldi iteration adapted to the time evolution problem. These methods rely on the Krylov basis 𝒦_L = span{|ψ_k⟩, H|ψ_k⟩,…,H^L-1|ψ_k⟩} to construct an approximation of the evolution.
The restarted Arnoldi iteration constructs a Krylov basis { v_i}_i=1,…,n_v of n_v elements and computes the projected matrices A and N, whose elements are ⟨ v_i|H| v_j⟩ and ⟨ v_i| v_j⟩, respectively, to approximate the exact exponential evolution as
ψ_k+1=e^-iΔ t N^-1Aψ_k.
The error in approximating the exponential function is controlled by the number of Krylov vectors, scaling as O(Δ t^n_v). This means that even a small number of vectors (n_v=5,10) can provide a highly accurate approximation. As a result, the cost of matrix inversion and exponentiation decreases significantly compared to the exact exponentiation of the H operator, since the dimensions of A and N can remain constant with the system size, thus avoiding exponential scaling.
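A dense-vector sketch of one such step (the Gram-Schmidt orthogonalization and the interface are our choices; the MPO-MPS version replaces the vectors with MPS):

```python
import numpy as np
from scipy.linalg import expm, solve

def arnoldi_step(psi, apply_H, dt, n_v=5):
    """One restarted-Arnoldi step: build an n_v Krylov basis, project H
    onto it, and exponentiate in the small subspace."""
    V = [psi / np.linalg.norm(psi)]
    for _ in range(n_v - 1):
        w = apply_H(V[-1])
        for v in V:                      # Gram-Schmidt orthogonalization
            w = w - np.vdot(v, w) * v
        V.append(w / np.linalg.norm(w))
    V = np.array(V)                      # rows are the Krylov vectors
    HV = np.array([apply_H(v) for v in V])
    A = V.conj() @ HV.T                  # A_ij = <v_i|H|v_j>
    N = V.conj() @ V.T                   # N_ij = <v_i|v_j>
    c0 = V.conj() @ psi                  # coordinates of psi in the basis
    c = expm(-1j * dt * solve(N, A)) @ c0
    return V.T @ c
```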
§.§ Split-step method
Split-step methods are based on an approximate decomposition of the Hamiltonian exponential as a product of exponentials that can be efficiently computed.
The first-order method relies on the Lie-Trotter product formula to approximate the evolution operator as
U(Δ t) = e^-i Δ t D(-∂_x^2) e^-iΔ t V(x) + O(Δ t^2).
Higher order expansions, such as the Suzuki-Trotter formulas <cit.>, enable a better approximation by decreasing the error scaling with the time step. A common alternative is the second-order approximation
e^-i Δ t (D(-∂_x^2)+V(x)) = e^-iΔ t V(x)/2 e^-i Δ t D(-∂_x^2)
× e^-i Δ t V(x)/2 + O(Δ t^3).
This decomposition generates the Störmer-Verlet integration scheme, the most common and one of the simplest members of the class of symplectic integrators <cit.>. These integrators are designed to preserve the system's energy and are especially well-fitted for conservative long-time evolution problems. While most implementations rely on second-order accuracy in Δ t, higher-order schemes can be constructed at the expense of introducing a larger number of exponential operators <cit.>.
Usually, the split-step method (<ref>) relies on two Fourier transforms to compute the action of the free propagator exp(-iΔ t D(-∂_x^2)), but the HDAF formalism permits the efficient alternative of applying the free-propagator approximation (<ref>) directly in the coordinate representation.
The potential propagator exp(-iΔ t/2V(x)), diagonal in the coordinate basis, is approximated using TT-cross interpolation as implemented in Ref. <cit.>.
While the propagator for the harmonic potential can be cast analytically as an MPO, other potentials such as Eq. (<ref>) do not have an exact representation and must be computed numerically.
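A sketch of the second-order step, with the free propagator as an injected operator: either the HDAF MPO approximation or, in the vector case, two FFTs (units with D(-∂_x^2) = -∂_x^2 are assumed):

```python
import numpy as np

def strang_step(psi, V_diag, apply_free, dt):
    """Second-order split-step: half potential kick, free propagation,
    half potential kick. `apply_free` stands in for the HDAF MPO
    approximation of exp(-i dt D(-d^2/dx^2)) or for the FFT pair below."""
    half_kick = np.exp(-0.5j * dt * V_diag)
    return half_kick * apply_free(half_kick * psi)

def fft_free_propagator(dt, n, dx):
    """Vector-space stand-in for the free propagator using two FFTs."""
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)        # angular frequencies
    phase = np.exp(-1j * dt * k**2)
    return lambda psi: np.fft.ifft(phase * np.fft.fft(psi))
```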
§.§ One-step study
All methods described before must be iteratively applied over small intervals Δt to compose the whole evolution of a quantum state ψ(x,t). A study and comparison of all methods over one such integration step is a good proxy for understanding the performance and accuracy of the algorithms over longer simulations.
The benchmark problem in this study will be the evolution of a quenched state under the Hamiltonian (<ref>). This analytically solvable problem can be used to estimate the errors in the wavefunction for different algorithms, grids, and time steps. Our simulation studies the expansion of a quantum state as we reduce the trapping frequency of the harmonic potential by a factor ω_H/ω_0=0.01. This leads to a 100-fold quantum state expansion, increasing its standard deviation from σ_0 to σ_max=100σ_0. The simulation domain in position space is designed to capture the wavepacket at its maximum expansion. This means we will represent functions over x ∈ [-L/2, L/2) with L = 16σ_max. Since the initial state is derived from a tightly confined potential, the initial wavefunction is narrowly concentrated around x=0, which sets a lower bound for the grid discretization and number of qubits to accurately represent the initial and final state. Note that loading a function that is mostly zero outside a narrow interval is a tricky task for the MPS, requiring us to pad the function with zeros to compensate for the possible sampling or representation errors of TT-Cross or Chebyshev methods on the tails of the exponential.
The accuracy and performance of the algorithms are gauged using three figures of merit: (i) the function norm-2 difference
ε =√(∑_i |ψ(x_i,Δ t)-ψ̃(x_i,Δ t)|^2Δ x),
measures the accuracy of the methods, where ψ(x_i,Δ t) is the analytic solution for t=Δ t and ψ̃(x_i,Δ t) is the one-step state produced by the method; (ii) the run time and (iii) the maximum bond dimension χ_max determine their cost.
The one-step study must treat the time and spatial discretizations separately to independently assess the numerical integration and the PDE representation methods. Regarding spatial discretization, Section <ref> already demonstrated the exponential advantage of HDAF methods compared to finite differences in approximating the derivatives. With this in mind, the first study uses a grid with n=18 qubits (262144 points) to explore the different integration techniques. The finite difference approximation uses filter nine in Ref. <cit.> to enhance noise suppression, and we fix a value M=40 for the HDAF approximation. Both techniques use periodic boundary conditions.
Figure <ref> shows the error ε (<ref>) and the run-time scaling with Δ t for all methods, using the finite difference and the HDAF derivative approximations. With the HDAF approximation (Figure <ref>(b)), each time evolution method saturates its theoretical scaling, achieving numerical precision limited only by the MPS truncation. However, the truncation error of the finite difference approximation limits the accuracy of the evolution in Figure <ref>(a), leading to an error above the plateau of the HDAF implementation. Appendix <ref> shows the exact numerical scaling of ε with Δ t for these numerical simulations.
As shown in Figures <ref>(c)-(d), the implementation cost of HDAF and finite-difference algorithms is similar. The explanation for this is that the cost is dominated by the bond dimensions of the MPS states, which are similar for both algorithms. Given this and the exponentially improved accuracy of HDAF operators, we will, from now on, focus on the HDAF spectral methods and abandon the finite difference approximations entirely.
All evolution algorithms involve a trade-off between accuracy (i.e., the approximation order) and the cost of the implementation, which Figures <ref>(b)-(d) accurately describe. In the HDAF implementation, all methods converge with the number of qubits in terms of the error ε. The methods show an algebraic dependence of the error ε on the step Δ t as predicted by the theory, except for small time steps, where the numerical accuracy of the MPS implementation becomes the limit. The Arnoldi n_v=10 method is the best-performing one in terms of accuracy, since its higher order enables the largest step sizes. The Runge-Kutta, Arnoldi n_v=5, and split-step methods reach similar accuracies over a considerable Δ t range. The lower order of the Euler method and the conjugate gradient implementation of the Crank-Nicolson method keep their errors above the intrinsic MPS truncation error.
To identify a method with a suitable cost-accuracy trade-off, it is essential to analyze the error ε in conjunction with the run time (Figure <ref>(d)). Despite the low error of the Arnoldi n_v=10 method, its run time is two orders of magnitude slower than the split-step method. Consequently, even with smaller time steps Δ t, the split-step method demonstrates a better balance of cost and accuracy. Additionally, practical implementations often do not require such high accuracy, allowing for larger time steps. Thus, the split-step method with the HDAF approximation of the propagator is the optimal choice for studying the expansion problem.
Let us compare the performance of the MPS methods with their vector implementation, using split-step methods and the state-of-the-art fast Fourier transform (FFT). Figure <ref> analyzes the scaling of the split-step MPS and vector-based implementations with the discretization size. The MPS method examines various values of the truncation tolerance for the SVD and the simplifications of the finite-precision algebra, measured as the norm-2 difference ‖ψ-ϕ‖^2 between the original state |ψ⟩ with bond dimension χ_ψ and its projection |ϕ⟩ onto the subspace of MPS with bond dimension χ_ϕ<χ_ψ.
Figure <ref>(a) demonstrates that the MPS tolerance dominates the split-step error. Achieving numerical precision comparable to the vector implementation requires tolerances of order O(10^-28) or smaller. This error scaling with the number of qubits demonstrates that a discretization with Δ x ≈ 10^-1 suffices for HDAF to converge, well beyond the finite difference approach. Regarding the run time (Figure <ref>(b)), the FFT is more efficient than the vector-based HDAF, and MPS performs asymptotically better than both vector approaches, whose run times scale exponentially with the number of qubits due to the exponential increase in the number of points. The run time of the MPS method is similar regardless of the tolerance, since all resulting states reach a bond dimension of a similar order. Since tolerances below 10^-28 overestimate the bond dimension needed, we choose this value for the SVD truncation in previous and future simulations, while keeping a tolerance on the order of the numerical precision for MPS simplification. Finally, Figure <ref> shows that the MPS error ε scaling with the time step Δ t behaves similarly to the vector-based implementation for tolerances smaller than 10^-28. As the tolerance increases, it limits the accuracy for smaller time steps, for which the MPS truncation error dominates over the split-step truncation error. As the time step increases, the error associated with the method allows for larger tolerances.
§ QUANTUM QUENCH EVOLUTION
After discussing the MPS-based algorithms for solving time-dependent PDEs, this section demonstrates the utility of the methods in a physical application. The problem under study has been presented in Section <ref> and consists of the expansion of a quantum particle in a broad potential, which may be harmonic, as in Section <ref>, or present a double-well structure (Section <ref>). The study focuses on the algorithms that show the best balance between performance and accuracy, namely the split-step methods implemented with a fast Fourier transform in the vector case and with HDAF in the MPS/QTT simulations.
§.§ Harmonic expansion
The first set of simulations addresses a problem similar to Section <ref>, studying the expansion of a particle in a harmonic potential (<ref>) that is 100 times weaker than the trap that confines the initial wavepacket. The simulation results in a 100-fold expansion of the wavefunction, which is properly captured with a discretization using 2^20 = 1048576 points or n=20 qubits for x∈[-L/2, L/2) with L=16σ_max. The expansion starts with the state of the original harmonic oscillator with frequency ω_0=1, which is a real-valued Gaussian function. The potential is instantaneously weakened, relaxing the trapping frequency to ω_H=0.01. This accelerates the wavepacket, producing an expansion that lasts until t_f=0.5π /ω_H, at which the Gaussian solution (<ref>) reaches its maximum width. We consider three values for the time step, Δ t = 0.01,0.1,1, and a final time t_f=158, so that it is commensurate with all time steps.
Figures <ref>(a)-(b) present the scaling of the error ε (<ref>) with the evolution time t for the MPS HDAF and vector FFT implementations of the split-step method, respectively. Note how the errors follow algebraic laws with very similar coefficients (see Appendix <ref>), which we attribute to the Störmer-Verlet integration scheme used: the error growth in time is consistent with the linear accumulation expected from a symplectic integration algorithm on a periodic Hamiltonian system <cit.>, as opposed to the quadratic law expected from general non-symplectic integrators such as Runge-Kutta in the same scenario <cit.>. The saturation of the global error can be attributed to symplectic integrators keeping the energy errors bounded at all times <cit.>, which also implies a bound on phase-space errors for conservative systems with periodic orbits. These properties make the split-step method very suitable for long-time simulations.
The run time scaling (Figures <ref>(c)-(d)) is close to linear for the FFT algorithms, as the problem size fixes the cost of each step, and the total run time is the sum of individual evolution steps. In contrast, the run time of the MPS simulation is slightly above linear, a fact that can be explained by the growth of the bond dimension as the wavefunction expands, leading to an increase in memory size and also in the cost of various MPS and MPO-MPS operations.
The run time of the MPS algorithms depends on the state's bond dimension at each time. Figure <ref>(b) shows the maximum bond dimension χ_max for each time step. The bond dimension of the numerical solution behaves in time much like that of the exact evolution: it increases with the absolute value of the phase β(t) of the analytic solution (<ref>), with the lowest bond dimensions associated with the initial and final states, which are real Gaussians. This behavior appears due to the chirping of the wavefunction, which is inherent to the physical setting and does not depend on the numerical method used, up to some precision allowance.
However, the bond dimension of the exact solution acted mainly as an upper bound on the bond dimension of the evolution we found, suggesting that the errors induced by our numerical methods push the solution not in a random direction but towards a more efficient MPS representation. This is consistent with the HDAF theory, where, before discretization, the approximations arise solely from the attenuation of high frequencies. The HDAF operators thus seem well-suited to the MPS/QTT framework, relying on approximations that are a good fit for its formalism.
Figure <ref>(a) shows the pointwise error of the maximum width state for the particle expansion, |ψ(x,t)-ψ̃(x,t)|, where ψ̃(x,t) is the solution approximated by the numerical methods and ψ(x,t) is the analytic solution (<ref>). We observe that both implementations of split-step have similar error shapes that differ on the extremes of the interval for the larger step size, possibly due to errors associated with the MPS representation.
Let us focus on a concrete case to study the evolution of the wavepacket. Figure <ref> depicts the evolution computed using the MPS HDAF split-step method for Δ t = 0.1. As predicted by the analytic solution (<ref>), the harmonic potential (Figure <ref>(a)) induces an expansion of the particle, which is depicted in Figure <ref>(b).
§.§ Double well potential
The calibration of simulation conditions from Section <ref> allows us to make informed decisions on the split-step algorithm, spatial discretization, time steps, and truncation errors. Let us now use this information to discuss a more interesting problem: the expansion of a nanoparticle in an anharmonic double-well potential. This problem is equivalent to a “double-slit” experiment for the particle, which will ideally spread into a coherent superposition of both halves of the trapping potential, and it constitutes an application similar to other problems studied in levitodynamics <cit.>. For this particular simulation, the double-well potential is an open harmonic trap divided by a small Gaussian perturbation (<ref>), using u=1 and σ=1, and a trapping frequency that is once more 100 times smaller than that of the initial particle trap. The potential is depicted in Figure <ref>(a). As in the previous case, the harmonic term has a frequency ω_H<ω_0, which weakens the confinement and expands the particle. The Gaussian term is repulsive since u>0, separating the larger potential into two wells, with a barrier around x=0.
For this simulation, the chosen final expansion time is t_f=1000, which surpasses one period of the harmonic quantum quench solution T=π/ω_H. This longer time is designed to explore multiple cycles of expansion and contraction of the particle's wavefunction and to reveal collapse and revival dynamics. Figure <ref> shows the simulation results, both from the perspective of the particle's dynamics and of the wavefunction's complexity. During the evolution, the wavefunction's density |ψ̃(x,t)|^2 (Figure <ref>(b)) evolves according to the interplay of both terms in the potential (<ref>). The Gaussian term induces a separation in the particle's probability density, while the harmonic term preserves the confinement. The latter determines the period of the evolution, where the time of maximum harmonic expansion t=0.5π/ω_H≈ 157.1 coincides with the maximum spread of the particle. The new term modifies the behavior of the harmonic potential (<ref>), as depicted in Figure <ref>(c). As expected, the harmonic potential induces a cyclic expansion of the wavepacket, and the added Gaussian term leads to a barrier in the potential that divides the probability density into two localization peaks traveling in opposite directions.
The Gaussian barrier also modifies the behavior of the state's bond dimension. It is no longer cyclic, as in the harmonic case (Figure <ref>(d)), but instead appears to saturate, leading to a slower-than-linear growth of the run time as the system evolves.
§ CONCLUSIONS
This work has introduced an HDAF encoding of differential operators for PDEs in an MPS/QTT framework. This encoding has shown exponential accuracy and low resource scaling.
The MPS HDAF encoding enables the design of quantum-inspired time evolution algorithms to solve time-dependent PDEs: explicit and implicit Runge-Kutta methods, restarted Arnoldi iteration, and a split-step method. In particular, the split-step method benefits from the approximate representation of the free propagator unitary operator, which standard finite difference schemes cannot efficiently approximate.
The time evolution methods combined with HDAF outperform their finite difference implementations in accuracy while maintaining a similar cost. Additionally, the split-step method shows the best trade-off between accuracy and cost. The HDAF time evolution algorithms are also competitive with state-of-the-art vector representations, enabling exponentially efficient encodings of functions with moderate overheads in the simulation.
The expansion of a particle in a broad potential acts as a benchmark for the methods. This poses a challenging problem since it defies the MPS representation due to the appearance of a chirp and rapid oscillations in the phase. Despite this chirp, the MPO-MPS algorithm produces accurate results with moderate bond dimensions and adequate run time.
Note that while the MPS and FFT speeds are comparable, only the MPS algorithm can scale up these grid densities to more dimensions. We expect to further develop and optimize these routines for higher-dimensional problems, accelerating them through C/C++ backends and other low-level optimizations to improve our Python programs' run time prefactors.
The present implementation is based on the SElf-Explaining Matrix Product State (SeeMPS) library for Python [<https://github.com/juanjosegarciaripoll/seemps2>].
§ ACKNOWLEDGMENTS
The authors would like to thank Juan José Rodríguez-Aldavero for his help with implementing the TT-cross interpolation. This work has been supported by Spanish Projects No. PID2021-127968NB-I00 and No. PDC2022-133486-I00, funded by MCIN/AEI/10.13039/501100011033 and by the European Union “NextGenerationEU”/PRTR”1. PGM acknowledges the funding by “FSE invierte en tu futuro” through an FPU Grant FPU19/03590
and by MCIN/AEI/10.13039/501100011033. JJGR and PGM acknowledge support
from CSIC Interdisciplinary Thematic Platform (PTI) Quantum Technologies (PTIQTEP+).
JG was supported by the Chilean National Agency for Research and Development (ANID-Chile), program “Doctorado Nacional 2020”, scholarship No. 21202616.
The authors also gratefully
acknowledge the Scientific Computing Area (AIC), SGAI-CSIC, for their assistance
while using the DRAGO Supercomputer to perform the simulations.
§ AUTHOR CONTRIBUTION STATEMENT
JG developed the extension of HDAF to the MPS formalism, its theoretical and numerical study, and the code implementation. PGM introduced the HDAF operators in the time evolution schemes and conducted their numerical characterization in the one-step and long-time simulations. JJGR conceptualized this work and its research goals. JJGR and LT supervised the research, providing solutions to challenges encountered throughout the process. JG and PGM wrote the original draft, and JJGR reviewed and edited the manuscript.
§ ONE-STEP Ε SCALING WITH Δ T
Figures <ref>(a)-(b) show a fit of the error ε with Δ t for the methods in Section <ref>, for the finite difference and HDAF approximations of the derivative, respectively. Tables <ref> and <ref> contain the concrete numerical data of the fit ε=CΔ t ^m. We use a piecewise linear fit and show the data for larger Δ t, which are not limited by the MPS accuracy.
§ HARMONIC QUANTUM QUENCH EVOLUTION SCALING
Figure <ref> shows the error ε scaling and run time of the harmonic quantum quench evolution with time. Tables <ref>-<ref> contain the concrete numerical data of the fits ε = Ct^m and T = Ct^m, where T is the run time.
|
http://arxiv.org/abs/2409.03439v1 | 20240905114208 | KiloBot: A Programming Language for Deploying Perception-Guided Industrial Manipulators at Scale | ["Wei Gao", "Jingqiang Wang", "Xinv Zhu", "Jun Zhong", "Yue Shen", "Youshuang Ding"] | cs.RO | ["cs.RO", "cs.AI", "cs.PL"] |
KiloBot: A Programming Language for Deploying Perception-Guided Industrial Manipulators at Scale
Wei Gao, Jingqiang Wang, Xinv Zhu, Jun Zhong, Yue Shen, Youshuang Ding
September 5, 2024
=================================================================================================
§ ABSTRACT
We would like industrial robots to handle unstructured environments with cameras and perception pipelines. In contrast to traditional industrial robots that replay offline-crafted trajectories, online behavior planning is required for these perception-guided industrial applications.
Aside from perception and planning algorithms, deploying perception-guided manipulators also requires substantial effort in integration.
One approach is writing scripts in a traditional language (such as Python) to construct the planning problem and perform integration with other algorithmic modules & external devices. While scripting in Python is feasible for a handful of robots and applications, deploying perception-guided manipulation at scale (e.g., more than 10000 robot workstations in over 2000 customer sites) becomes intractable.
To resolve this challenge, we propose a Domain-Specific Language (DSL) for perception-guided manipulation applications. To scale up the deployment, our DSL provides: 1) an easily accessible interface to construct & solve a sub-class of Task and Motion Planning (TAMP) problems that are important in practical applications; and 2) a mechanism to implement flexible control flow to perform integration and address customized requirements of distinct industrial application.
Combined with an intuitive graphical programming frontend (Figure. <ref>), our DSL is mainly used by machine operators without coding experience in traditional programming languages. Within hours of training, operators are capable of orchestrating sophisticated manipulation behaviors with our DSL.
Extensive practical deployments demonstrate the efficacy of our method.
§ INTRODUCTION
The wide availability of RGBD cameras provides robots with powerful 3D sensing capabilities. As a result, robots equipped with these sensors are entering industrial production to handle unstructured environments.
As the task environment is not static, and manipulated objects in these applications are perceived from cameras, the robot behaviors must be planned online (instead of crafted offline). This type of behavior planning problem contains elements of discrete decision-making and continuous motion generation, and is denoted as Task and Motion Planning (TAMP). Extensive contributions <cit.> have been made regarding this topic and many open-source packages are available. Please refer to <cit.> for a detailed review.
Despite these excellent contributions, deploying perception-guided manipulators requires substantial effort in integration. This is the procedure of: 1) constructing the TAMP problem as the input to the planner; 2) putting different modules (e.g., perception) together; and 3) implementing control flows to address customized requirements of industrial applications. Writing integration scripts in Python is feasible for a handful of applications.
However, planning problems and control flows in these scripts are tightly coupled with field work such as hardware setup, tuning of algorithm parameters and movement targets, sensor calibration, and communication with external devices. Thus, the integration code among different applications is almost non-reusable. Consequently, close collaboration between a programmer and a field application specialist is required for deploying each application, which is time-consuming and expensive.
The “scripting" approach mentioned above in a traditional programming language has limited scalability. Thus, we propose a new Domain-Specific Language (DSL) to reduce the integration effort and scale up the deployment. Regarding the front end, we design an intuitive graphical programming interface mainly for field application engineers or machine operators without experience in traditional programming languages (Python/C++). This is inspired by many existing graphical programming languages <cit.> intended for education and entertainment that target people without coding experience. As shown in Figure. <ref>, users can orchestrate the robot behaviors by drag-and-drop programming to construct a control-flow graph (detailed in Sec. <ref>).
The backend of our DSL is responsible for running the TAMP algorithm and executing the user-defined control flow. We propose a novel interface between the DSL and the planning algorithm: users only need to craft a “skeleton” of the desired robot behavior, where the skeleton might contain a set of parameters (both discrete and continuous) not determined offline. Then, the planning algorithm generates these missing parameters during online execution. In this interface, users do not need to understand the TAMP algorithm or explicitly implement the planning problem description (in a modeling language like PDDL). Thus, the interface is user-friendly even for machine operators without coding experience.
The contributions of this paper are as follows: 1) we design a novel DSL for machine operators without coding experience to deploy perception-guided robot manipulation applications. 2) We propose a novel interface that implicitly integrates a customized TAMP
problem description and planning algorithm into our DSL. The interface is user-friendly, and the TAMP planner meets the performance requirement for deployment. 3) Inspired by pipelining in modern CPU, we introduce a “pre-planning” interpreter for our DSL that interleaves robot movement with planning (for future robot behaviors). This pre-planning mechanism significantly improves the manipulator's throughput (# of pick-place per minute). 4) We conduct extensive tests of our DSL during the deployment of more than 10000 robot workstations worldwide. Please visit https://www.mech-mind.com/https://www.mech-mind.com/ for more examples.
This paper is organized as follows: in Sec. <ref> we introduce related works. Sec. <ref> presents the preliminaries. Sec. <ref> presents the design of our DSL and the interface with TAMP planner. Sec. <ref> introduce the implementation of interpreter and the pre-planning mechanism. Sec. <ref> shows the results. Sec. <ref> concludes.
§ RELATED WORK
§.§ Robot Offline Programming Software
Robot offline programming <cit.> means generating robot programs in a virtual environment based on 3D CAD data (instead of pendant teaching). Offline programming software is typically equipped with advanced collision detection and motion planning algorithms to generate robot movement for various industrial applications, such as welding, coating, dispensing, and robot milling. Users can inspect the generated movement in the integrated simulator of the software. Once the movement is verified, it can be downloaded to the physical robot for online execution.
Offline programming software packages <cit.> have been extensively deployed in practice and accomplish many challenging manipulation tasks. However, the offline-generated robot movements cannot adapt to dynamic or unstructured working environments, where the manipulated objects must be perceived online. Our system is proposed to resolve this limitation using online perception and planning pipelines.
§.§ Integrated Task and Motion Planning
As described in <cit.>, task and motion planning (TAMP) is the problem of finding actions of a robot that moves itself and changes the state of the environment objects. TAMP contains elements of discrete task planning and continuous motion planning. Extensive contributions <cit.> have been made on strategies to solve TAMP problems. Many of these algorithms require an internal planner to solve the joint-space collision-free motion planning problem. The most effective methods are based on sampling <cit.> and trajectory optimization <cit.>.
Our work is built upon these excellent contributions regarding the formulation and planning algorithms of TAMP problems. Actually, one prominent feature of our DSL is to serve as an interface layer that converts user control-flow graph (with undetermined parameters) into a series of problem descriptions that can be solved by existing TAMP algorithm <cit.>.
§.§ Manipulation Pipelines
Researchers have created robot manipulation pipelines <cit.> with interesting capabilities. These methods typically integrated various perception and planning modules to achieve intelligent manipulation behaviors. Some pipelines accept inputs from other modality, such as language <cit.> or tactile sensors <cit.>.
Compared to these excellent works, our DSL is designed to address different challenges. The manipulation behavior programmed by our DSL is typically much less innovative than in these works. On the other hand, we would like to achieve deployment at scale by reducing the integration cost and addressing customized requirements in industrial applications that are diverse, application-specific, and tightly coupled with field work.
§ PRELIMINARY
§.§ Control Flow Graph
Our DSL, as well as several existing programming languages with graphical frontends, enables users to explicitly construct control flow graphs by drag-and-drop operations. In this sub-section, we present an overview of the control flow graph in this context and compare it with Python, a traditional interpreted programming language.
A control flow graph is a “node and edge” representation of the user program; an example is shown in Figure. <ref>. Each node in the control flow graph roughly corresponds to a statement in Python. The statement can perform read and/or write operations on the variable map (environment), which is a map from variable name string to variable value. A statement can also produce side effects, such as writing a file or sending a message. In the following text, we refer to nodes by their names, for example the CallService node; we omit the node qualifier when the context is clear.
A directed edge in a control flow graph specifies the execution order of different nodes, which roughly corresponds to the statement ordering in Python. Thus, edges in a control flow graph can be used to implement branch and loop structures, corresponding to if and while statements in Python. An example is shown in Figure <ref> (b), where the directed edge implements a loop.
Similar to Python, nodes in a control flow graph can be organized into routines (or functions). A routine is also a node graph that can be invoked by another routine. An example is shown in Figure <ref> (c). A routine has exactly one entry, which is the Start node. A routine has one or more exits, marked by Exit nodes. One particular routine, denoted as the main routine, is the global entry of the entire program. Except for the Start node, every node has exactly one in-port. Similarly, every node has one or more out-ports except for the Exit node.
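A minimal sketch of these structures in Python (class and attribute names are ours, not the actual backend):

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Tuple

@dataclass
class Node:
    """A backend node: reads/writes the global variable map and returns
    the label of the out-port edge to follow (interface assumed)."""
    name: str
    body: Callable[[dict], str] = lambda variables: "out"

@dataclass
class Routine:
    nodes: Dict[str, Node] = field(default_factory=dict)
    edges: Dict[Tuple[str, str], str] = field(default_factory=dict)  # (node, port) -> node

def run_routine(routine: Routine, variables: dict, entry: str = "Start"):
    """Walk the control flow graph from the Start node to an Exit node."""
    current = entry
    while not current.startswith("Exit"):
        port = routine.nodes[current].body(variables)
        current = routine.edges[(current, port)]
    return current   # which Exit was reached selects the caller's out-port
```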
§.§ Specialization for Pick-and-Place Manipulation Applications
Our DSL can be regarded as a control flow graph with interface for constructing and solving TAMP problems. This interface is discussed in Sec. <ref>. In this subsection, we present several design decisions not directly related to planning, which serves as the background for further discussion.
Frontend and backend:
Our implementation of the DSL is separated into the frontend and backend, and both of them are represented as control flow graphs. The frontend, illustrated in Figure. <ref>, is designed for user-friendliness with a lot of “language sugar”. The frontend control graph, as a serializable representation of the user program, can be converted into a backend control flow graph for execution. The discussion in the following text mainly focuses on the backend.
Variable mechanism: Table. <ref> summarizes several important variables in our DSL. In addition to these, users can define their own variables (e.g., by an assignment node) and make arbitrary mutations to them. The backend provides a FunctorVariableMutation node, which contains a pure C++ functor that takes the variable map as input and produces a list of mutations to that variable map. Nearly all frontend nodes that are unrelated to planning are converted into this node in the backend, such as the counter-increment node in Figure. <ref>. Moreover, advanced users who are capable of programming can implement their own functors and insert them into the control flow graph through this node, which is used to address very complex application requirements.
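A sketch of such a functor and its execution, written in Python rather than C++ for brevity (the mutation-list signature is our assumption):

```python
def increase_counter(counter_name):
    """Build a mutation functor for the counter-increment frontend node:
    variable map in, list of (name, new value) mutations out (assumed)."""
    def functor(variables):
        return [(counter_name, variables.get(counter_name, 0) + 1)]
    return functor

def apply_mutations(functor, variables):
    """Execution of the FunctorVariableMutation node: run the functor
    and commit its mutations to the global variable map."""
    for name, value in functor(variables):
        variables[name] = value

# usage
env = {}
apply_mutations(increase_counter("num_picked"), env)   # env == {"num_picked": 1}
```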
In our DSL, all variables are global. In particular, there are no local variables for a routine. The information exchange between the routine caller and callee is achieved by reading and/or writing of global variables. This design decision is made because our DSL does not aim at complex control flows. Sophisticated algorithms and operations are either embedded in the behavior planner or provided as pre-defined nodes for users to drag-and-drop.
External communication: Our DSL communicates with external algorithmic modules and devices through Remote Procedure Call (RPC). Our discussion would be restricted to synchronized RPC for simplicity, while asynchronous RPC is used by default in the backend for efficiency.
A CallService node can be used by programmers to invoke an RPC service. This node has the following behavior: 1) find the service from the registered ones by its service name; 2) pack and send the request message, which contains meta info (e.g., timestamp and message ID) and optionally a serializable variable identified by its name; and 3) wait for the response message synchronously (blockingly), and save the response message to the response variable.
For example, the entire perception stack is an RPC service in our pipeline. This includes invoking the camera to take an image, running a series of perception algorithms (object detection, pose estimation, occlusion detection, grasping pose generation), and sending the result back to the RPC caller. The perception service stores its response in a dedicated default variable. RPC is also used to communicate with other algorithmic modules (e.g., palletization pattern generation) and external devices (e.g., conveyors).
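A sketch of the synchronous CallService behavior (the registry, message format, and default names are our assumptions):

```python
import time
from uuid import uuid4

SERVICE_REGISTRY = {}   # service name -> blocking callable (assumed wiring)

def call_service(variables, service_name, request_variable=None,
                 response_variable="response"):
    """Pack a request with meta info plus an optional variable, block on
    the reply, and store it in the response variable."""
    request = {"meta": {"id": uuid4().hex, "stamp": time.time()},
               "payload": variables.get(request_variable)}
    variables[response_variable] = SERVICE_REGISTRY[service_name](request)
```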
§ INTERFACE WITH ROBOT BEHAVIOR PLANNING
A TAMP algorithm is integrated into our DSL to alleviate the users' burden of making discrete decisions and crafting robot trajectories. We propose the following interface between the language and the planner: some pre-defined nodes are used to specify a “skeleton” of desired robot behavior, and they are intentionally undetermined offline. The planner converts these skeleton nodes into executable nodes by providing them with a set of discrete and/or continuous parameters. For notational clearance, we use online-parameters for a given node to denote its parameters generated by behavior planning. This is in contrast to user-parameters of nodes mentioned in Sec. <ref>, which are explicitly provided by the users.
For example, the counter-increment node has one user-parameter: a string indicating the name of the counter variable to be increased. It does not need an online-parameter as it is not involved with planning. On the other hand, for movement nodes (detailed in Subsec. <ref>), the online-parameters are the planned joint-space trajectories and the safety certificate of the trajectories.
It is emphasized that online-parameters are not visible to users. They are part of the runtime data used to execute nodes that need planning. Thus, to make the DSL user-friendly, we only need to simplify the user-parameters of nodes.
In the following subsections, we describe this interface in detail. Subsec. <ref> gives an overview with an illustrative example. Subsec. <ref> and Subsec. <ref> present nodes for movement and pick-place behaviors, respectively. Subsec. <ref> describes the integration of planning into the control flow of our DSL.
§.§ Illustrative Example
In this subsection, we present an overview of our DSL using a schematic example, as shown in Figure. <ref>. Suppose the perception service provides a scene with two objects (A and B), each with two possible grasping poses (dashed lines), as shown in Figure. <ref> (a). The user program for a pick-place manipulation is shown in Figure. <ref> (b), with user-parameters annotated for each node.
The movement node (1) moves the robot from its initial configuration to a new configuration on top of the container. This node needs a target joint position as the user-parameter. It also requires a user-parameter to specify how the robot should reach its target (e.g., straight line or RRT-generated path). For a clear presentation, this user-parameter is omitted in Figure <ref>.
The MoveToPick node (2) intuitively has the following behavior: 1) move the robot to a configuration that can grasp an object; and 2) pick up the object by attaching it to the robot end-effector. In the simplest form, the only user-parameter for this node is the name of the perception service response variable. The planner automatically figures out which object to pick, what the optimal grasping pose is, and how to reach the robot picking configuration as the online-parameters. Thus, users only need to provide high-level supervision, while the detailed decision-making and trajectory crafting are handled by the planner. Additional user-parameters can be used to guide the decision-making, as detailed in Subsec <ref>.
Nodes (3) and (4) are both movement nodes that move the robot to new configurations. Node (3) applies a constraint between the end-effector poses before and after this node, and it is used to lift the object in this example. Its user-parameter is the relative transformation that defines the constraint. Node (4) moves the robot to a joint configuration such that the picked object is at the user-specified pose. In its simplest form, this node only needs the object pose as the user-parameter. Alternatively, users can provide a map from object type to pose; this node would then move the picked object to different poses according to its type.
The placement node (5) places the picked object(s) by detaching them from the robot end-effector and re-attaching them to the world. This node needs neither a user-parameter nor an online-parameter. However, this node is involved in planning because it changes the geometry attachment and affects the collision checking of future movement nodes, such as node (6) in this example.
In this example, nodes (1)-(6) appear independent from each other. However, the underlying behavior planning must consider many nodes jointly, as these nodes' online parameters (both discrete or continuous ones) are coupled. For example, the selection of objects and grasping pose in (2) would affect the collision checking and trajectory generation of movement nodes (3-4) and node (5). An inappropriate picking decision, without considering the subsequent transferring and placement of the picked object(s), might lead to unavoidable collision or kinematic infeasibility.
To address this issue, we propose to give users the authority to specify nodes that must be planned jointly. In particular, a special type of routine, named plan-routine, defines the scope of one behavior planning problem. The example in Figure. <ref> (b) is a plan-routine with a special marker node (0). With this formulation, the interpreter understands that the discrete decision for node (2) must be made jointly with nodes (1) and (3)-(6). In our practice, a plan-routine typically contains one iteration of a pick-place operation. As the plan-routine is the basic unit of behavior planning, it cannot contain arbitrary topology structures (e.g., no loops). For now, we assume the plan-routine is a simple sequence. This constraint is relaxed in Subsec. <ref>.
§.§ Nodes for Robot Movement
In this subsection, we describe nodes for robot movement. All movement nodes, for instance the ones in Figure. <ref>, are defined by two generic user-parameters: the target of the movement and the trajectory. Both user-parameters are used to generate robot trajectories during behavior planning, and they might induce various discrete decisions, as detailed below.
The target user-parameter, provided by the human operator, specifies the high-level movement target, which is eventually resolved into a fully determined joint target during planning. This high-level target might be provided in many forms, such as:
* A robot joint target (as in Figure. <ref>). No decision-making is required for this target.
* A pose target for the robot end-effector. For this target, the inverse kinematics is invoked, which generates several solutions in joint space (6-DoF robots typically have 8 solutions), as illustrated in Figure. <ref> (c). The planner should evaluate the feasibility of these solutions and select the best one according to some metric (e.g., minimum joint-space distance).
* A pose target for the picked object (as in Figure. <ref>). One object pose target might be transformed into multiple end-effector pose targets due to the symmetry of the picked object(s), which occurs frequently in industrial applications. An illustration is shown in Figure. <ref> (f). The planner should attempt these end-effector pose targets and select the best one.
* A (discrete or continuous) set of pose targets for picked object(s) or end-effector. The most prominent example is the palletization placement node, where the picked box(es) can be placed into multiple positions of a pallet. An illustration is shown in Figure. <ref> (e). The planner might need to consider various factors for this decision, such as the feasibility of future palletization movements.
* A target that depends on the intermediate output of the planner, for instance a relative movement whose target depends on the previous target, or a placement whose target depends on the selected box(es) for picking. Generally, the movement target can be a function of previous/future movement targets, object properties, the active tool, and the picking state. The planner should correctly resolve these dependencies during planning.
On the other hand, the trajectory user-parameter specifies how the robot should reach its target. The trajectory might be a simple straight line in joint/end-effector space, a selection from a trajectory library, or a complex trajectory generated by an advanced motion planner (e.g., RRT). This parameter also includes various user preferences on the trajectory, such as the collision option, the movement speed configuration, and the singularity detection option. Moreover, a sequence of targets and trajectory parameters can be received from RPC messages and decoded into a variable, which then serves as the user-parameter of a movement node. This enables the robot to execute movements from external commands.
All movement nodes have the same types of online-parameters: the planned robot joint-space trajectories and the safety certificate of the trajectories. Given the online-parameters, the execution of a movement node would: 1) send the planned joint-space trajectory to the robot service (which is an RPC service) for execution; and 2) update the robot state variable (Table. <ref>) to the final joint configuration of the generated trajectory.
§.§ Node for Object Picking
In this subsection, we formally describe the MoveToPick node introduced in Subsec. <ref>, which plays a critical role in our DSL. For robot picking, the robot needs to use various types of gripper tools (suction cup, parallel-jaw, etc.), move to an appropriate joint-space configuration, and pick up one or several objects. The perception pipeline produces the objects available for picking, the method (gripper tool index, picking pose w.r.t. objects, digital-out ports) to pick up each object, and other meta-info. After robot picking, the picked objects are attached to the robot end-effector; thus, the planner must ensure picked objects are not in collision during subsequent robot movements.
As mentioned in Subsec. <ref>, MoveToPick has only one major user-parameter: the name of the response variable, which identifies the perception service. Given this name, the perception message that contains the objects and grasping information can be found in the variable map (environment), as shown in Table <ref>. This node also needs a trajectory user-parameter; by default, an end-effector straight-line movement is used. Additionally, the user might specify a set of filters imposing additional requirements on the grasping candidates according to various factors, such as object type, picking pose, and the number of picked objects.
The behavior planning needs to select the grasping from a set of candidates. That includes selecting the object(s) to pick, and the end-effector pose for the selected object(s). An illustration is shown in Figure. <ref> (a) and (b). Due to the symmetry of the objects and the gripper tool, there can be tens or hundreds of possible grasping poses for each object instance. Combined with tens of objects, the planner might need to attempt thousands of grasping candidates.
The induced computation can be rather expensive, as the picking decision must be jointly made with subsequent movement nodes, as shown in Subsec. <ref>. After making the picking decision, the planner also generates the robot trajectory that reaches the grasping pose.
Given the picking decision and the reaching robot joint-space trajectories as the online-parameters, the execution of the MoveToPick node would: 1) remove the objects selected for picking from the perception result variable, and insert them into the picked-objects variable (after updating attachment and meta-info); 2) execute the movement to reach the picking configuration in the same way as movement nodes.
It is emphasized that the MoveToPick node does not involve the physical actions of robot picking behaviors. For example, picking up an object might require turning on the vacuum gripper or closing the parallel-jaw. To execute picking physically, nodes must be set up to send a control message to the gripper (which is an RPC service) or set a DigitalOut on the robot (if the gripper is connected to the robot). Thus, these physical nodes depend on the hardware and connection configurations. Typically, the node for turning on the suction cups of a vacuum gripper is connected before MoveToPick, while the node for closing the parallel-jaw is connected right after it.
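A sketch of the execution step of MoveToPick given its online-parameters (variable names and the object representation stand in for the ones of Table. <ref>):

```python
def execute_move_to_pick(variables, robot_service, picked_ids, trajectory):
    """Move planner-selected objects from the perception result into the
    picked-objects variable, then run the reaching trajectory."""
    scene = variables["vision_result"]["objects"]
    picked = [scene.pop(i) for i in sorted(picked_ids, reverse=True)]
    for obj in picked:
        obj["attached_to"] = "end_effector"      # update attachment meta-info
    variables.setdefault("picked_objects", []).extend(picked)
    robot_service(trajectory)                    # blocking RPC to the robot
    variables["robot_state"] = trajectory[-1]    # final joint configuration
```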
§.§ Structure of Plan Routines
In Subsec. <ref>, we discuss the plan-routine under the assumption that it is a simple sequence of nodes. In this sub-section, we relax this constraint by adding several types of branch nodes into the plan-routine, as illustrated in Figure. <ref>. These types of branch nodes can be converted into a set of node sequences during planning, thus the planning algorithm in Subsec. <ref> can be used to solve it. On the other hand, plan-routine cannot contain loops. This rule is enforced by static checking during the conversion from frontend to backend.
Exception branch: Some branch edges in a plan-routine are explicitly marked as exceptions by the user. Usually, these exceptions are abnormal behaviors due to unmodeled effects. These exception edges are simply ignored during planning, while they might take effect in execution. An illustration is provided in Figure. <ref> (a): a checking node is used to determine whether the objects have been successfully picked up (e.g., by force sensors on the gripper). If the picking is not successful, nodes for error recovery are executed.
Each plan-routine has a built-in PlanFailure exception. Nodes that invoke a plan-routine have an out-port corresponding to this exception. Users might address the failure of behavior planning by various methods. For example, in Subsec. <ref> we use a vibration generator to create a random disturbance to the objects in the container. Then, another image is taken and the planning is retried.
Branch determined before planning: Some branches can be determined during the construction of the planning problem. For example, in Figure. <ref> the branch selection at node (5) can be determined from the node (4). In general, determining the branch is a standard dataflow analysis problem, which is further simplified as plan-routines can not contain loops. A plan routine with this type of branch nodes can be converted into a node sequence, which is solved using the planning algorithm in Sec. <ref>.
Branch decided by the planner: As shown in Figure. <ref> (c), a planner-decided branch node (9) is used to request the planner to make a selection of gripper tools. This decision must be made jointly with the other nodes in the plan-routine, similar to the other discrete decisions in Figure. <ref>. During planning, a plan-routine with planner-decided branch nodes is expanded into a set of node sequences. Such nodes can be used to set up decision-making problems regarding arbitrary factors, such as object types, movement trajectories, and placement locations.
§ INTERPRETER IMPLEMENTATION
As mentioned in Sec. <ref>, our DSL introduces plan-routines into the control flow graph. These plan-routines contain intentionally undetermined nodes that require online-parameters, such as movement trajectories and/or picking decisions. As a result, the interpreter of our DSL is responsible for calling the planner for these plan-routines when they are invoked during execution. Aside from that, the interpreter of our DSL behaves the same as the ones in existing programming languages: executing each node one by one, updating the variable map (environment), and generating side effects. In this section, we present the implementation of the interpreter. Subsec. <ref> describes the planning algorithm. Subsec. <ref> presents a “pre-planning” mechanism to reduce the cycle-time of the manipulation pipeline.
§.§ Planning Algorithm
In this subsection, we present the algorithm that generates the online-parameters of the plan-routine. Our discussion is focused on the simple sequence, as branch structures presented in Subsec. <ref> can be converted into a set of sequences.
We use a specialization of the method in <cit.> as the planner. In particular, we formulate a hybrid state transition system following <cit.>. The state of this state transition system is a set of variables, such as the ones in Table. <ref>. In addition to them, other problem-specific variables can also be included in the state; for example, a dedicated variable maintains the state (palletization pattern, packed boxes, and next available slots) for palletization applications. The actions, which are converted from the nodes, define a set of constraints between the states before and after them. Using this formulation, the plan-routine becomes a set of “skeletons” described in <cit.>. Then, a series of conditional samplers, which implement basic primitives (e.g., inverse kinematics, grasping pose sampling, and motion planning), are composed into a constraint sampling network (Figure. 8 of <cit.>), which is used to generate online-parameters.
<cit.> also proposed algorithms that search for the skeleton (jointly with the online-parameters). As a result, many intelligent manipulation behaviors can emerge automatically, such as “moving away the surrounding obstacles before reaching the target”. However, searching for the skeleton can be expensive due to the large solution space.
In this work, we take a different trade-off with more emphasize on computational performance: the skeletons are provided by the human operators through the DSL.
This approach aims to maximize the inherent advantages of the human operators and the planner. We rely on the planner to operate over the domain in which it outperforms the human, such as accurate and fast numerical computation, while leaving tasks that require cognition, such as high-level supervision, to the human operator.
§.§ Pre-Planning Mechanism
The planning in Subsec. <ref> can be time-consuming due to the large solution space and expensive operations (e.g., collision detection). When the perceived scene and the plan-routine are complex, the robot might need to stop its movement and wait for the planning result. To alleviate this issue, we propose to interleave the planning for future nodes with the execution. This is fruitful because we can exploit the time spent waiting for RPC responses, which can be long for some services (e.g., the robot service and the perception service). For example, while executing the plan-routine for pick-place iteration 1, we perform planning (in a background thread) for iteration 2 or 3. Thus, when executing pick-place iteration 2, the online-parameters of its nodes are ready. This mechanism is referred to as “pre-planning” in the following text.
Consider the example in Figure. <ref>. For simplicity, we omit the routine structure and use the red dash-line block to imply nodes B and C are in a plan-routine. We assign a dynamic ID to each execution of a node. Nodes with annotated dynamic IDs are shown in Figure. <ref> (b). In the first loop iteration, the planner generates online-parameters for (B2, C3). Suppose nodes A1 and B2 have been executed, and we would like to perform planning for the plan-routine in the next loop iteration, namely nodes (B4, C5).
To perform planning for (B4, C5), we need the variable map “as if” node C3 had been executed. To achieve this, we implement an interface for nodes in our DSL. This interface tries to update the variable map without generating side effects, and reports failure (which stops the pre-planning) if it is impossible. For several types of nodes, this interface would:
CallService: Mark the response variable with a special flag. References to the response variable during subsequent pre-planning would report failure.
Movements: Update the variable without sending a request to the robot service.
MoveToPick: Update the corresponding variables.
FunctorVariableMutation: This node and nodes derived from it behave the same as during execution, if no referred variable is marked with the special flag.
ExceptionBranch: Select the first branch that is not marked as exception by the user, such as the “OK” branch in Figure. <ref> (a).
OtherBranch: Similar to ExceptionBranch.
Using this interface, the interpreter maintains another program counter and variable map for pre-planning. Then, the planning algorithm can be invoked (in a background thread) for future plan-routines using this variable map, and the planning results (online-parameters) can be used directly, without waiting, when these plan-routines are ready for execution. The pre-planning program counter and variable map are reset to the execution values if 1) a node or the planner refers to a variable marked with the special flag; or 2) the guess in an exception branch node is wrong. This is similar to the pipelining and misprediction recovery in modern CPUs.
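To make this concrete, the sketch below is our own illustration, not the production code; the class names, the PENDING flag, and the dry_run() method are hypothetical stand-ins for the unnamed interface above. It shows how an interpreter can advance a shadow program counter and variable map ahead of execution and launch planning in the background:

import threading

PENDING = "PENDING"          # stand-in for the special "not yet available" flag

class Node:
    def execute(self, env):
        raise NotImplementedError
    def dry_run(self, env):
        """Update env as if executed, without side effects; False if impossible."""
        return True

class CallService(Node):
    def __init__(self, var): self.var = var
    def execute(self, env): env[self.var] = "rpc response"   # blocking RPC
    def dry_run(self, env):
        env[self.var] = PENDING                              # later reads must fail
        return True

class PlanRoutine(Node):
    def __init__(self, skeleton): self.skeleton, self.params = skeleton, None
    def execute(self, env):
        if self.params is None:              # not pre-planned: must stop and wait
            self.params = tamp_plan(self.skeleton, env)
    def dry_run(self, env):
        return PENDING not in env.values()   # planner may not read PENDING values

def tamp_plan(skeleton, env):
    return {"trajectory": "..."}             # placeholder for the TAMP planner

def interpret(program):
    env = {}                                 # execution variable map
    pre_env, pre_pc = {}, 0                  # shadow map / shadow program counter
    for pc, node in enumerate(program):
        if pre_pc <= pc:                     # re-sync shadow state if it fell behind
            pre_env, pre_pc = dict(env), pc
        while pre_pc < len(program):         # pre-plan ahead of execution
            ahead, trial = program[pre_pc], dict(pre_env)
            if not ahead.dry_run(trial):
                break                        # e.g., a PENDING variable was needed
            pre_env = trial
            if isinstance(ahead, PlanRoutine) and ahead.params is None:
                threading.Thread(target=lambda n=ahead, e=dict(pre_env):
                                 setattr(n, "params", tamp_plan(n.skeleton, e))).start()
            pre_pc += 1
        node.execute(env)

As in the text, a failed dry-run simply halts pre-planning until execution catches up, mirroring misprediction recovery in pipelined CPUs.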
§ RESULTS
In this section, we first demonstrate a variety of industrially important applications that are implemented in our DSL, in Subsec. <ref>. These demonstrations are achieved on several different hardware configurations regarding robot platforms, gripper tools, RGBD sensors and external devices. Then, we show the effectiveness of the proposed pre-planning mechanism in Subsec. <ref>. These examples are illustrated in the accompanying video. Please visit https://www.mech-mind.com/https://www.mech-mind.com/ for more examples.
§.§ Representative Examples
Mixed case palletization (a): The robot performs a mixed-case (consisting of multiple types of boxes) palletization, as illustrated in Figure. <ref> (a). The robot perceives the boxes using a camera and plans the robot actions that pick up a suitable box, transfer it and place it on the pallet. The decisions (e.g., selection of the box and the placement location) in this example must be made according to the desired palletization pattern that tries to maximize the space utilization rate.
Multiple pick de-palletization (b): The robot picks up boxes from a pallet and places them onto a conveyor, as shown in Figure. <ref> (b). To improve the throughput (# of boxes per hour), the robot might pick up multiple boxes at once. The pipeline detects currently available boxes using a camera, makes decisions about picking one or more boxes, and generates concrete picking behaviors and robot trajectories.
Recovery from planning failure (c): The robot picks workpieces and organizes them into a specific shape. During manipulation, the planner might fail to find a feasible pick-and-place behavior (e.g., the reaching movement collides with workpieces other than the picked one). The exception branch in Subsec. <ref> is used to address the planning failure. In particular, an external vibration generator is used to create a random disturbance to these workpieces in the container, as shown in Figure. <ref> (c). After that, the perception and behavior planning are re-tried.
Selection of different gripper tools (d): The robot is equipped with a special gripper tool that can pick up the object by parallel-jaw or air suction, as shown in Figure. <ref> (d). These two types of grasping are treated as two logical gripper tools, and the pipeline automatically determines which one to use. As shown in Figure. <ref>, this decision should be made incorporating the nodes for reaching, picking, transferring and placement. During execution, the digital output corresponding to either closing the parallel jaw or turning on the suction cup is invoked to pick up the objects.
Integration of geometric motion planner (e): Our DSL provides a flexible interface for the integration of collision-free motion planners, such as sampling <cit.> and optimization <cit.> based methods, as the motion generation primitives of the TAMP planner in Subsec. <ref>. These collision-free motion planners are accessed through the user-parameter, as detailed in Subsec. <ref>. An example is shown in Figure. <ref> (e), the shortcut algorithm in <cit.> is used to generate a smooth and efficient robot transferring movement of the picked object.
§.§ Effectiveness of the Pre-Planning Mechanism
The pre-planning mechanism proposed in Subsec. <ref> is used to interleave the node execution (e.g., waiting for RPC responses from robot services) with planning for future plan-routines. To illustrate its effectiveness, we compare the planning time with the time that the interpreter spends on waiting for the planning result. If the pre-planning mechanism successfully exploits the node execution time for planning, the waiting time should be much shorter than the planning time. The results are shown in Figure. <ref> for 10 different planning problems in user programs. Among them, 5 planning problems are from the examples in Subsec. <ref>. Each planning problem is invoked 5-30 times in the user program, and the times are the average of 20 runs of the user program.
From the result, in most cases the pre-planning mechanism can eliminate or significantly reduce the waiting time. Thus, the robot does not need to stop and wait for online-parameters before execution. On the other hand, pre-planning cannot help when the required variables are not ready (problems 2 and 6). Moreover, if the planning time is very long (problem 10), the waiting time cannot be fully eliminated.
§ LIMITATIONS AND FUTURE WORKS
Currently, the DSL mainly focuses on executing planned trajectories in an open-loop way. Reactive, closed-loop control (e.g., visual servoing) is not supported in our DSL. Moreover, our DSL assumes that the manipulated objects are (mostly) rigid. This abstraction does not work for deformable objects or more dexterous manipulation actions on rigid objects, such as the in-hand manipulation in <cit.>. Deploying these interesting manipulation skills into industrial production at scale is still challenging, and it is a promising direction for future work.
In terms of the implementation, our DSL evolved from a GUI application to simplify the deployment of the perception-guided manipulation pipeline. During the early stage of development, many concepts were unclear, as we had not yet realized the software should be designed as a programming language, and various shortsighted design decisions were made. The graphical user interface designed at that stage, which was already used by many customers, became historical baggage.
Thus, several features of the DSL can only be provided in an incomplete and/or unnatural way. We are refactoring our code base to address this issue.
Currently, user programs in our DSL are crafted by human operators. One approach to further simplify the deployment is to train Large Language Models (LLM) to generate code in our DSL. This might be fruitful as our DSL is mainly used by field application specialists without coding experience in traditional programming languages.
§ CONCLUSION
This paper contributes a DSL for deploying perception-guided robotic manipulation at scale. This DSL has an intuitive graphical frontend and is mainly used by machine operators without coding experience in Python/C++. To relieve users of manually making discrete decisions and/or crafting robot trajectories, we propose a novel interface that integrates a TAMP algorithm into the DSL. In particular, users craft a “skeleton” of the desired robot behavior with a set of intentionally undetermined parameters, and the planning algorithm automatically generates these missing parameters during online execution. With this interface, users can set up and solve practically important TAMP problems without understanding the TAMP algorithm or explicitly writing the planning problem description (in a modeling language like PDDL). Moreover, we propose a pre-planning interpreter to reduce the cycle time and improve the throughput of the manipulation applications. Extensive practical applications in industry demonstrate the efficacy of our method.
§.§.§ Acknowledgments
The authors would like to thank Xi Li and Lili Yang for their insightful discussion and maintenance of the infrastructure code. This work was conducted during the authors' employment at Mech-Mind Robotics. The views expressed in this paper are those of the authors themselves and are not endorsed by the supporting agencies.
|
http://arxiv.org/abs/2409.03008v1 | 20240904180225 | Cosmological constraints on anisotropic Thurston geometries | [
"Ananda Smith",
"Craig J. Copi",
"Glenn D. Starkman"
] | astro-ph.CO | [
"astro-ph.CO"
] |
§ INTRODUCTION
One of the main assumptions made in the current standard model of cosmology, ΛCDM, is that the Universe is spatially homogeneous and isotropic on large enough scales.
Known as the Cosmological Principle, this assumption restricts the large-scale geometry of the Universe to one of three types, all of which can be described by a Friedmann-Lemaître-Robertson-Walker (FLRW) metric.
Though the Cosmological Principle is well-corroborated by large-scale structure and other cosmological observables, hints of this postulate being violated have emerged in recent decades.
Among the strongest pieces of evidence for the Cosmological Principle has been the high degree of isotropy of the blackbody temperature of the cosmic microwave background (CMB) radiation, first reported by Penzias and Wilson <cit.>.
More recently, the isotropy of the statistical properties of the very small amplitude fluctuations in the temperature and polarization of the CMB has been tested <cit.>, and several violations of isotropy have been reported.
In particular, a handful of large-angle features in the observed CMB temperature <cit.> were discovered in the first WMAP data release <cit.> and were suggested to indicate significant deviation from statistical isotropy.
These features have come to be known as the “large-angle anomalies,” and their significance has persisted in subsequent Planck datasets <cit.>, with recent work arguing these anomalies jointly constitute a > 5σ violation of statistical isotropy <cit.>.
This collection of evidence warrants consideration of cosmological models that deviate slightly from the Cosmological Principle, in particular by violating spatial isotropy.
In this work, we investigate the consequences of breaking spatial isotropy through geometry – i.e., by equipping the Universe with a background metric that is homogeneous but anisotropic. Relaxing the requirement of isotropy allows the large-scale geometry of the Universe to be of a slightly more general class than the three FLRW geometries.
Historically, work on the cosmology of anisotropic spaces has centered around the Bianchi models (see <cit.> for review).
These are 3+1 spacetimes whose homogeneous spatial part corresponds to a 3-dimensional real Lie algebra, falling into one of eleven types within a classification devised by Bianchi in 1898 <cit.>.
A subset of the Bianchi models have been invoked as potential explanations of CMB anomalies <cit.>, but their deviation from isotropy is strongly constrained.
This is because many Bianchi spaces require anisotropic expansion – i.e., multiple scale factors – if they are sourced by perfect fluid stress energies.
This anisotropic expansion leaves potentially detectable imprints on cosmological observables, often characterized by the resulting “shear.”
For instance, in the subset of Bianchi spaces that explicitly contain the three FLRW metrics as special cases, anisotropic expansion is strongly constrained by the anisotropies it induces in the CMB temperature <cit.>.
Constraints have also been derived from the imprint of shear on nucleosynthesis <cit.>.
This work considers a more recently developed class of homogeneous but not necessarily isotropic geometries known as the Thurston geometries.
These eight model geometries exhaust the possible local geometries of closed homogeneous 3-manifolds according to Thurston's geometrization theorem <cit.>.
Thurston's classification schema is analogous to Bianchi's, and all but one of the Thurston geometries fall within a Bianchi group <cit.>.
Out of the eight Thurston geometries, three are the open, closed, and flat FLRW geometries, and the remaining five are anisotropic.
These latter subset of spaces remain largely unconstrained within a cosmological context (although see <cit.>).
The dynamics and distance measures of spacetimes with Thurston geometries as spatial parts, which we will dub Thurston spacetimes, have recently been investigated <cit.>, albeit when equipped with a single scale factor.
This requires invoking an anisotropic fluid fine-tuned to prohibit anisotropic expansion.
The evolution of scale factors under isotropic dust and cosmological constant is provided for each Thurston spacetime in <cit.>, but no accompanying constraints on curvature scales are derived.
In this paper, we provide strong constraints on the curvature of all five anisotropic Thurston spacetimes for the first time.
After providing representations of each anisotropic Thurston geometry in Section <ref>, we review the dynamics of the corresponding spacetimes under the presence of perfect fluid dust and cosmological constant in Section <ref>, where we find anisotropic expansion is required.
Consequently, when coupled with underlying geometric anisotropy, we find that the local flux of CMB photons is distorted, and a present-day observer interprets this flux as that of a blackbody with a non-uniform temperature.
The amplitudes of the CMB temperature fluctuations induced in these geometries are coupled to the curvature parameter Ω_K, which is therefore strongly constrained by the isotropy of the CMB.
We derive this constraint for each of the five anisotropic Thurston spacetimes in Sections <ref>–<ref>, finding that |Ω_K| ≲ 10^-5 in all five geometries, with two even requiring |Ω_K| ≲ 10^-10.
The GitHub repository associated with this study is publicly available at <https://github.com/cwru-pat/ThurstonGeometry>.
Codes will be deposited there as publicly usable versions become available.
§ THURSTON GEOMETRIES
In 1982 Thurston conjectured <cit.> (and Perelman later proved <cit.>)
The interior of every compact 3-manifold has a canonical decomposition into pieces which have geometric structures.
In practice, this reduces to a set of eight local three-geometries – three being the well-known and well-studied isotropic FLRW geometries: flat (ℝ^3), spherical (S^3), and hyperbolic (ℍ^3).
The remaining five anisotropic local three-geometries are less well-known and will be discussed below.
Though the conjecture allows for a decomposition of the full space, in this work we restrict to the case where the observable Universe has just one of these eight local geometries.
Without loss of generality, we take the topology to be the covering space of that geometry.
We refer curious readers to the growing body of work on detecting nontrivial topology in the Universe, e.g., <cit.>.
In the remainder of this section, we present metrics for each of the five anisotropic Thurston geometries.
We adopt the representations of <cit.>, where the parameter κ found in the spatial part of each metric distinguishes between positive (κ > 0) and negative (κ < 0) spatial curvature.
All of these local geometries can be made arbitrarily close to flat space in some finite neighborhood by taking κ sufficiently close to zero.
§.§ S^2 × ℝ and ℍ^2 × ℝ
The first two anisotropic geometries in Thurston's classification system are S^2 × ℝ and ℍ^2 × ℝ.
In the literature, spacetimes equipped with S^2 × ℝ spatial geometries are often referred to as Kantowski-Sachs spaces <cit.>, and ℍ^2 × ℝ falls under type III of the Bianchi classification.
S^2 × ℝ and ℍ^2 × ℝ are the most straightforward to understand among the five geometries we are considering: S^2 × ℝ admits positive curvature (κ > 0) along two spatial directions and zero curvature along one, while ℍ^2 × ℝ is analogous but with its curved directions having negative curvature (κ < 0).
This description leads naturally to a spatial metric separated into a hyperspherical part and a flat part:
dΣ_3^2 = dχ^2 + S_κ(χ)^2 dϕ^2 + dz^2,
where χ∈ [0, ∞), ϕ∈ [0, 2π), z ∈, and
S_κ(χ) =
sin(χ√(κ) )/√(κ), κ > 0,
sinh(χ√(-κ))/√(-κ), κ < 0.
§.§ U(ℍ^2)
The next anisotropic Thurston geometry is U(ℍ^2), the universal cover of the unit tangent bundle of the hyperbolic plane, which falls within Bianchi types III and VIII.
We adopt the spatial metric derived in <cit.>,
dΣ_3^2 = dx^2 + cosh(2x√(-κ)) dy^2 + dz^2 + 2sinh(x√(-κ)) dy dz,
where κ < 0 and x, y, z ∈ ℝ.
This geometry is often referred to as SL(2, ℝ) in the literature since this space is diffeomorphic to SL(2, ℝ).
However, these two spaces are not isomorphic so they are not interchangeable in a physical context.
§.§ Nil
The next anisotropic Thurston geometry is Nil, the geometry of the Heisenberg group.
It can be thought of as twisted E^2 × ℝ and falls within Bianchi type II.
It can be represented with the spatial metric
dΣ_3^2 = dx^2 + (1 - κ x^2) dy^2 + dz^2 - 2√(-κ) x dy dz,
where κ < 0 and x, y, z ∈ ℝ.
§.§ Solv
The final anisotropic Thurston geometry is Solv, the geometry of solvable Lie groups.
The Solv geometry falls within Bianchi type VI_0, and can be represented with the spatial metric
dΣ_3^2 = e^{2z√(-κ)} dx^2 + e^{-2z√(-κ)} dy^2 + dz^2,
where κ < 0 and x, y, z ∈ ℝ.
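These representations can be checked symbolically. The short sympy sketch below is our own illustration (not part of the paper's repository); it computes the Ricci scalar of the Solv metric above and finds it proportional to κ, so the space flattens as κ → 0:

import sympy as sp

x, y, z, kappa = sp.symbols('x y z kappa')
a = sp.sqrt(-kappa)                               # kappa < 0 for Solv
coords = [x, y, z]
g = sp.diag(sp.exp(2*a*z), sp.exp(-2*a*z), 1)     # Solv spatial metric
ginv = g.inv()
n = 3

def christoffel(l, i, j):
    return sp.Rational(1, 2) * sum(
        ginv[l, m] * (sp.diff(g[m, i], coords[j])
                      + sp.diff(g[m, j], coords[i])
                      - sp.diff(g[i, j], coords[m]))
        for m in range(n))

def ricci(i, j):
    return sp.simplify(sum(
        sp.diff(christoffel(l, i, j), coords[l])
        - sp.diff(christoffel(l, i, l), coords[j])
        + sum(christoffel(l, l, m) * christoffel(m, i, j)
              - christoffel(l, j, m) * christoffel(m, i, l)
              for m in range(n))
        for l in range(n)))

R = sp.simplify(sum(ginv[i, j] * ricci(i, j)
                    for i in range(n) for j in range(n)))
print(R)   # prints 2*kappa: homogeneous, anisotropically curved, flat as kappa -> 0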
§ EVOLUTION OF ANISOTROPIC THURSTON SPACETIMES
According to general relativity, the evolution of any spacetime is governed by the stress energy content of the universe through Einstein's field equations
G^μ_ν = 8π G T^μ_ν,
where our choice of c = 1 units persists in all subsequent calculations.
Thus the evolution of spacetimes with anisotropic Thurston spatial geometries is sensitive to the choice of stress energy.
For example, it is possible to equip an anisotropic spacetime with a single scale factor, say a(t), in what are commonly referred to as “shear-free” models in the literature <cit.>.
The stress energy tensor characteristic of the fluid required to achieve this scales like a^-2 and contains off-diagonal elements that are chosen precisely to prevent anisotropic expansion.
The fine-tuning required for this cancellation to occur makes this approach less compelling from a theoretical perspective and requires the introduction of non-standard sources of stress energy.
Following <cit.>, in this work we take the opposite approach of using known sources of isotropic stress energy and allowing anisotropic expansion.
Here our isotropic stress energy contains perfect fluids in the form of dust (pressureless with energy density ρ) and a cosmological constant (with Λ = 8π Gρ_Λ and p_Λ = -ρ_Λ).
With this content the stress-energy tensor takes the form
T^μ_ν = ρ u^μ u_ν - Λ/8 π Gδ^μ_ν,
where u^μ is the 4-velocity of the fluid.
Working in the co-moving frame where u^μ = (1, 0,0,0) yields the diagonal stress energy tensor commonly studied in ΛCDM cosmology
T^μ_ν = diag(-ρ - Λ/(8πG), -Λ/(8πG), -Λ/(8πG), -Λ/(8πG)).
In general, the spatial metric of the Thurston geometries has the form
dΣ_3^2 = γ_ij dx^i dx^j,
so that the spacetime metric is given by
ds^2 = -dt^2 + γ_kℓ α^k_i α^ℓ_j dx^i dx^j,
where we allow for (potentially independent) scale factors through the diagonal matrix
α^k_i = diag(a_1(t), a_2(t), a_3(t))^k_i.
With anisotropic expansion introduced, the Einstein field equations for the stress energy tensor (<ref>) are quite similar among the five anisotropic Thurston geometries. The diagonal elements of their Einstein field equations all have the form
^0_0 : ȧ_1ȧ_2/a_1 a_2 + ȧ_2ȧ_3/a_2 a_3 + ȧ_3ȧ_1/a_3 a_1 + Δ^0_0 = Λ+ k^(0)κ/a_dom^2 + 8 πG ρ,
^1_1 : ä_2/a_2 + ä_3/a_3 + ȧ_2ȧ_3/a_2 a_3 + Δ^1_1 = Λ+ k^(1)κ/a_dom^2,
^2_2 : ä_3/a_3 + ä_1/a_1 + ȧ_3ȧ_1/a_3 a_1 + Δ^2_2 = Λ+ k^(2)κ/a_dom^2,
^3_3 : ä_1/a_1 + ä_̈2̈/a_2 + ȧ_1ȧ_2/a_1 a_2 + Δ^3_3 = Λ+ k^(3)κ/a_dom^2.
The dependence of (<ref>) on the choice of geometry lies in the term proportional to κ, where the constants k^(μ) and accompanying scale factor a_dom are listed in Table <ref>, and in extra contributions Δ^μ_ν to the Einstein tensor.
We define Δ^μ_ν to include all of the off-diagonal elements of the Einstein tensor, meaning the accompanying off-diagonal field equations are, given our insistence on a diagonal stress-energy tensor, simply
Δ^μ_ν = 0.
The evolution of the scale factors are constrained by (<ref>), since the off-diagonal elements of Δ^μ_ν will not necessarily be zero if all of the scale factors evolve independently.
We will show that the solutions to the diagonal field equations (<ref>) equate certain pairs of scale factors in a way that satisfies (<ref>) in the limit that the spatial curvature is small, which is mandated by the Universe being approximately flat on large scales.
To this end, we may define an average scale factor as the geometric mean
A(t) ≡[ a_1(t) a_2(t) a_3(t) ]^1/3,
the Hubble parameter associated with it
H(t) ≡Ȧ/A = 1/3( ȧ_1/a_1 + ȧ_2/a_2 + ȧ_3/a_3),
and the dimensionless curvature fraction
Ω_K ≡ k^(0)κ/(3 H_0^2),
where H_0≡ H(t_0) is the current (t=t_0) value of the Hubble expansion parameter.
More precisely, then, the small curvature limit corresponds to |Ω_K| ≪ 1, ensuring the terms proportional to κ on the right-hand side of (<ref>) are subdominant.
With this, taking linear combinations of the spatial equations in <ref>, and following <cit.> we can expand the individual scale factors in powers of Ω_K to find
a_i(t) = A(t) [1 + Ω_K K^(i) F(t) + 𝒪(Ω_K^2)],
where
K^(1) ≡k^(2) + k^(3) - 2 k^(1)/k^(0) ,
K^(2) ≡k^(3) + k^(1) - 2 k^(2)/k^(0) ,
K^(3) ≡k^(1) + k^(2) - 2 k^(3)/k^(0)
are given in <ref> and
F[A(t)] ≡ 2/(5Ω_m) ∫_{A(t_0)}^{A(t)} _2F_1(1/2, 5/6; 11/6; -a'^3 Ω_Λ/Ω_m)/√(1 + a'^3 Ω_Λ/Ω_m) da'.
Here _2F_1 is the hypergeometric function, the integral is from the time t_0, and we have made the standard definitions
Ω_m ≡ 8πG ρ(t_0)/(3 H_0^2), Ω_Λ ≡ Λ/(3 H_0^2).
While these same solutions were obtained in <cit.> and we have followed a similar procedure, there is an important difference.
Namely, in that work, Δ^μ_ν was set to zero by equating scale factors as necessary in each geometry before solving the field equations.
In U(ℍ^2) and Nil, this meant imposing the restriction a_1 = a_2 = a_3 and a_2 = a_3, respectively.
However, this is inconsistent with (<ref>) given the values of K^(i) in <ref>, which indicate that a_1 = a_2 ≠ a_3 to order Ω_K in all geometries.
To avoid this contradiction we instead use (<ref>) along with the K^(i) from <ref> to determine which scale factors to equate.
With these solutions, we find that Δ^μ_ν still vanishes to order Ω_K across all geometries.
In , , and Solv, this is because the only non-trivial entries of Δ^μ _ν exhibit the proportionality
Δ^0_1 = -a_1^2 Δ^1_0∝(ȧ_̇1̇/a_1 - ȧ_̇2̇/a_2),
which vanishes by the equivalence of a_1 and a_2 to this order.
In the remaining two geometries, U(ℍ^2) and Nil, the entries of Δ^μ_ν are less obviously zero to working order and take the form
Δ^μ_ν = √(Ω_K) f(a,b).
Here f(a,b) contains differences of the scale factors a_1 = a_2 ≡ a and a_3 ≡ b.
Thus f must be at least 𝒪(√(Ω_K)) since it must disappear in the flat limit, i.e., when Ω_K → 0.
In practice f is found to be 𝒪(Ω_K) as shown in Appendix <ref>, where more detailed expressions for (<ref>) are listed.
We proceed knowing that we may neglect Δ^μ_ν to working order in Ω_K.
Turning to the evolution of the average scale factor A(t), we may square <ref> to find
H^2
= 1/9∑_i=1^3 ( ȧ_i/a_i)^2 + 2/9(Λ + k^(0)κ/a_dom^2 + 8π Gρ),
where we have used Δ^μ_ν=0 to the required order and have used the ^0_0 equation from (<ref>) to simplify the second term.
It follows from (<ref>) that
∑_{i=1}^3 (ȧ_i/a_i)^2 = 3H^2 + Ω_K q(t) ∑_i K^(i) + 𝒪(Ω_K^2),
for a known function q(t).
The form of q(t) is unimportant since from (<ref>)
∑_i=1^3 K^(i) = 0 ,
independent of geometry.
Combining (<ref>) and (<ref>), we obtain the Friedmann equation
H^2 = (Ȧ/A)^2 = H_0^2 (Ω_m/A^3 + Ω_Λ + Ω_K/A^2) + 𝒪(Ω_K^2).
Expanding in powers of Ω_K,
A(t) = A^(0)(t) + Ω_K A^(1)(t) + 𝒪(Ω_K^2),
and substituting into (<ref>) we find the desired evolution.
As expected, the zeroth order term is the FLRW expansion factor for a matter and cosmological-constant-dominated universe
A^(0)(t) = ( Ω_m/Ω_Λ)^1/3sinh^2/3(3√(Ω_Λ)/2 H_0 t ).
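As a quick symbolic sanity check (ours, not the authors'), one can confirm that this A^(0)(t) satisfies the flat-limit (Ω_K = 0) Friedmann equation:

import sympy as sp

t, H0, Om, OL = sp.symbols('t H_0 Omega_m Omega_Lambda', positive=True)
A0 = (Om/OL)**sp.Rational(1, 3) \
     * sp.sinh(sp.Rational(3, 2) * sp.sqrt(OL) * H0 * t)**sp.Rational(2, 3)
lhs = (sp.diff(A0, t) / A0)**2               # (Adot/A)^2
rhs = H0**2 * (Om / A0**3 + OL)              # flat-limit Friedmann equation
print(sp.simplify((lhs - rhs).rewrite(sp.exp)))   # -> 0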
We can also obtain an explicit form for A^(1)(t), however, it will not be needed in subsequent calculations.
Finally, combining these results with <ref> we arrive at the fully expanded time evolution for the scale factors
a_i(t) = A^(0)(t) + Ω_K [A^(1)(t) + K^(i) A^(0)(t) F(A^(0)(t))] + 𝒪(Ω_K^2).
We have thus demonstrated that homogeneous spacetimes with anisotropic Thurston geometries are able to evolve under a perfect fluid stress energy to 𝒪(Ω_K) when multiple scale factors are introduced.
The expansion, combined with geometric anisotropy, induces angular dependence in cosmological observables and distance measures.
In particular, a completely isotropic CMB at last scattering will be observed to be anisotropic by a post-last-scattering observer, such as ourselves.
In the following sections, we show that these fluctuations allow us to put strong bounds on Ω_K should our Universe possess these geometries.
§ S^2 × ℝ AND ℍ^2 × ℝ
We begin by constraining the curvature in spacetimes with S^2 × ℝ and ℍ^2 × ℝ spatial geometries, which have the metric
ds^2 = -dt^2 + a(t)^2 [dχ^2 + S^2_κ(χ) dϕ^2] + b(t)^2 dz^2,
where we use the identification a_1(t) = a_2(t) ≡ a(t) and a_3(t) ≡ b(t) in each geometry found in Section <ref>.
Recall that for ℍ^2 × ℝ,
S_κ(χ) = sinh(√(-κ) χ)/√(-κ).
Since κ has dimensions of inverse length squared, to expand the metric we require √(-κ) χ ≪ 1.
Recalling (<ref>) we have
√(-κ) χ = √(3Ω_K) H_0 χ ≪ 1.
To simplify notation here and throughout we will write all distances in units of H_0^{-1}.
With this, for S^2 × ℝ and ℍ^2 × ℝ we have
S_κ(χ) = sin(√(-3Ω_K) χ)/√(-3Ω_K), Ω_K < 0,
sinh(√(3Ω_K) χ)/√(3Ω_K), Ω_K > 0.
The photons emitted during recombination travel along null geodesics of the spacetime, which we compute in Section <ref>.
These trajectories are warped by anisotropic curvature and spatial expansion, meaning that the local photon fluxes at recombination and today computed in Section <ref> are different.
A present-day observer interprets this photon flux as that of a blackbody with a temperature that is a function of solid angle in the sky, which we confront against the observed CMB angular power spectrum to derive bounds on in Section <ref>.
§.§ Null geodesics
We start by computing the trajectories x^μ(λ) = (t(λ), χ(λ), ϕ(λ), z(λ)), parameterized by the affine parameter λ, of photons emitted at recombination that are later detected by an observer at the present time t(λ_0) = t_0 located at the origin of our coordinate system.
Such trajectories obey the geodesic equation
d^2x^μ/dλ^2 + Γ^μ_αβ (dx^α/dλ)(dx^β/dλ) = 0.
Much like spherical symmetry allows us to impose that CMB photons follow purely radial geodesics in FLRW spaces, rotational symmetry within the (χ, ϕ)-plane in S^2 × ℝ and ℍ^2 × ℝ allows us to set dϕ/dλ ≡ ϕ'(λ) = 0.
Upon doing this, the geodesic equations for the remaining spatial coordinates read
χ'' + 2(a'/a)χ' = (1/a^2) d(a^2 χ')/dλ = 0,
z'' + 2(b'/b)z' = (1/b^2) d(b^2 z')/dλ = 0,
which of course are solved by
χ'(λ) = c_χ/a(t(λ))^2
z'(λ) = c_z/b(t(λ))^2
where c_χ and c_z are constants.
By demanding the geodesics obey the null condition
g_αβ (dx^α/dλ)(dx^β/dλ) = 0,
one recovers the first order equation
t'^2 = (a χ')^2 + (b z')^2,
which can in principle be solved for t(λ).
However, this derivative expression is sufficient for the subsequent analysis based on the 4-velocities of incoming photons.
§.§ CMB photon flux
With knowledge of these geodesics, we turn our attention to the distortions induced in the CMB.
To do this, we first examine the local flux of CMB photons propagating through a point in space at the time of
recombination, i.e., on the last scattering surface,
Φ(E_r) = dN_r(E_r)/(dΩ_r dA_r dt_r dE_r),
where dN_r(E_r) is the number of photons with energies between E_r and E_r + dE_r received within a solid angle dΩ_r = d(cosθ_r) dϕ_r in an area dA_r during a time dt_r.
As these photons propagate from the last scattering surface through an expanding anisotropic geometry, they experience a redshift that will be a function of their direction of propagation, and thus as a function of the direction on the sky from which an observer receives them.
The flux of CMB photons observed today will differ from that present at recombination due to this effect.
In particular, an isotropic flux at recombination will be observed today as anisotropic.
This present-day flux can be obtained by performing a change of coordinates from A_r, t_r, and E_r at recombination to A_0, t_0, and E_0, today.
The effect on the flux is captured through the appropriate Jacobian factor, which simplifies to
Φ(E_0) = Φ(E_r) (dΩ_r/dΩ_0)(dA_r dt_r/(dA_0 dt_0))(dE_r/dE_0),
where the factorization of the Jacobian can be trivially confirmed.
To express this flux in a more meaningful way, we must relate the geometric and energy elements at the time observation today with those at recombination.
First, we identify θ as the angle between an incoming photon's trajectory with respect to the z-axis as measured by a local observer.
This is obtained by projecting a photon's 3-velocity v^i(λ) = (χ'(λ), ϕ'(λ), z'(λ)) onto the unit vector pointing in the z-direction,
cosθ =
g_iz v^i/(√(g_ij v^i v^j) √(g_zz)),
where g_ij is the spatial part of the metric.
For and , this reads
cosθ = b z'/√((a χ')^2 + (b z')^2) = [ 1 + (a χ'/b z')^2]^-1/2.
since ϕ'(λ) = 0.
To evaluate this angle we first use the solutions to the spatial geodesic equations (<ref>) to write
cosθ = [ 1 + (b/a)^2 (c_χ/c_z)^2 ]^-1/2 .
From the evolution of the scale factors (<ref>) and the fact that (from <ref>) F(λ_0) = F[A^(0)(t_0)]=0, we have that a(t_0) = b(t_0) so that today
cosθ_0 = [ 1 + (c_χ/c_z)^2 ]^-1/2.
To evaluate cosθ at recombination, we use the K^(i) from <ref> to find that
(b(t_r)/a(t_r))^2 ≈ (A(t_r)[1 + 2Ω_K F(λ_r)]/A(t_r)[1 - Ω_K F(λ_r)])^2 ≈ 1 + 6Ω_K F(λ_r),
where ≈ denotes equivalence to the displayed order in Ω_K, which varies between geometries. For S^2 × ℝ and ℍ^2 × ℝ, that is to 𝒪(Ω_K).
Substituting this into (<ref>) and using (<ref>) to replace the constants we find
cosθ_r ≈ cosθ_0 [1 - 3Ω_K F(λ_r) sin^2θ_0].
Along a radial geodesic, ϕ_r = ϕ_0, so the solid angle Jacobian factor is found to be
dΩ_r/dΩ_0 = d(cosθ_r)/d(cosθ_0) ≈ 1 - 3Ω_K F(λ_r)(1 - 3cos^2θ_0) = 1 + (3/2)Ω_K F(λ_r)(1 + 3cos2θ_0).
The remaining geometric factor arises from the expansion of space and is most easily written in terms of the average scale factor
dA_r dt_r/(dA_0 dt_0) = [A(λ_r)/A(λ_0)]^3.
The energy of a photon measured by a comoving observer with u^μ = (1,0,0,0) is
E(λ) = |u_μ dx^μ/dλ| = t'(λ).
From the null condition we know the time derivative (<ref>) so that
E(λ)^2 = (a χ')^2 + (b z')^2 = c_z^2/b^2[ 1 + (b/a)^2 ( c_χ/c_z)^2 ] = c_z^2/b^2 cos^2θ,
where we have employed (<ref>) in the final equality.
With this, using expressions from above, and simplifying we can show that
(E(λ_r)/E(λ_0))^2 ≈ [A(λ_0)/A(λ_r)]^2 [1 - Ω_K F(λ_r)(1 + 3cos2θ_0)].
Finally, this leads to
dE_r/dE_0 ≈ (A(λ_0)/A(λ_r)) [1 - (1/2)Ω_K F(λ_r)(1 + 3cos2θ_0)]
≈ (A(λ_0)/A(λ_r)) (dΩ_r/dΩ_0)^{-1/3},
where we have used (<ref>) to write this in a more suggestive form to simplify subsequent calculations.
The CMB photon flux at the last scattering surface is that of a blackbody with a uniform temperature T_r:
Φ(E_r) = p E_r^2/(exp(E_r/T_r) - 1),
where p≡1/(2π^2) in natural units (ħ=c=k=1).
To determine the photon flux today we define the temperature today using the average scale factor as
T_0/T_r = A(λ_r)/A(λ_0) .
Putting all of this together, the flux today (<ref>) becomes
Φ(E_0) = [p E_0^2/(exp(E_r/T_r) - 1)] (dΩ_r/dΩ_0)(dA_r dt_r/(dA_0 dt_0))(E_r^2 dE_r/(E_0^2 dE_0)).
Perhaps surprisingly, the energy ratio and the Jacobian factors completely cancel, i.e., from (<ref>), (<ref>), and (<ref>) we see that
(dΩ_r/dΩ_0)(dA_r dt_r/(dA_0 dt_0))(E_r^2 dE_r/(E_0^2 dE_0)) ≈ 1.
This means the observed photon flux today is that of a blackbody, albeit with a direction-dependent temperature –
using (<ref>) and (<ref>),
E_r/T_r = (E_0/T_0)(E_r/E_0)(T_0/T_r) ≈ (E_0/T_0)[1 - (1/2)Ω_K F(λ_r)(1 + 3cos2θ_0)].
Thus the photon flux today in S^2 × ℝ and ℍ^2 × ℝ is given by
Φ(E_0) ≈ p E_0^2/(exp[E_0/T(Ω_0)] - 1),
where
T(Ω_0) ≈ T_0 [1 + (1/2)Ω_K F(λ_r)(1 + 3cos2θ_0)].
§.§ Ω_K constraint
The angular dependence of T(Ω_0) means that an isotropic temperature at recombination appears anisotropic to an observer today.
It is therefore convenient to expand T(Ω_0) in spherical harmonics.
For (<ref>) this is straightforward since the angular dependence of the perturbation around the mean T_0 is manifestly
Δ T(Ω_0) ≡ T(Ω_0) - T_0 ≈ a_20 Y_20(Ω_0),
where
a_20 ≈ 4√(π/5) T_0 Ω_K F(λ_r).
For Ω_m=0.3, Ω_Λ=0.7, A^(0)(λ_0)=1, A^(0)(λ_r)≡ 1 / (1+z_r), and z_r = 1090 we find F(λ_r) ≈ -1.04 from direct integration of (<ref>).
The fluctuations Δ T(Ω)/T_0 that would be observed today are plotted in <ref> in units of Ω_K.
The sign of the quadrupole amplitude is opposite in S^2 × ℝ (Ω_K < 0) and ℍ^2 × ℝ (Ω_K > 0).
We also note that taking the Λ→ 0 limit of this result (i.e., in (<ref>)), changes the form of F(λ_r) and yields the expression for the observed CMB temperature in matter-dominated Bianchi III spacetimes derived in <cit.>.
CMB temperature anisotropies have been measured on large scales (most reliably by the WMAP <cit.> and Planck <cit.> teams).
To be consistent with these observations the expansion-induced anisotropy must be sufficiently small.
The CMB power spectrum is defined as
D_ℓ ≡ ℓ(ℓ+1)/(2π) C_ℓ, C_ℓ = 1/(2ℓ+1) ∑_{m=-ℓ}^{ℓ} |a_ℓm|^2.
In this case, only D_2 is non-zero,
D_2 ≈ (48/25) F(λ_r)^2 Ω_K^2 T_0^2.
The observed D_2 reported in the most recent Planck data release <cit.> is
D_2^obs = 225.9 μ K^2.
A limit is obtained on Ω_K by ensuring that the power in the induced quadrupole does not exceed that of the observed one,
|Ω_K| ≲ [5/(4√3 |F(λ_r)|)] √(D_2^obs)/T_0 ≈ 3.8 × 10^-6
for S^2 × ℝ and ℍ^2 × ℝ spacetimes, where we have chosen T_0 = 2.726 K <cit.>.
This is considerably more stringent than the Ω_K = 0.001 ± 0.002 bound reported by Planck <cit.> for the isotropic FLRW spaces.
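These numbers are easy to reproduce. A short script along the following lines (our own cross-check, assuming the parameter values quoted above) evaluates F(λ_r) by direct quadrature and the resulting bound:

import numpy as np
from scipy.integrate import quad
from scipy.special import hyp2f1

Om, OL, zr = 0.3, 0.7, 1090.0
T0 = 2.726e6          # CMB temperature in micro-Kelvin
D2 = 225.9            # observed quadrupole power, micro-K^2

integrand = lambda a: hyp2f1(0.5, 5/6, 11/6, -a**3 * OL / Om) \
                      / np.sqrt(1 + a**3 * OL / Om)
F, _ = quad(integrand, 1.0, 1.0 / (1.0 + zr))   # from A(t_0)=1 to A_r
F *= 2 / (5 * Om)
print(F)                                        # ~ -1.04

bound = 5 / (4 * np.sqrt(3) * abs(F)) * np.sqrt(D2) / T0
print(bound)                                    # ~ 3.8e-6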
§ U(ℍ^2) AND NIL
In the following sections, we will repeat the procedure outlined in Section <ref> for the remaining three anisotropic Thurston spacetimes, starting with the spacetime of U(ℍ^2).
This section also contains the constraint on Ω_K in Nil since its metric is extremely similar to that of U(ℍ^2) to the order at which we work.
§.§ U(ℍ^2)
As in the previous section, the metric for U(ℍ^2) can be written in terms of the dimensionless parameter Ω_K.
Using <ref> we have
-κ = -(3Ω_K/k^(0)) H_0^2 = (12/5) Ω_K H_0^2.
Writing all coordinates in units of H_0^{-1} we find
ds^2 = -dt^2 + a(t)^2 dx^2 + b(t)[a(t) cosh(4√(3/5) √(Ω_K) x) dy^2 + b(t) dz^2 + 2a(t) sinh(2√(3/5) √(Ω_K) x) dy dz],
where we have used the identification a_1(t) = a_2(t) ≡ a(t) and a_3(t) ≡ b(t).
This identification ensures consistency with an isotropic stress energy to order √(Ω_K).
Expanding the metric to this order and using (<ref>) we see that the scale factors can be replaced with A^(0)(t), leading to
ds^2 ≈ -dt^2 + A^(0)(t)^2 [dx^2 + dy^2 + dz^2 + 4√(3/5) √(Ω_K) x dy dz],
where now ≈ denotes equivalence to order √(Ω_K).
The equations for the null geodesics of this spacetime to order √(Ω_K) are
(1/A^(0)(t)^2) d[A^(0)(t)^2 x'(λ)]/dλ - 2√(3/5) √(Ω_K) y'(λ) z'(λ) ≈ 0,
(1/A^(0)(t)^2) d[A^(0)(t)^2 y'(λ)]/dλ + 2√(3/5) √(Ω_K) z'(λ) x'(λ) ≈ 0,
(1/A^(0)(t)^2) d[A^(0)(t)^2 z'(λ)]/dλ + 2√(3/5) √(Ω_K) x'(λ) y'(λ) ≈ 0,
accompanied by a null condition (<ref>)
t'(λ)^2 ≈ A^(0)(t)^2 [x'(λ)^2 + y'(λ)^2 + z'(λ)^2 + 4√(3/5) √(Ω_K)
x(λ) y'(λ) z'(λ)].
The first order solutions in √(Ω_K) to (<ref>–<ref>) can be directly integrated from
x'(λ) ≈ (1/A^(0)(λ)^2)[c_x + 2√(3/5) c_y c_z √(Ω_K) S(λ)],
y'(λ) ≈ (1/A^(0)(λ)^2)[c_y - 2√(3/5) c_x c_z √(Ω_K) S(λ)],
z'(λ) ≈ (1/A^(0)(λ)^2)[c_z - 2√(3/5) c_x c_y √(Ω_K) S(λ)],
t'(λ) ≈ 1/A^(0)(λ),
where
|v(λ_0)| = 1 requires c_x^2 + c_y^2 + c_z^2 = 1, A^(0)(λ) ≡ A^(0)(t(λ)),
S(λ) ≡ H_0 ∫_{λ_0}^{λ} dλ'/A^(0)(λ')^2 = ∫_1^{A^(0)(λ)} da'/(a'^2 √(Ω_m/a'^3 + Ω_Λ))
= (1/√(Ω_Λ))[_2F_1(1/3, 1/2; 4/3; -Ω_m/Ω_Λ) - (1/A^(0)(λ)) _2F_1(1/3, 1/2; 4/3; -Ω_m/(A^(0)(λ)^3 Ω_Λ))],
and we have chosen A^(0)(λ_0)=1.
To compute the local photon flux (<ref>) received by a present-day observer at the origin, we need to define local angular coordinates relating to solid angles in the sky.
For this geometry, we again identify θ as the angle from the z-axis, permitting the use of (<ref>) from the previous section.
Writing the photon 3-velocity as v^i(λ) = (x'(λ), y'(λ), z'(λ)) and expanding we find
cosθ≈z'(λ) [ x'(λ)^2 + y'(λ)^2 + z'(λ)^2 ] + 2√(3/5)√() x(λ) y'(λ) [ x'(λ)^2 + y'(λ)^2 ]/[ x'(λ)^2 + y'(λ)^2 + z'(λ)^2 ]^3/2.
The velocities are given in (<ref>) and the x equation integrates to
x(λ) ≈ c_x S(λ).
Plugging all of this in we find the very simple expression
cosθ≈c_z/√(c_x^2 + c_y^2 + c_z^2),
which is a constant.
In other words, photons propagate along lines of constant θ so that cosθ_0 ≈cosθ_r.
Unlike the previous geometries, however, the polar angle ϕ measured at recombination and today will differ, since U(ℍ^2) lacks rotational symmetry in its xy-plane.
We choose to identify ϕ as the angle from the x-axis within the xy-plane, meaning that
tanϕ = (g_iy v^i)/(g_ix v^i) √(g_xx/g_yy).
Proceeding as above, expanding, and plugging in the velocities from (<ref>) we find
tanϕ ≈ y'(λ)/x'(λ) + 2√(3/5) √(Ω_K) x(λ) z'(λ)/x'(λ) ≈ c_y/c_x - 2√(3/5) √(Ω_K) (c_y/c_x)^2 c_z S(λ).
Evaluating (<ref>) and (<ref>) at λ_0 where S(λ_0)=0 we find that these constants take the simple form of spherical coordinates:
(c_x, c_y, c_z) = (sinθ_0 cosϕ_0, sinθ_0 sinϕ_0, cosθ_0).
Evaluating at recombination gives
cosθ_r ≈ cosθ_0,
tanϕ_r ≈ tanϕ_0 (1 - 2√(3/5) √(Ω_K) S(λ_r) cosθ_0 tanϕ_0),
from which we determine the Jacobian factor
dΩ_r/dΩ_0 ≈ 1 - 2√(3/5) √(Ω_K) S(λ_r) cosθ_0 sin2ϕ_0.
From (<ref>), we know that the ratio of photon energies measured by observers today and at recombination is trivially
E(λ_r)/E(λ_0)≈A^(0)(λ_0)/A^(0)(λ_r)
since spatial expansion is isotropic to order √(Ω_K).
Similarly,
dA_r dt_r/(dA_0 dt_0) ≈ [A^(0)(λ_r)/A^(0)(λ_0)]^3.
Thus the photon flux measured by a present-day observer only receives nontrivial modifications from (<ref>) and can be written as
Φ(E_0) ≈ p E_0^2 (1 - 2√(3/5) √(Ω_K) S(λ_r) cosθ_0 sin2ϕ_0)/(exp(E_0/T_0) - 1)
using (<ref>).
This flux is qualitatively different from the flux (<ref>) for S^2 × ℝ and ℍ^2 × ℝ.
Namely, an 𝒪(√(Ω_K)) correction appears as a direction-dependent but energy-independent factor, i.e., as a greybody factor to the entire expression, rather than as a direction-dependent factor on the temperature.
The exponential term in the denominator is unmodified to this order.
In this case, we cannot simply read off the observed CMB temperature from the exponent, and an effective temperature may instead be obtained from the total observed intensity I integrated over all energies:
I = ∫_0^∞ E_0 Φ(E_0) dE_0.
For a perfect blackbody of temperature T, this evaluates to
I = pπ^4/15T^4,
which is just the Stefan-Boltzmann law in our choice of units.
With the anisotropic flux (<ref>), the total observed intensity is
I ≈ (pπ^4/15) T_0^4 (1 - 2√(3/5) √(Ω_K) S(λ_r) cosθ_0 sin2ϕ_0).
Comparing (<ref>) and (<ref>), the effective CMB temperature can be identified as
T(Ω_0) ≈ T_0 (1 - (1/2)√(3/5) √(Ω_K) S(λ_r) cosθ_0 sin2ϕ_0)
to order √(Ω_K).
For Ω_m = 0.3, Ω_Λ = 0.7, A_0(λ_r) = 1 / (1+z_r), and z_r=1090 numerical evaluation of (<ref>) gives S(λ_r) ≈ -3.19.
As in Section <ref>, we can decompose the observed temperature into spherical harmonics to constrain Ω_K.
This decomposition for (<ref>) is more involved than in the previous section because the induced fluctuations cannot be represented with finitely many harmonics.
We therefore save the details of this decomposition for <ref>, and quote the results:
T(Ω) ≈ T_0 { 1 + √(3π/5) √(Ω_K) S(λ_r) ∑_{ℓ = 3, ℓ odd}^{∞} √(2ℓ+1) √((ℓ-2)!/(ℓ+2)!) i[Y_ℓ2(Ω) - Y_ℓ-2(Ω)] }.
The corresponding normalized temperature fluctuations, Δ T(Ω_0), are shown in Fig. <ref> in units of √(Ω_K).
The resulting octopole has the largest amplitude of the induced modes, and the constraint on Ω_K that follows from the induced octopole is
Ω_K ≲ [50/(3 S(λ_r)^2)] D_3^obs/T_0^2 ≈ 2 × 10^{-10},
where we have used D_3^obs = 936.9 μK^2, again from Planck's best-fit angular power spectrum <cit.>.
Note that a much stronger constraint is reached here than for S^2 × ℝ and ℍ^2 × ℝ because modifications to the photon flux are introduced at 𝒪(√(Ω_K)) instead of 𝒪(Ω_K).
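As a numerical cross-check (ours, with the parameter values quoted above), S(λ_r) and the octopole bounds — including the Nil bound derived in the next subsection via its factor-of-5 relation to U(ℍ^2) — follow from a few lines of Python:

import numpy as np
from scipy.integrate import quad

Om, OL, zr = 0.3, 0.7, 1090.0
T0 = 2.726e6          # micro-Kelvin
D3 = 936.9            # observed octopole power, micro-K^2

S, _ = quad(lambda a: 1.0 / (a**2 * np.sqrt(Om / a**3 + OL)),
            1.0, 1.0 / (1.0 + zr))
print(S)                                    # ~ -3.19

print(50 * D3 / (3 * S**2 * T0**2))         # U(H^2): ~ 2e-10
print(10 * D3 / (3 * S**2 * T0**2))         # Nil:    ~ 4e-11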
The astute reader will note that the induced temperature fluctuations extend to arbitrarily high ℓ, with D_ℓ ∝ ℓ^{-1}, whereas the conventional damping tail of CMB temperature falls as exp(-ℓ^2/ℓ_D^2) (for some cosmological-parameter dependent value of ℓ_D).
This implies that, above some threshold ℓ, the induced D_ℓ will exceed the usual predicted values, and ever-stricter limits on Ω_K will result from measurements of the CMB above that threshold.
For the Planck best-fit FLRW cosmological parameters, that threshold is at ℓ > 4000, and thus somewhat above the highest ℓ that have been measured.
§.§ Nil
The constraint on the curvature of the Nil spacetime largely mirrors that of U(ℍ^2).
Here
-κ = -(3Ω_K/k^(0)) H_0^2 = 12 Ω_K H_0^2,
a factor of 5 larger than in U(ℍ^2), so expressing all coordinates in units of H_0^{-1} the metric is
ds^2 = -dt^2 + a(t)^2 dx^2 + b(t)[a(t)(1 + 12Ω_K x^2) dy^2 + b(t) dz^2 - 4√3 a(t) √(Ω_K) x dy dz].
Though this differs from the metric <ref> for U(ℍ^2), expanding to order √(Ω_K) gives
ds^2 ≈ -dt^2 + A^(0)(t)^2 [dx^2 + dy^2 + dz^2 - 4√3 √(Ω_K) x dy dz].
This is the same as the 𝒪(√(Ω_K)) metric <ref> for U(ℍ^2), with the off-diagonal term flipped in sign and multiplied by √5.
It is unsurprising, then, that the modes induced in the observed CMB temperature strongly resemble those in U(ℍ^2),
T(θ_0, ϕ_0) ≈ T_0 [1 - √5 √(Ω_K) S(λ_r) ∑_{ℓ = 3, ℓ odd}^{∞} α_ℓ (Y_ℓ2 - Y_ℓ-2)].
The constraint in the Nil spacetime is
Ω_K ≲ [10/(3 S(λ_r)^2)] D_3^obs/T_0^2 ≈ 4 × 10^{-11},
which is stronger than the constraint in U(ℍ^2) by the expected factor of 5, the ratio of the k^(0)'s between the two spacetimes.
§ SOLV
The last anisotropic Thurston spacetime is Solv with metric given by
ds^2 = -dt^2 + a(t)^2 [e^{2z√(-κ)} dx^2 + e^{-2z√(-κ)} dy^2] + b(t)^2 dz^2,
where we have identified a_1(t) = a_2(t) ≡ a(t) and a_3(t) = b(t).
As in the other geometries, we rewrite κ in terms of Ω_K and write the coordinates in units of H_0^{-1} to express the metric as
ds^2 = -dt^2 + a(t)^2 [e^{2√3 z √(Ω_K)} dx^2 + e^{-2√3 z √(Ω_K)} dy^2] + b(t)^2 dz^2.
As in U(ℍ^2) and Nil, Ω_K appears with half-integer powers in the metric.
It is natural to guess, then, that modifications to the CMB photon flux will appear at 𝒪(√(Ω_K)), but this is not the case.
To 𝒪(√(Ω_K)), Solv experiences isotropic expansion, and no distortions to the local photon flux are induced.
Therefore, to see effects in Solv it proves necessary to work to 𝒪(Ω_K).
Also in contrast to U(ℍ^2) and Nil, general progress can be made without immediately expanding in powers of √(Ω_K).
The geodesics of (<ref>) are given by
(1/a(t)^2) d[a(t)^2 x'(λ)]/dλ + 2√3 √(Ω_K) x'(λ) z'(λ) = 0,
(1/a(t)^2) d[a(t)^2 y'(λ)]/dλ - 2√3 √(Ω_K) y'(λ) z'(λ) = 0,
(1/b(t)^2) d[b(t)^2 z'(λ)]/dλ - √3 √(Ω_K) (a(t)/b(t))^2 [e^{2√3 z(λ)√(Ω_K)} x'(λ)^2 - e^{-2√3 z(λ)√(Ω_K)} y'(λ)^2] = 0,
along with a null condition (<ref>)
t'(λ)^2 = a(t)^2 [e^{2√3 z(λ)√(Ω_K)} x'(λ)^2 + e^{-2√3 z(λ)√(Ω_K)} y'(λ)^2] + b(t)^2 z'(λ)^2.
The spatial equations are solved exactly to give velocities along null geodesics
x'(λ) = c_x e^{-2√3 z(λ)√(Ω_K)}/a(λ)^2,
y'(λ) = c_y e^{2√3 z(λ)√(Ω_K)}/a(λ)^2,
z'(λ) = c_z/b(λ)^2 + √3 √(Ω_K) [c_x x(λ) - c_y y(λ)]/b(λ)^2,
which can be substituted into (<ref>) to find t'(λ).
In deriving the CMB photon flux today, we make the same definitions of angular coordinates (θ,ϕ) as above, permitting the use of (<ref>) and (<ref>).
Using these expressions we find that θ_r and ϕ_r depend on θ_0 and ϕ_0 as
cosθ_r ≈ cosθ_0 + √(3Ω_K) S(λ_r) sin^2θ_0 cos2ϕ_0 - 3Ω_K [2F(λ_r) + S(λ_r)^2] cosθ_0 sin^2θ_0,
tanϕ_r ≈ tanϕ_0 [1 + 2√(3Ω_K) S(λ_r) cosθ_0 + 3Ω_K S(λ_r)^2 (2cos^2θ_0 + sin^2θ_0 cos2ϕ_0)],
where ≈ denotes equivalence to 𝒪(Ω_K), F(λ) is defined in (<ref>), S(λ) is defined in (<ref>), and we have used the fact that
H_0 ∫_{λ_0}^{λ} S(λ') dλ'/A^(0)(λ')^2 = (1/2) S(λ)^2.
The determinant of the Jacobian matrix gives
dΩ_r/dΩ_0 ≈ 1 + 3Ω_K (1 + 3cos2θ_0) F(λ_r).
As noted above, the leading order correction is 𝒪(Ω_K).
The remaining geometric factor is
dA_r dt_r/(dA_0 dt_0) = a(t_r)^2 b(t_r)/(a(t_0)^2 b(t_0)) ≈ (A^(0)(λ_r)/A^(0)(λ_0))^3 [1 + 3Ω_K A^(1)(λ_r)/A^(0)(λ_r)].
Similarly, the energy factor that appears is
E_r^2 dE_r/(E_0^2 dE_0) ≈ (A^(0)(λ_0)/A^(0)(λ_r))^3 {1 - 3Ω_K [(1 + 3cos2θ_0) F(λ_r) + A^(1)(λ_r)/A^(0)(λ_r)]}.
Finally, combining the factors (<ref>), (<ref>), and (<ref>), we find they all cancel in the same, perhaps surprising manner as in Section <ref>.
This leads to the flux
Φ(E_0) ≈ p E_0^2/(exp[E_0/T(Ω_0)] - 1),
where
T(Ω_0) ≈ T_0 [1 + Ω_K (1 + 3cos2θ_0) F(λ_r)].
From this we can write the temperature anisotropy induced in Solv as
Δ T(Ω_0) ≡ T(Ω_0) - T_0 ≈ T_0 Ω_K (1 + 3cos2θ_0) F(λ_r),
twice that found in <ref> for S^2 × ℝ and ℍ^2 × ℝ.
Decomposed into spherical harmonics we again see only a Y_20(Ω) contribution
Δ T(Ω)/ T_0≈ 8 √(π/5)Ω_K F(λ_r)Y_20(Ω_0),
which is plotted in <ref> in units of Ω_K.
For z_r=1090, Ω_m=0.3, and Ω_Λ=0.7 we have seen that F(λ_r) ≈ -1.04 and S(λ_r) ≈ -3.19.
Finally, the bound on Ω_K is a factor of 2 stronger than that found for S^2 × ℝ and ℍ^2 × ℝ,
Ω_K ≲ [5/(8√3 |F(λ_r)|)] √(D_2^obs)/T_0 ≈ 1.9 × 10^{-6}.
§ CONCLUSION
In this paper, we have, for the first time, derived powerful constraints on Ω_K in spacetimes with homogeneous anisotropic spatial geometries of all five corresponding Thurston types – S^2 × ℝ, ℍ^2 × ℝ, U(ℍ^2), Nil, and Solv – when filled with standard homogeneous perfect-fluid dust and cosmological constant Λ.
We have shown that, in S^2 × ℝ and ℍ^2 × ℝ, CMB photons undergo direction-dependent redshift as they propagate from recombination through anisotropically expanding space.
In U(ℍ^2) and Nil, the underlying anisotropy of the three-geometry distorts the present-day local flux of CMB photons.
An observer could interpret these effects as a blackbody with an anisotropic temperature, with an amplitude of anisotropy proportional to a power of √(|Ω_K|).
We find that, in order for these temperature fluctuations to be no larger than the observed CMB anisotropies, |Ω_K| ≲ 10^{-5} in S^2 × ℝ and ℍ^2 × ℝ,
Ω_K ≲ 10^{-6} in Solv, and Ω_K ≲ 10^{-10} in U(ℍ^2) and Nil.
We have not commented on the impact of primordial perturbations on the observed CMB temperature.
Though it is theoretically possible for this anisotropy to be exactly canceled out by temperature fluctuations existing at the time of last scattering, it is highly unlikely that we live just at the moment when such cancellation occurs.
We emphasize that these constraints are valid for a Universe filled with perfect fluid dust and cosmological constant, but do not necessarily hold for less conventional stress-energy contents.
For instance, it has been shown that the CMB temperature maintains its isotropy in certain shear-free models sourced by a finely-tuned anisotropic fluid <cit.>.
We note however that the angular diameter and luminosity distances are still direction-dependent in the shear-free renditions of anisotropic Thurston spacetimes <cit.>, so it still may be possible to constrain anisotropy less stringently using other cosmological observables (e.g., see <cit.>).
Our powerful constraints come from comparison of observed CMB temperature fluctuations with
the direction-dependent photon flux that results from the evolution of an isotropic homogeneous blackbody distribution at the time of recombination into an anisotropic photon distribution today.
These constraints are therefore independent of the physics responsible for the usual primordial fluctuations.
We thank Yashar Akrami for early conversations identifying the questions addressed in this paper, and thank Johanna Nagy and John Ruhl for useful conversations about CMB measurements.
A.F.S. was partially supported by NASA ATP Grant RES240737;
G.D.S. by DOE grant DESC0009946.
§ Δ^μ_ν IN U(ℍ^2) AND NIL
Here we list out the entries of Δ^μ_ν in U(ℍ^2) and Nil.
For μ, ν∈{0,1}
Δ^0_0 = -Δ^1_1 = -x^2 (3Ω_K/4k^(0)) [ȧ/a - ḃ/b]^2 + 𝒪(Ω_K^2)
Δ^0_1 = -a^2 Δ^1_0 = -x (3Ω_K/2k^(0)) [ȧ/a - ḃ/b] + 𝒪(Ω_K^2),
where we have set a_1 = a_2 = a and a_3 = b.
The remaining nontrivial elements of Δ^μ _ν are
Δ^2_2 = -x^2 (3Ω_K/4k^(0)) [(ȧ/a)^2 - (ḃ/b)^2 + 2ä/a - 2b̈/b] + 𝒪(Ω_K^2)
Δ^2_3 = ζ x √(-3Ω_K/4k^(0)) (b/a) [-ȧḃ/(ab) + (ḃ/b)^2 - ä/a + b̈/b] + 𝒪(Ω_K^{3/2})
Δ^3_2 = ζ x √(-3Ω_K/4k^(0)) (a/b) [2(ȧ/a)^2 - 3ȧḃ/(ab) + (ḃ/b)^2 + ä/a - b̈/b] + 𝒪(Ω_K^{3/2})
Δ^3_3 = x^2 (3Ω_K/4k^(0)) [3(ȧ/a)^2 - 4ȧḃ/(ab) + (ḃ/b)^2 + 2ä/a - 2b̈/b] + 𝒪(Ω_K^2),
where ζ = +1 in Nil and ζ = -1 in U(ℍ^2).
From (<ref>), we know a and b differ by an order-Ω_K correction, meaning (<ref>) and (<ref>) are of high enough order in Ω_K to be neglected.
We may verify the same is true for the other nontrivial elements by using (<ref>) to rewrite (<ref>)–(<ref>) as
Δ^2_2 = -Δ^3_3 = -x^2 (3Ω_K^2/2k^(0)) (K^(1) - K^(3)) (3ȦḞ/A + F̈) + 𝒪(Ω_K^2)
Δ^2_3 = -Δ^3_2 = -ζ x √(-3Ω_K^3/4k^(0)) (K^(1) - K^(3)) (3ȦḞ/A + F̈) + 𝒪(Ω_K^{3/2}),
where F = F[A(t)] is given by (<ref>).
§ TEMPERATURE FLUCTUATION AMPLITUDES IN U(ℍ^2) AND NIL
Here we derive the spherical harmonic representation of the temperature (<ref>) for U(ℍ^2).
This derivation can easily be extended to that for Nil given their similarities at 𝒪(√(Ω_K)).
We begin with (<ref>) written as
T(Ω) ≈ T_0 (1 - (1/2)√(3/5) √(Ω_K) S(λ_r) cosθ sin2ϕ),
where we have dropped subscripts from the angular components for brevity.
From the sin 2ϕ factor we immediately see that only harmonic modes with m=± 2 will contribute, and they will have equal amplitude and opposite sign.
Since T(Ω) is a real function we know that a_ℓ 2 = a_ℓ -2^* and thus a_ℓ 2 is imaginary.
Thus we only need to compute m=+2.
To begin,
a_ℓ2 ≈ -(1/2)√(3/5) √(Ω_K) S(λ_r) α_ℓ,
with
α_ℓ ≡ ∫ cosθ sin2ϕ Y_ℓ2^*(Ω) dΩ.
Using the standard functional form
Y_ℓ2(Ω) = √((2ℓ+1)/(4π)) √((ℓ-2)!/(ℓ+2)!) P_ℓ^2(cosθ) e^{2iϕ},
and recalling that P_ℓ^m(-x) = (-1)^{ℓ-m} P_ℓ^m(x), we see that α_ℓ = 0 for even ℓ.
Meanwhile, we will show below that
∫_{-1}^{1} x P_ℓ^2(x) dx = 4 for ℓ odd, and 0 for ℓ even.
The required integral over ϕ is
∫_0^{2π} sin2ϕ e^{-2iϕ} dϕ = -iπ.
Thus
α_ℓ = -2i √((2ℓ+1)π) √((ℓ-2)!/(ℓ+2)!) for ℓ odd,
α_ℓ = 0 for ℓ even.
Thus we find
a_ℓ2 ≈ i √(3(2ℓ+1)π/5) √((ℓ-2)!/(ℓ+2)!) S(λ_r) √(Ω_K) for ℓ odd, and a_ℓ2 = 0 for ℓ even,
so that
Δ T(Ω)/(√(Ω_K) T_0) ≈ √(3π/5) S(λ_r) ∑_{ℓ = 3, ℓ odd}^{∞} √(2ℓ+1) √((ℓ-2)!/(ℓ+2)!) i[Y_ℓ2(Ω) - Y_ℓ-2(Ω)],
as claimed in (<ref>).
The integral in (<ref>) can be evaluated as follows.
From the recursion relation
x P_ℓ^2(x) = (1/(2ℓ+1)) [(ℓ+2) P_{ℓ-1}^2(x) + (ℓ-1) P_{ℓ+1}^2(x)],
the required integral is
∫_{-1}^{1} x P_ℓ^2(x) dx = (1/(2ℓ+1)) ∫_{-1}^{1} [(ℓ+2) P_{ℓ-1}^2(x) + (ℓ-1) P_{ℓ+1}^2(x)] dx
≡ (1/(2ℓ+1)) [(ℓ+2) J_{ℓ-1} + (ℓ-1) J_{ℓ+1}],
where we have defined for all n ≥ 2
J_n ≡ ∫_{-1}^{1} P_n^2(x) dx.
This integral can be evaluated by first noting that from the definition of the associated Legendre functions in terms of the Legendre polynomials and Legendre's equation we have
P_n^2(x) = (1-x^2) d^2P_n(x)/dx^2 = 2x dP_n(x)/dx - n(n+1) P_n(x).
Employing the recursion relation
x dP_n(x)/dx = dP_{n-1}(x)/dx + n P_n(x),
we thus can write
P_n^2(x) = 2 dP_{n-1}(x)/dx - n(n-1) P_n(x).
We are left to evaluate
J_n = ∫_{-1}^{1} [2 dP_{n-1}(x)/dx - n(n-1) P_n(x)] dx.
Orthogonality of the Legendre polynomials shows that the second term in the integrand integrates to zero (for n≥ 2).
The first term in the integrand is a total derivative.
Since P_n(-1) = (-1)^n and P_n(1) = 1, we arrive at
J_n = 2 P_{n-1}(x)|_{-1}^{1} = 2[1 - (-1)^{n-1}] = 0 for n odd,
4 for n even.
Finally, plugging this into (<ref>) we arrive at the result quoted in (<ref>).
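The parity selection rule and the value 4 can also be verified directly with a computer algebra system; a minimal check of our own:

import sympy as sp

x = sp.symbols('x')
for ell in range(2, 9):
    val = sp.integrate(x * sp.assoc_legendre(ell, 2, x), (x, -1, 1))
    print(ell, val)    # 2 -> 0, 3 -> 4, 4 -> 0, 5 -> 4, ...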
|
http://arxiv.org/abs/2409.02453v1 | 20240904051957 | FrameCorr: Adaptive, Autoencoder-based Neural Compression for Video Reconstruction in Resource and Timing Constrained Network Settings | [
"John Li",
"Shehab Sarar Ahmed",
"Deepak Nair"
] | eess.IV | [
"eess.IV",
"cs.CV",
"cs.ET",
"cs.MM"
] |
§ ABSTRACT
Despite the growing adoption of video processing via Internet of Things (IoT) devices due to their cost-effectiveness, transmitting captured data to nearby servers poses challenges due to varying timing constraints and scarcity of network bandwidth. Existing video compression methods face difficulties in recovering compressed data when incomplete data is provided. Here, we introduce FrameCorr, a deep-learning based solution that utilizes previously received data to predict the missing segments of a frame, enabling the reconstruction of a frame from partially received data.
Video Transmission, Progressive Compression, IoT
§ INTRODUCTION
Video-enabled IoT devices provide a comprehensive view of the environment by capturing visual data alongside traditional sensor data, facilitating real-time monitoring and decision-making across various domains. However, due to resource constraints, these devices often rely on edge servers for video processing. This reliance introduces timing constraints that may require interrupting frame transmission to transition to the next frame. Consequently, edge servers frequently face the challenge of reconstructing video frames from incomplete data. Thus, there is a pressing need for an efficient method on the server side to effectively handle missing data in video frames.
One way of handling this challenge is to encode each frame of the video into a compressed form before transmission.
There have been numerous compression techniques, both classical (Huffman Coding <cit.>, JPEG <cit.>, MPEG <cit.>, H.261 <cit.>, H.263 <cit.>, H.264 <cit.>, HEVC <cit.>) and neural-network based ones using multilayer perceptrons (MLPs) <cit.>, Convolutional Neural Networks (CNNs) <cit.> and AutoEncoders <cit.>, which reduce overall frame size.
However, none of these methods is explicitly designed to manage decompression with partially received data, which often becomes the only recourse when the sender is unable to transmit the complete data in the available time.
In one very recent method, Progressive Neural Compression (PNC) <cit.>, the authors propose a progressive encoding of images that can tolerate missing data.
However, it only applies to static images and relies on zero-filling to address missing data, a method that may result in suboptimal performance for videos due to not leveraging the inter-frame correlation between consecutive frames.
This paper presents FrameCorr, a deep-learning framework designed to exploit inter-frame correlation for efficiently reconstructing missing data within a frame.
Additionally, we implemented our own version of adaptive bitrate (ABR) video delivery on top of AVC to juxtapose its performance with that of FrameCorr, aiming to highlight differences in their methodologies. Unlike FrameCorr, which involves partitioning and extracting image frames and features from the videos, ABR solely adheres to the same video format without such extractions. We observed that AVC, when paired with ABR, outperforms deep learning-based methods in terms of both throughput and accuracy. Nonetheless, traditional algorithms like AVC exhibit limitations when confronted with incomplete data, rendering them unsuitable for tasks with strict timing requirements.
§ RELATED WORK
Conventional Image and Video Compression. Traditional methods of compression are primarily designed to reduce data volume necessary for image restoration. There are lossy compression techniques such as the coefficient quantization step in JPEG's 3-stage compression algorithm and lossless methods like discrete cosine transform (DCT) transformation in the same JPEG compression process <cit.>. Furthermore, JPEG has a progressive mode, allowing images to be compressed and reconstructed more flexibly based on an arbitrary amount of encoded data, where more data produces a more accurate reconstruction <cit.>.
However, such compression algorithms are only applicable to static images. Video content is addressed through other compression protocols. Low latency video coding and compression is especially relevant in IoT/Edge computing systems due to the real time deadlines that these applications generally operate under. One such video compression method is the H.264 <cit.> codec, also referred to as AVC, a version of the MPEG standard that incorporates block-based motion compensation strategies and exploits spatial and temporal repetitions. H.264 uses a combination of I-frames (Intra frames: these are complete frames that contain full image information) and P-frames (Predictive frames: these encode the differences or changes between themselves and previous reference frames) for video compression. I-frames are periodically inserted in the video stream or at scene changes to provide starting points for decoding. P-frames exploit spatial and temporal redundancies by “predicting" image content based on previously decoded frames.
Adaptive Video Compression.
Adaptive video compression improves upon traditional methods by adjusting compression levels based on network conditions before transmission. A prominent example is adaptive bitrate (ABR) streaming, where video streams are compressed at various bitrates. Then, depending on the current network conditions, the appropriate bitrate-encoded videos are transmitted. Specific ABR protocols include HTTP Live Streaming (HLS) and Dynamic Adaptive Streaming over HTTP (DASH) <cit.>.
Neural Compression and Inference.
The advent of deep learning has ushered in innovative mechanisms for image compression, such as autoencoders (AEs) <cit.>, nested quantization, latent ordering <cit.>, and recurrent neural networks (RNNs) <cit.>. These methods enable variable compression, allowing for incremental refinement of image quality. Despite their superior compression efficacy, these neural models often demand significant computational resources, sometimes requiring several hours on GPU clusters during training, making them impractical for resource-limited IoT and edge devices.
Progressive Neural Compression.
Neural compression techniques have evolved to include progressive compression strategies, essential for adapting to fluctuating bandwidth conditions common in wireless sensor networks and distributed IoT applications. One such recent approach, Starfish <cit.>, introduces a method to enhance the resilience of neural compression to data transmission losses by adding random dropouts to its AE's bottleneck layer. Though Starfish does mitigate the impact of data loss, it lacks a mechanism for assessing and prioritizing the encoded features based on their importance for inference accuracy.
As aforementioned, PNC <cit.> was developed to improve classification accuracy for images within edge offloading environments, particularly when faced with temporal and bandwidth limitations. Diverging from existing methodologies, PNC dynamically adjusts to changes in bandwidth, allowing for efficient image classification by the edge server. It does so by training a multi-objective rateless autoencoder, tailored for multiple compression rates. PNC also implements a stochastic taildrop algorithm during training to form a compression solution that creates features ordered by importance in the inference process.
However, PNC is designed to work with static images, hence does not leverage the inherent correlation between video frames in the process of filling up the missing data.
§ SYSTEM MODEL
Our system architecture, as depicted in Figure <ref>, includes a resource-constrained system (e.g., an IoT device or low-power virtual machine), responsible for transmitting compressed video data. This data is then sent over a wireless network to a central edge server. At the server, a decompression algorithm—such as PNC or FrameCorr—is utilized to reconstruct the received compressed data (usually in bytes) into their original video or image frame formats as accurately as possible.
An essential consideration in the transmission of compressed frames is adherence to strict deadlines at the sender's end. Due to potentially limited network bandwidth, the client device may still be sending a frame when the subsequent frame becomes ready for transmission. Consequently, we enforce a deadline for each frame's transmission, requiring that transmission of the next frame commence promptly, even if a portion of the current frame remains unsent. As a result, the receiving server must be able to robustly reconstruct frames based on partially received data, a key component of FrameCorr.
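A minimal sketch of this sender-side deadline logic follows; the packetization scheme and the deadline value are assumptions, not the exact implementation.

```python
import time

def send_with_deadline(sock, packets, deadline_s):
    """Send one frame's packets until the per-frame deadline expires.

    Returns how many packets were delivered; the unsent tail is simply
    dropped so transmission of the next frame can start on time.
    """
    start = time.monotonic()
    sent = 0
    for pkt in packets:
        if time.monotonic() - start >= deadline_s:
            break  # deadline hit: abandon the rest of this frame
        sock.sendall(pkt)
        sent += 1
    return sent
```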
§ METHODS
§.§ Dataset
Our dataset was derived from the UCF Sports Action dataset <cit.>, which originally comprises videos showcasing 10 distinct actions. From this dataset, we selected 8 of the 10 actions. Subsequently, we partitioned the videos from each action into training, validation, and test sets. The distribution of videos across each category is detailed in Table <ref>.
§.§ AVC
Initially, we selected the AVC/H.264 codec as our baseline for video compression due to its widespread use in numerous applications, minimal data loss, and ability to maintain the original video format without the need for additional frame conversion. However, it exhibits limited error-resilience, especially in network environments with unstable bandwidth or strict timing constraints. Furthermore, AVC mandates transmitting the entire encoded video over the network; dynamic transmission of partial video segments is not possible.
For our experiments, we utilized the FFmpeg library with .mp4 as the video format.
§.§ ABR
To expand on the baseline H.264 method, we developed our own adaptive bitrate (ABR) video transmission implementation, which utilizes H.264 as the base encoding method and encodes content at various bitrates, specifically by tweaking the constant rate factor (CRF). In the FFmpeg library, a lower CRF (e.g., CRF=18) denotes a higher bitrate while a higher CRF (e.g., CRF=30) indicates a lower bitrate.
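As a rough sketch, the CRF ladder could be produced by invoking FFmpeg once per level; the specific CRF values and the file-naming scheme below are illustrative assumptions.

```python
import subprocess

CRF_LEVELS = [18, 23, 30]  # lower CRF -> higher bitrate (values are illustrative)

def encode_abr_variants(src):
    """Encode one source video into several H.264/CRF variants via FFmpeg."""
    outputs = []
    for crf in CRF_LEVELS:
        out = "%s_crf%d.mp4" % (src.rsplit(".", 1)[0], crf)
        subprocess.run(
            ["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-crf", str(crf), out],
            check=True,
        )
        outputs.append(out)
    return outputs  # the sender picks the variant matching current bandwidth
```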
§.§ PNC
PNC, as described in <cit.>, is a progressive encoding framework primarily designed for images. The PNC model undergoes a two-step training process. Initially, an autoencoder is trained to reconstruct images with high fidelity. Subsequently, the autoencoder is fine-tuned to optimize the accuracy of an image classifier using the reconstructed images. Throughout the training process, a stochastic tail-drop technique is employed to enhance the autoencoder's ability to reconstruct images from partially received data. In this technique, missing data is padded with zeros, ensuring that the decoder receives a fixed-size vector.
In our context, our focus is precisely on the reconstruction of video frames. Thus, we tailor the training of the autoencoder to prioritize minimizing the reconstruction error of these frames. To formalize this, let x_i denote the i^th captured frame, and E(x_i)=c_i represent the encoded data of x_i, computed by the IoT device. These encoded representations, denoted as c_i, are transmitted over the network. However, due to timing constraints, the sender may opt to switch to encoding and sending the next frame, thereby interrupting the transmission of the current frame. Consequently, the received data for the i^th frame is denoted as ċ_i. Upon receiving ċ_i, the receiver zero-pads it to match the dimension of c_i, resulting in ĉ_i. Subsequently, ĉ_i is passed through the decoder to reconstruct the frame, denoted as x̂_i. PNC is trained to minimize the mean squared error (MSE) between the original frame (x_i) and its reconstructed counterpart (x̂_i).
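The zero-padding and taildrop steps can be sketched as follows; the latent dimension of 10 matches the setup reported later, while the uniform drop distribution is our assumption (the actual taildrop schedule may differ).

```python
import numpy as np

LATENT_DIM = 10  # encoder output dimension used in the experiments

def zero_pad(received, dim=LATENT_DIM):
    """Pad a truncated feature vector to the decoder's fixed input size."""
    padded = np.zeros(dim, dtype=received.dtype)
    padded[: received.shape[0]] = received  # keep whatever features arrived
    return padded

def stochastic_taildrop(c, rng):
    """Training-time augmentation: drop a random-length tail, then zero-pad."""
    keep = rng.integers(1, c.shape[0] + 1)  # keep at least one feature
    return zero_pad(c[:keep], c.shape[0])
```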
In conclusion, although consecutive frames within a video usually exhibit correlation, PNC, being originally designed for images, does not leverage this correlation. Consequently, missing data is filled with zeros during reconstruction.
§.§ FrameCorr
As previously mentioned, given the inherent correlation present in consecutive frames, our aim is to enhance PNC by leveraging this relationship to fill in missing data rather than simply padding it with zeros.
This is where FrameCorr comes into play.
The architecture of FrameCorr along with the PNC autoencoder is illustrated in Figure <ref>. The model takes as input the encoded information from the preceding K frames (the ĉ_j's). It then predicts the encoded information for the current frame (c̃_i). To address the possibility of missing data, stochastic taildrop is applied to the ĉ_j's in the training phase. When there is a missing segment in the received data, the missing parts are filled in to produce ĉ_i, using the predicted value from the output of FrameCorr's decoder component.
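A minimal sketch of the fill-in step, assuming (as with taildrop) that the missing segment is the tail of the feature vector; the combination rule below is our reading of the description above, not the verbatim implementation.

```python
import numpy as np

def fill_missing(received, predicted):
    """Combine the received prefix of c_i with FrameCorr's prediction.

    Features that actually arrived are kept verbatim; the dropped tail is
    taken from the prediction computed from the previous K frames' encodings.
    """
    filled = predicted.copy()               # start from the predicted encoding
    filled[: received.shape[0]] = received  # overwrite with the received prefix
    return filled
```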
§ RESULTS
§.§ Experimental Setup
Our experimental setup utilized two virtual machines (VMs) within a remote cluster farm, modeling the IoT device and edge server in our system. Each VM was provisioned with 2 CPU cores, 4GB RAM, and 100GB of storage, running on the Red Hat Enterprise Linux 8 (64-bit) operating system. To match the software environment of the original PNC paper <cit.>, we configured the system with Python 3.8.7, TensorFlow 2.8 and other matching packages.
The VM emulating the IoT device ran the encoder, while the VM emulating the edge server handled the decoder and frame reconstruction. For networking, we employed a custom TCP connection initiated by the IoT device using the Python sockets library. All frames were iteratively passed through this connection. Each frame was compressed and individually chunked into packets, which were then combined and decompressed at the edge server.
To facilitate the reception of smaller frames without socket blocking, a 3-byte delimiter was added for identification. Additionally, a 3-byte ACK delimiter was used as an acknowledgment signal, allowing the sender and receiver to coordinate when data transmission is permitted.
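A toy version of this framing protocol is sketched below; the delimiter and ACK byte values are placeholders, and the scheme assumes the delimiter never occurs inside a payload (a simplification of the real implementation).

```python
import socket

DELIM = b"EOF"  # 3-byte frame delimiter (placeholder value)
ACK = b"ACK"    # 3-byte acknowledgment

def send_frame(sock, payload):
    sock.sendall(payload + DELIM)  # mark the end of this frame
    sock.recv(3)                   # block until the receiver ACKs

def recv_frame(sock):
    buf = b""
    while not buf.endswith(DELIM):  # accumulate until the delimiter arrives
        chunk = sock.recv(4096)
        if not chunk:
            break
        buf += chunk
    sock.sendall(ACK)               # let the sender proceed
    return buf[: -len(DELIM)]
```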
Network conditions were varied using the Linux Traffic Control Toolkit, a command-line tool that simulates network behavior such as delays, packet loss, and bandwidth limitations. A shell script modified the system to pre-set network configurations modeling different network qualities before initiating video transfer.
Specifically, we tested with a wide range of network conditions. However, for clarity and simplicity in our experiments, we categorized the network conditions into three levels: minimal, medium, and high congestion. The main adjustable parameters in the Linux Traffic Control Toolkit are the data rate (limiting the maximum bandwidth available), burst size (defining the initial amount of data that can be sent at higher speeds before throttling to the set data rate), and latency (the time a packet is held in the buffer before getting processed or dropped). For high network congestion, the data rate was capped at 1 megabit per second, the burst size was set at 32 kilobits, and the latency was set to 400 ms. For medium congestion, the rate was set at 10 megabits per second, with a burst size of 64 kilobits and a latency of 200 ms. Finally, the arguments for minimal congestion were a rate of 50 megabits per second, a burst size of 128 kilobits, and latency of 50 ms.
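The three profiles could be applied with the tc token-bucket filter roughly as follows; the network interface name is an assumption, and the exact qdisc configuration used in the experiments may differ.

```python
import subprocess

# (rate, burst, latency) per congestion level, mirroring the settings above.
PROFILES = {
    "high":    ("1mbit",  "32kbit",  "400ms"),
    "medium":  ("10mbit", "64kbit",  "200ms"),
    "minimal": ("50mbit", "128kbit", "50ms"),
}

def apply_profile(level, iface="eth0"):
    rate, burst, latency = PROFILES[level]
    # Remove any existing root qdisc, then install a token-bucket filter.
    subprocess.run(["tc", "qdisc", "del", "dev", iface, "root"], check=False)
    subprocess.run(["tc", "qdisc", "add", "dev", iface, "root", "tbf",
                    "rate", rate, "burst", burst, "latency", latency],
                   check=True)
```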
The general dataflow, including measurements for AVC, PNC, and FrameCorr, remained consistent across all experiments with slight modifications to accommodate specific algorithm requirements. For instance, zero-padding was implemented for missing data depending on the reconstruction algorithm used (required by PNC but not FrameCorr). Our code then measured mean squared error, network latency, bandwidth, and other system metrics.
We opted for a virtual machine (VM) environment in our experiments to gain precise control over network conditions. This approach provided flexibility to simulate various real-world IoT scenarios without the cost and complexity of physical hardware and wireless equipment. Future work involves migrating our testbed to real IoT devices operating on a wireless channel.
§.§ Reconstruction with Complete Data
We train both PNC and FrameCorr on the dataset mentioned in Section <ref> with the objective of minimizing the reconstruction error. The encoder output dimension is set to 10. During training, we utilize the validation dataset to compute the loss after each epoch. We checkpoint the model with the lowest validation loss observed thus far. The training was run for 15 epochs.
The number of bytes a video contains serves as an indirect metric for measuring overall throughput, where more bytes take longer to send.
This correlates with network bandwidth as higher bandwidth allows for higher throughput and vice versa.
We present the number of bytes of encoded information for the 18 videos in our test set for PNC, FrameCorr, and AVC in Figure <ref>. It is notable that AVC consistently requires fewer bytes to encode compared to PNC or FrameCorr.
Additionally, it is noteworthy that the number of bytes of encoded information for PNC remains the same as that of FrameCorr across videos. This consistency is attributed to their shared fixed encoder output dimension of 10.
The Mean Squared Error (MSE) is utilized as the metric to quantify the difference between a reconstructed frame and its original counterpart. Prior to passing through the encoding process, each pixel value is normalized to the range [0,1].
We compute the MSE of each video by first summing the squared differences of the pixel values between the original and reconstructed frames and then averaging those differences over the number of frames in each video. The MSE achieved by PNC, FrameCorr (no feature drops), and AVC is reported in Table <ref>. It turns out the MSE achieved by AVC is consistently lower than those achieved by PNC and FrameCorr, showing the success of traditional compression methods when no data is dropped.
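The per-video MSE computation described above amounts to the following; the array shapes are assumptions.

```python
import numpy as np

def video_mse(orig_frames, recon_frames):
    """MSE between a video's original and reconstructed frames.

    Both arrays are assumed to have shape (num_frames, H, W, C) with pixel
    values normalized to [0, 1]; squared differences are summed per frame
    and then averaged over the number of frames.
    """
    per_frame = ((orig_frames - recon_frames) ** 2).sum(axis=(1, 2, 3))
    return float(per_frame.mean())
```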
Lastly, we recorded the total time it took for each method to process the entire dataset of videos or video frames, as per Table <ref>. It is worth noting that both PNC and FrameCorr took significantly longer (overall more than 3x that of the highest-bitrate AVC encoding, whose CRF = 18) to process all the video frames. This is most likely because 1) a sequence of extracted image frames consists of more bytes than the actual videos themselves due to the innate compression techniques AVC embeds into the .mp4 videos, and 2) there is extra overhead associated with acknowledging packet transfer for not only every frame but also every feature. We also assumed that no timing constraint would be enforced, which also implies no feature vectors are dropped for PNC and FrameCorr.
§.§ Reconstruction with Partial Data
We quantified the percentage of video successfully transmitted for a single video clip under fluctuating network conditions and a set timing constraint (deadline), as documented in Table <ref>. Notably, transmission success was binary: either 0% or 100% of the video was successfully transferred for AVC, reflecting the integral encoding structure of .mp4 videos. Attempting to send a “partial” video would simply break its inherent structure and corrupt its data. In contrast, for PNC and FrameCorr, most if not all video frames were transmitted. However, some features were omitted. Specifically, under minimal congestion, no features were dropped; under medium congestion, an average of 1-2 features per frame were omitted; and under high congestion, an average of 3-4 features per frame were dropped.
AVC is unable to reconstruct videos with partial information since the codec requires complete data for the video to be properly decoded. As for PNC and FrameCorr, approximately 1-4 features (out of the 10 features) were dropped; we tasked each decoder with reconstructing the frame using the remaining features and reported the MSE thereafter.
The MSE achieved by PNC and FrameCorr for these scenarios is presented in Tables <ref> and <ref>. We report the results for FrameCorr trained with K=1, which gives the best MSE among values of K from 1 to 4.
Unexpectedly, PNC surpasses FrameCorr in nearly all video instances, indicating that simply zero-padding the absent segments performs admirably across most scenarios. Moreover, the MSE values rise with the escalation of dropped features, as anticipated. However, the marginal increase observed in both PNC and FrameCorr demonstrates the resilience of these approaches when confronted with missing data events.
§ DISCUSSION
No loss of information. Our results suggest that if we can assume no loss of information, traditional video compression methods such as AVC demonstrate robust performance in reconstructing compressed data. However, training deep learning models poses its own set of hurdles. Firstly, generating a representative dataset may be impractical due to the diverse and intricate nature of real-world data sources. Secondly, model training necessitates fine-tuning hyperparameters and substantial computational resources.
If the application can accommodate data delivery delays, conventional ABR algorithms can dynamically adjust the bitrate in low-bandwidth scenarios to facilitate network data transmission (simply switch to the lower bitrate-encoded video for transfer). Consequently, we contend that in most situations, adopting a state-of-the-art traditional video compression algorithm remains the preferable approach.
Information Loss: Reconstructing frames with incomplete data poses a significant challenge for many video compression algorithms. Traditional methods like AVC require complete encoded information, and any partial loss can result in data corruption. In contrast, deep learning models can handle missing information through zero-padding, pixel prediction, etc., albeit with compromised reconstruction performance. Surprisingly, our study reveals that FrameCorr underperforms compared to the state-of-the-art method, PNC. Several factors may account for this discrepancy:
* Training Discrepancy: We train FrameCorr to predict encoded information for a frame based on the encoded data of the previous K frames. However, it's possible that the encoded information space differs significantly between the training, validation, and test sets. As a result, FrameCorr may struggle to accurately predict the encoding for videos in the test set.
* Model Complexity: Our experiments indicate that setting K=1 yields the best MSE value. This suggests that the simple two-layer neural network architecture used in FrameCorr may not adequately capture the relationships among consecutive frames. Employing more sophisticated models like LSTMs could potentially improve performance in this context.
Potential Workflows for FrameCorr
Despite the limitations of FrameCorr, its deep learning-based frame reconstruction approach holds potential value for specific workflows. Consider a system severely constrained in network bandwidth (e.g., a consistently offline remote network) but possessing ample power and compute resources. In this scenario, sending highly condensed sets of features combined with reference frames, coupled with FrameCorr's reconstruction capabilities, could maximize the amount of usable data transmitted. This approach leverages FrameCorr's emphasis on local computation rather than network-intensive data transfer. While the high accessibility of the internet currently limits the prevalence of such use cases, there is value in exploring how to maximize the potential of limited data through reconstruction, even beyond networking-oriented scenarios.
§ FUTURE WORK
Although FrameCorr did not yield the most promising results, our extensive experimentation highlighted several opportunities to investigate alternative methodologies for improved outcomes and to expand our testing into more realistic scenarios.
Integration of FrameCorr with Adaptive Bitrate Streaming. One potential exploration is the integration of the FrameCorr paradigm with ABR-based techniques. By combining FrameCorr's capability to reconstruct frames from partially received data with ABR's dynamic bitrate adjustment, there may be improvements in video quality and resilience to network fluctuations, particularly congested network conditions.
Real-world Implementation on IoT Devices. Although our current experiments were conducted on machines hosted by a remote VM cluster, there is a need to validate our findings on actual IoT devices. Implementing and testing FrameCorr on physical IoT hardware, such as Raspberry Pi devices or other low-power embedded systems, can provide better insights into the practical challenges and performance impacts in real environments.
Furthermore, switching to live data capture or other similar workflows is another area of exploration that will not only scrutinize FrameCorr's flexibility in live, real-time streaming scenarios but also expose the system to greater I/O latency and storage limitations.
§ CONCLUSION
Despite the prevalent challenge in video capture and processing systems of being unable to transmit complete data due to constraints such as limited time and bandwidth, traditional and deep-learning-based approaches appear to be somewhat ineffective in addressing this issue. Through experimentation with AVC, PNC, and ultimately our extension of PNC, FrameCorr, we found that AVC is unable to cope with partially received data, while PNC and FrameCorr exhibit suboptimal performance.
Additionally, factors like power consumption, network variability, and hardware specifications demand highly specialized models and setups tailored to the specific use case. We hope this paper offers guidance on navigating the trade-offs for optimal model selection in other IoT/Edge-geared designs.
We are confident that FrameCorr represents a notable step forward in addressing the challenge of effectively managing incomplete data. The current outcomes can be attributed to the selection of a basic model for FrameCorr, which we believe can be remedied through the adoption of a more suitable model and meticulous infrastructure.
ArtiFade: Learning to Generate High-quality Subject from Blemished Images

Shuya Yang, Shaozhe Hao, Yukang Cao, Kwan-Yee K. Wong
§ ABSTRACT
Subject-driven text-to-image generation has witnessed remarkable advancements in its ability to learn and capture characteristics of a subject using only a limited number of images. However, existing methods commonly rely on high-quality images for training and may struggle to generate reasonable images when the input images are blemished by artifacts. This is primarily attributed to the inadequate capability of current techniques in distinguishing subject-related features from disruptive artifacts. In this paper, we introduce ArtiFade to tackle this issue and successfully generate high-quality artifact-free images from blemished datasets. Specifically, ArtiFade exploits fine-tuning of a pre-trained text-to-image model, aiming to remove artifacts. The elimination of artifacts is achieved by utilizing a specialized dataset that encompasses both unblemished images and their corresponding blemished counterparts during fine-tuning. ArtiFade also ensures the preservation of the original generative capabilities inherent within the diffusion model, thereby enhancing the overall performance of subject-driven methods in generating high-quality and artifact-free images. We further devise evaluation benchmarks tailored for this task. Through extensive qualitative and quantitative experiments, we demonstrate the generalizability of ArtiFade in effective artifact removal under both in-distribution and out-of-distribution scenarios.
§ INTRODUCTION
With the rapid advancement of generative diffusion models <cit.>, subject-driven text-to-image generation <cit.>, which aims to capture distinct characteristics of a subject by learning from a few images of the subject, has gained significant attention.
This approach empowers individuals to seamlessly incorporate their preferred subjects into diverse and visually captivating scenes by simply providing text conditions.
Representative works such as Textual Inversion <cit.> and DreamBooth <cit.> have shown promising results on this task. Specifically, Textual Inversion proposes to optimize a textual embedding to encode identity characteristics that provide rich subject information for subsequent generation. DreamBooth shares a similar idea but additionally fine-tunes the diffusion model to preserve more identity semantics. Plenty of successive efforts have been made to advance this task from various perspectives, including generation quality, compositionality, and efficiency <cit.>.
Both of the above-mentioned methods, along with their follow-up works, however, rely heavily on the presence of unblemished input images that contain only relevant identity information. Such images are often expensive or even unavailable in real-world applications.
Instead, in practical scenarios such as scraping web images of a desired subject, it is common to encounter images that are blemished by various visible artifacts such as watermarks, drawings, and stickers.
Additionally, there also exist invisible artifacts like adversarial noises <cit.> that are not easily detectable or removable using off-the-shelf tools. These artifacts can significantly impede the comprehensive learning of the subject and lead to a catastrophic decline in performance across multiple dimensions (see Fig. <ref>). This limitation arises from the feature confusion inherent in the existing subject-driven learning process. The process simultaneously captures subject-related features and disruptive artifact interference. It lacks the discriminative power to distinguish these two from each other, and fails to preserve the integrity of subject characteristics while mitigating negative effects caused by artifacts.
As blemished inputs are inevitable in applications, a pressing challenge emerges: Can we effectively perform subject-driven text-to-image generation using blemished images? We term this novel problem (i.e., generating subject-driven images from blemished inputs) as blemished subject-driven generation in this paper.
To answer the above question, we present ArtiFade, the first model to tackle blemished subject-driven generation by adapting vanilla subject-driven methods (e.g., Textual Inversion <cit.> and DreamBooth <cit.>) to effectively extract subject-specific information from blemished training data. The key objective of ArtiFade is to learn the implicit relationship between natural images and their blemished counterparts through alignment optimization.
Specifically, we introduce a specialized dataset construction method to create pairs of unblemished images and their corresponding blemished counterparts. These pairs can be applied to fine-tune various subject-driven approaches in the context of blemished subject-driven generation. Besides, we also observe that fine-tuning an extra learnable embedding in the textual space, named the artifact-free embedding, can enhance prompt fidelity in blemished subject-driven generation.
We further introduce an evaluation benchmark that encompasses (1) multiple test sets of blemished images with diverse artifacts, and (2) tailored metrics for accurately assessing the performance of blemished subject-driven generation methods. A thorough experimental evaluation shows that our method consistently outperforms other existing methods, both qualitatively and quantitatively. Notably, ArtiFade exhibits superb capabilities in handling out-of-distribution (OOD) scenarios involving diverse types of artifacts that are distinct from the training data. This inherent generalizability indicates that our model can effectively learn to discern and distinguish the patterns exhibited by artifacts and unblemished images, instead of overfitting to a specific type of artifacts.
In summary, our key contributions are as follows:
* We are the first to tackle the novel challenge of blemished subject-driven generation. To address this task, we propose ArtiFade, which fine-tunes diffusion models to align unblemished and blemished data.
* We introduce an evaluation benchmark tailored for effectively assessing the performance of blemished subject-driven generation techniques.
* We conduct extensive experiments and demonstrate that ArtiFade outperforms current methods significantly. We show the noteworthy generalizability of ArtiFade, which effectively addresses both in-distribution and out-of-distribution scenarios with various types of artifacts.
§ RELATED WORK
Text-to-image synthesis
Text-to-image generation has attracted considerable attention in recent years by leveraging Generative Adversarial Networks (GANs) <cit.> and diffusion models <cit.>. Reed et al. <cit.> were the first to integrate GANs into text-to-image generation. Since then, several influential works have been proposed <cit.>, demonstrating impressive results with improved resolution <cit.> and fidelity of fine details <cit.>. Diffusion models in text-to-image synthesis have also yielded remarkable results owing to their ability to generate precise and customized images that better align with individual text specifications <cit.>.
Subject-driven generation
Subject-driven generation has gained popularity due to its ability to generate personalized images based on a given set of subject images and text prompts. One prominent method in subject-driven generation is Textual Inversion <cit.>, which involves learning an embedding vector by minimizing the Latent Diffusion Model loss <cit.> on input images. The learned embedding vector can be effectively combined with text prompts, allowing seamless integration in the text-to-image generation process. Recent approaches <cit.> have significantly enhanced subject reconstruction fidelity by incorporating fine-tuning techniques.
Artifacts removal
Shadow and watermark removal are classic tasks in image processing and computer vision. At the early stage, most approaches for shadow removal or image recovery relied on the properties of intensity and illumination <cit.>. Some methods also incorporated color features to improve their results <cit.>. Deep learning techniques and Convolutional Neural Networks (CNNs) have played a significant role in advancing shadow removal methods and producing impressive results in recent years <cit.>. Several studies <cit.> have incorporated GANs to further enhance the results of shadow removal techniques. Moreover, with the increasing popularity of diffusion models in image generation, a novel diffusion-based method for shadow removal has recently been introduced <cit.>.
The most widely adopted methods for recovering concealed information from watermarked images include the application of generalized multi-image matting algorithms <cit.>, complemented by image inpainting techniques <cit.>, and the utilization of deep neural networks and CNNs <cit.>. Similar to shadow removal, GANs and Conditional GANs <cit.> are also widely used in watermark removal tasks <cit.>. Our work is closely related to these previously mentioned studies. We are the first to address the artifact issues in the realm of subject-driven text-to-image generation.
§ METHOD
Given a set of blemished input images, our objective is to eliminate their negative impacts on the quality of subject-driven image generation.
To achieve this goal, we present ArtiFade, an efficient framework that learns to discern and distinguish the patterns exhibited by various types of artifacts and unblemished images.
In this section, we focus exclusively on ArtiFade based on Textual Inversion. However, it is important to note that the ArtiFade framework can be generalized to other subject-driven generation methods.
As shown in Fig. <ref>, ArtiFade based on Textual Inversion incorporates two main components, namely the fine-tuning of partial parameters (i.e., key and value weights) in the diffusion model and the simultaneous optimization of an artifact-free embedding ⟨Φ⟩. We begin by discussing the preliminaries of the Latent Diffusion Model and Textual Inversion. In the following subsections, we elaborate on our automatic construction of the training dataset, which consists of both blemished and unblemished data, as illustrated in Sec. <ref>.
We then introduce Artifact Rectification Training, a method for fine-tuning the model to accommodate blemished images, as discussed in Sec. <ref>.
We finally present the use of ArtiFade for handling blemished images in Sec. <ref>.
Preliminary Latent Diffusion Model (LDM) <cit.> is a latent text-to-image diffusion model derived from Diffusion Denoising Probabilistic Model (DDPM) <cit.>.
LDM leverages a pre-trained autoencoder to map image features between the image and latent space. This autoencoder comprises an encoder ℰ, which transforms images into latent representations, and a decoder 𝒟, which converts latent representations back into images. The autoencoder is optimized using a set of images so that the reconstructed image x̂≈𝒟(ℰ(x)). Additionally, LDM introduces cross-attention layers <cit.> within the U-Net <cit.>, enabling the integration of text prompts as conditional information during the image generation process.
The LDM loss is defined as
ℒ_LDM := 𝔼_z ∼ℰ(ℐ), y, ϵ∼ N(0,1)[‖ϵ - ϵ_θ(z_t, t, y) ‖_2^2],
where ℰ encodes the image ℐ into the latent representation z. Here, z_t denotes the noisy latent representation at timestep t, ϵ_θ refers to the denoising network, and y represents the text condition that is passed to the cross-attention layer.
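For intuition, one evaluation of this objective can be sketched in PyTorch-like pseudocode; the module interfaces (encoder, scheduler, UNet) below are assumptions rather than a specific library's API.

```python
import torch
import torch.nn.functional as F

def ldm_loss(unet, encoder, text_encoder, scheduler, image, prompt_ids):
    """One sample of the LDM objective above (interfaces are assumed)."""
    z = encoder(image)                          # z = E(I)
    t = torch.randint(0, scheduler.num_steps, (z.shape[0],), device=z.device)
    eps = torch.randn_like(z)                   # noise ~ N(0, 1)
    z_t = scheduler.add_noise(z, eps, t)        # noisy latent at timestep t
    y = text_encoder(prompt_ids)                # text condition
    eps_pred = unet(z_t, t, y)                  # denoiser prediction
    return F.mse_loss(eps_pred, eps)
```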
Based on LDM, Textual Inversion <cit.> aims to capture the characteristics of a specific subject from a small set of images. Specifically, Textual Inversion learns a unique textual embedding by minimizing Eq. (<ref>) on a few images that contain the particular subject. It can produce promising generation results with high-quality inputs, but fails on input images that are blemished by artifacts (see Fig. <ref>). This problem arises from the inherent limitation of Textual Inversion in learning shared characteristics exhibited in the input images without the capability in differentiating artifacts from unblemished subjects. In this paper, we aim to address this issue on deteriorated generation quality of Textual Inversion in the presence of blemished images.
§.§ Dataset Construction
Existing subject-driven generation methods operate under the assumption of unblemished training data, consisting of solely high-quality images devoid of any artifacts. However, this assumption does not align with real-world applications, where obtaining blemished images from the internet is a commonplace. To address this blemished subject-driven generation in this paper, we first construct a training set that incorporates both unblemished images and their blemished counterparts that are augmented with artifacts.
Augmentation of multiple artifacts
We construct our dataset by collecting a multi-subject set 𝒞 of N image subsets from existing works <cit.> and a set ℬ of L different artifacts:
𝒞 = {𝒮_i}_i=1^N, 𝒮_i = {ℐ_i,j}_j=1^M_i, ℬ={β_k}_k=1^L,
where 𝒮_i denotes the image subset corresponding to the ith subject, M_i is the total number of images in 𝒮_i, and β_k represents a type of artifact for image augmentation. Our dataset 𝒟 can then be constructed by applying each artifact β_k to each image ℐ in 𝒮_i separately, i.e.,
𝒮_i^β_k = {ℐ_i,j^β_k}_j=1^M_i, 𝒟 = {𝒮_i, {𝒮_i^β_k}_k=1^L}_i=1^N,
where ℐ_i,j^β_k is the counterpart of ℐ_i,j augmented with the specific artifact β_k. Some examples of original images and their augmented versions with distinct artifacts can be found in Fig. <ref>. See the Appendix for more visualizations.
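The construction of 𝒟 reduces to a simple nested loop; apply_artifact below is a hypothetical helper standing in for the actual augmentation code.

```python
def build_blemished_dataset(subjects, artifacts, apply_artifact):
    """Pair each subject's images with blemished counterparts.

    subjects:  {i: [image, ...]}   -> the subsets S_i
    artifacts: {k: params}         -> the artifact set B
    apply_artifact(image, params)  -> blemished image (hypothetical helper)
    """
    dataset = {}
    for i, images in subjects.items():
        blemished = {
            k: [apply_artifact(img, params) for img in images]
            for k, params in artifacts.items()
        }
        dataset[i] = {"clean": images, "blemished": blemished}
    return dataset
```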
Blemished textual embedding
For each blemished subset, we perform Textual Inversion to optimize a blemished textual embedding [𝚅_i^β_k], i.e.,

𝒮_i^β_k ⟶^Textual Inversion [𝚅_i^β_k], i=1,2,…,N; k=1,2,…,L.
By applying Eq. (<ref>) on N subsets with L types of artifacts, we end up with a set of N × L blemished textual embeddings 𝒱 = {[𝚅_i^β_k]}_i=1,k=1^N,L, which will be used in the subsequent model fine-tuning. As we have illustrated in Fig. <ref>, directly prompting the diffusion model with [𝚅_i^β_k] will lead to a significant decrease in generation quality. Consequently, our objective is to robustly handle blemished embeddings and effectively eliminate the detrimental impact of artifacts. We achieve this by devising a partial fine-tuning paradigm for the pre-trained diffusion model on the constructed training set 𝒟, as elaborated in the following subsection.
§.§ Artifact Rectification Training
After establishing the curated dataset 𝒟, we embark on training a generalizable framework on 𝒟, capable of generating unblemished images using blemished textual embeddings.
To this end, we propose artifact rectification training, which consists of two key components, namely partial fine-tuning of a pre-trained diffusion model and the optimization of an artifact-free embedding, to eliminate the artifacts and distortions in the generated images.
We fine-tune only the partial parameters that are involved in processing the textual conditions. This strategy allows us to optimize the relevant components associated with the blemished textual embedding [𝚅_i^β_k]. Considering that only the key and value weights in the diffusion model's cross-attention layer are involved in the processing of the textual embedding, we choose to fine-tune these two types of parameters, W^k and W^v. Moreover, we find that optimizing an additional embedding, ⟨Φ⟩, in the textual space alongside the partial parameters could improve prompt fidelity by retaining the textual information of the model, as presented later in Sec. <ref>.
Training objective
During each iteration, we will first randomly sample an unblemished image ℐ_i,j from the training set 𝒟 and a type of artifact β_k ∈ℬ to obtain
the blemished textual embedding [𝚅_i^β_k] ∈𝒱 that is optimized on the blemished subset 𝒮_i^β_k.
Specifically, given the sampled blemished textual embedding [𝚅_i^β_k], we form the prompt “a photo of [𝚅_i^β_k]”, which will be input to the text encoder to acquire the text condition y_i^β_k.
Our optimization objective will then be defined as reconstructing the unblemished image ℐ_i,j by conditioning the denoising process on the text condition y_i^β_k.
Thus, we can formulate the final loss for training as
ℒ_ArtiFade := 𝔼_z ∼ℰ(ℐ_i,j), y_i^β_k, ϵ∼ N(0,1)
[‖ϵ - ϵ_{W^k, W^v, ⟨Φ⟩}(z_t, t, y_i^β_k)‖_2^2],
where {W^k, W^v, ⟨Φ⟩} is the set of trainable parameters of ArtiFade.
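Selecting this trainable set could look roughly as follows; the parameter names follow common UNet implementations (cross-attention layers often named attn2 with to_k/to_v projections), which is an assumption about the underlying codebase rather than ArtiFade's exact code.

```python
import torch

def select_trainable(unet, artifact_free_emb):
    """Freeze the diffusion model except cross-attention key/value weights."""
    trainable = [artifact_free_emb]  # the artifact-free embedding <Phi>
    for name, p in unet.named_parameters():
        if "attn2.to_k" in name or "attn2.to_v" in name:
            p.requires_grad_(True)   # W^k and W^v stay trainable
            trainable.append(p)
        else:
            p.requires_grad_(False)  # everything else frozen
    return trainable

# e.g. optimizer = torch.optim.AdamW(select_trainable(unet, phi), lr=3e-5)
```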
§.§ Subject-driven Generation with Blemished Images
After artifact rectification training, we obtain the ArtiFade model, ready for the task of blemished subject-driven generation.
Given a test image set 𝒮_test^β' in which all images are blemished by an arbitrary artifact β', the model can generate high-quality subject-driven images from the blemished samples with ease.
Specifically, we first obtain the blemished textual embedding [𝚅_test^β'] by applying Textual Inversion on the test set 𝒮_test^β'. We then simply infer the model with a given text prompt that includes the blemished textual embedding, i.e., “a photo of [𝚅_test^β']”. At the operational level, the sole distinction between our approach and vanilla Textual Inversion lies in inputting text prompts containing [𝚅_test^β'] into the fine-tuned ArtiFade model instead of the pre-trained diffusion model. This simple yet effective method resolves the issue of Textual Inversion's incapacity to handle blemished input images, bearing practical utility.
Details of ArtiFade models
We choose N= 20 subjects, including pets, plants, containers, toys, and wearable items to ensure a diverse range of categories.
We experiment with an ArtiFade model based on Textual Inversion trained with visible watermark artifacts, referred to below as the watermark-trained model.
Its training set involves L_WM = 10 types of watermarks, characterized by various fonts, orientations, colors, sizes, and text contents. Therefore, we obtain 200 blemished subsets in total within its training set. We fine-tune the model for a total of 16k steps.
§ EXPERIMENT
§.§ Implementation Details
We employ the pre-trained LDM <cit.> following the official implementation of Textual Inversion <cit.> as our base diffusion model. We train the blemished textual embeddings for 5k steps using Textual Inversion.
We use a learning rate of 5e-3 to optimize our artifact-free embedding and 3e-5 for the partial fine-tuning of the key and value weights. Note that all other parameters within the pre-trained diffusion model remain frozen. All experiments are conducted on 2 NVIDIA RTX 3090 GPUs. In the main paper, we focus on the comparison with Textual Inversion and DreamBooth to demonstrate the efficiency of our proposed contributions. See the Appendix for additional comparisons and applications.
§.§ Evaluation Benchmark
Test dataset
We construct the test dataset using 16 novel subjects that differ from the subjects in the training set. These subjects encompass a wide range of categories, including pets, plants, toys, transportation, furniture, and wearable items. We form the visible test artifacts into two categories: (1) in-distribution (ID) watermarks of the same types as the training data, and (2) out-of-distribution (OOD) watermarks of types different from the training data.
Within each of the ID and OOD categories, we synthesize 5 distinct artifacts, resulting in 80 test sets.
Evaluation metrics
We evaluate the performance of blemished subject-driven generation from three perspectives: (1) the fidelity of subject reconstruction, (2) the fidelity of text conditioning, and (3) the effectiveness of mitigating the negative impacts of artifacts. Following common practice <cit.>, we use CLIP <cit.> and DINO <cit.> similarities for measuring these metrics. For the first metric, we calculate the CLIP and DINO similarity between the generated images and the unblemished version of the input images, respectively denoted as ICLIP and IDINO. For the second metric, we calculate the CLIP similarity between the generated images and the text prompt, denoted as TCLIP. For the third metric, we calculate the relative ratio of similarities between generated images and unblemished input images compared to their blemished versions, defined as
R^CLIP = I^CLIP / I^CLIP_β, R^DINO = I^DINO / I^DINO_β,
where I^CLIP_β and I^DINO_β respectively denote CLIP and DINO similarities between the generated images and the blemished input images. A relative ratio greater than 1 indicates that generated images resemble unblemished images more than blemished counterparts, suggesting fewer artifacts. Conversely, a ratio less than 1 indicates that generated images are heavily distorted with more artifacts. We use DINO ViT-S/16 <cit.> and CLIP ViT-B/32 <cit.> to compute all metrics.
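Given embedding matrices for the generated, unblemished, and blemished images, each ratio reduces to a quotient of mean pairwise cosine similarities; a minimal sketch, with embedding extraction assumed to be done elsewhere:

```python
import numpy as np

def mean_cosine(a, b):
    """Mean pairwise cosine similarity between two sets of row embeddings."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return float((a @ b.T).mean())

def relative_ratio(gen, clean, blemished):
    """R = sim(gen, clean) / sim(gen, blemished); >1 means fewer artifacts."""
    return mean_cosine(gen, clean) / mean_cosine(gen, blemished)
```

The same function applies to both CLIP and DINO embeddings.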
§.§ Quantitative Comparisons
We conduct both in-distribution and out-of-distribution quantitative evaluations of our method and compare it to Textual Inversion with blemished embeddings. We additionally report the results using Textual Inversion on unblemished images as a reference, although it is not a direct comparison to our model.
In-distribution (ID) analysis We consider the in-distribution scenarios by testing on .
In Tab. <ref>, we can observe that the use of blemished embeddings in Textual Inversion leads to a comprehensive performance decline, including: (1) lower subject reconstruction fidelity (i.e., IDINO and ICLIP) due to subject distortion in image generation; (2) lower efficiency of artifact removal (i.e., RDINO and RCLIP) due to the inability to remove artifacts; and (3) lower prompt fidelity (i.e., TCLIP), since the prompt-guided background is unrecognizable due to blemishing artifacts.
In contrast, our method consistently achieves higher scores than Textual Inversion with blemished embeddings across the board, demonstrating the efficiency of ArtiFade in various aspects.
Out-of-distribution (OOD) analysis We find that ArtiFade possesses the capability to handle out-of-distribution scenarios, owing to its training with watermarks of diverse types.
We consider the out-of-distribution (OOD) scenarios for ArtiFade by testing it on the OOD watermark test sets,
as presented in Tab. <ref>. Similar to ID evaluation, all of our metrics yield higher results than Textual Inversion with blemished embeddings. These results further demonstrate the generalizability of our method.
§.§ Qualitative Comparisons
We present qualitative comparisons between the output generated via ArtiFade and Textual Inversion with blemished textual embeddings, including in-distribution scenarios in Fig. <ref> and out-of-distribution scenarios in Fig. <ref>.
In-distribution analysis
The images generated by Textual Inversion exhibit noticeable limitations when using blemished textual embeddings. Specifically, as depicted in Fig. <ref>, all rows predominantly exhibit cases of incorrect backgrounds that are highly polluted by watermarks. By using ArtiFade, we are able to eliminate the background watermarks.
Out-of-distribution analysis
In addition, we conduct experiments with our watermark-trained model to showcase its capability to remove out-of-distribution watermarks, as shown in Fig. <ref>. It is important to note that in the first row, the watermark in the input images may not be easily noticed by human eyes upon initial inspection due to the small font size and high image resolution. However, these artifacts have a significant effect when used to train blemished embeddings for generating images. ArtiFade effectively eliminates the artifacts in the generated images, improving reconstruction fidelity and background accuracy, hence leading to substantial enhancements in overall visual quality.
§.§ ArtiFade with DreamBooth
The ArtiFade fine-tuning framework is not limited to Textual Inversion with textual embedding; it can also be generalized to DreamBooth. We use the same training dataset and blemished subsets as in the case of the watermark-trained model (i.e., N=20, L_WM=10). The vanilla DreamBooth fine-tunes the whole UNet model, which conflicts with the fine-tuning parameters of ArtiFade. We therefore use DreamBooth with low-rank approximation (LoRA)[<https://huggingface.co./docs/peft/main/en/task_guides/dreambooth_lora>] to train LoRA adapters <cit.> for the text encoder, value, and query weights of the diffusion model for each blemished subset using Stable Diffusion v1-5. For simplicity, we will use DreamBooth to refer to DreamBooth with LoRA below. During the fine-tuning of DreamBooth-based ArtiFade, we load the pre-trained adapters and only unfreeze the key weights, since the value weights are reserved for DreamBooth subject information. In Tab. <ref>, it is evident that our method, based on DreamBooth, yields the highest scores among all cases. Our method also maintains DreamBooth's advantages in generating images with higher subject fidelity and more accurate text prompting, outperforming ArtiFade with Textual Inversion. We show some qualitative results in Fig. <ref>.
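The DreamBooth-based variant only changes which projections stay trainable; reusing the naming assumption from the earlier sketch, it might look like:

```python
def select_trainable_db(unet):
    """DreamBooth-based ArtiFade: tune only cross-attention key weights.

    Value weights carry the LoRA-learned subject information and therefore
    stay frozen (parameter naming as assumed in select_trainable above).
    """
    trainable = []
    for name, p in unet.named_parameters():
        keep = "attn2.to_k" in name
        p.requires_grad_(keep)
        if keep:
            trainable.append(p)
    return trainable
```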
§.§ Invisible Artifacts Blemished Subject Generation
ArtiFade demonstrates exceptional performance in handling subjects characterized by intricate features and blemished by imperceptible artifacts. We collect 20 human figure datasets from the VGGFace2 dataset <cit.>. We then use the Anti-DreamBooth <cit.> ASPL method to add adversarial noises to each group of images, producing 20 blemished datasets for fine-tuning a DreamBooth-based ArtiFade model. The model is fine-tuned for 12k steps. As illustrated in Fig. <ref>, our approach surpasses the DreamBooth in differentiating the learning of adversarial noises from human face features. In contrast to DreamBooth, which is fooled into overfitting adversarial noises, thereby generating images with a heavily polluted background, our model reconstructs human figures in image generation while maintaining high fidelity through text prompting.
§.§ Ablation Studies
We conduct ablation studies to demonstrate the efficiency of our method by comparing it with three alternative variants, which encompass (1) 𝚅𝚊𝚛_𝙰, where we solely fine-tune the artifact-free embedding; (2) 𝚅𝚊𝚛_𝙱, where we fine-tune parameters related to image features, i.e., query weights W^q, along with the artifact-free embedding; and (3) 𝚅𝚊𝚛_𝙲, where we fine-tune the key and value weights, i.e., W^k and W^v, exclusively. We use our watermark-trained model and compare it with the other variants by testing on the ID watermark test sets.
Effect of partial fine-tuning As shown in Tab. <ref>, compared to 𝚅𝚊𝚛_𝙰, our full method yields higher scores on all metrics by a significant margin, except for RDINO. This is reasonable, as the artifact-free embedding can easily overfit to the training data, resulting in generated images that resemble a fusion of training images (Fig. <ref>, 𝚅𝚊𝚛_𝙰). As a result, the denominator of RDINO, namely the similarity between the generated image and the blemished image, is significantly decreased, leading to a high RDINO. For a similar reason, 𝚅𝚊𝚛_𝙰 shows the lowest IDINO, ICLIP, and TCLIP among all variants, indicating that it fails to reconstruct the correct subject.
Overall, both quantitative and qualitative evaluation showcases that solely optimizing the artifact-free embedding is insufficient to capture the distinct characteristics presented in the blemished input image, demonstrating the necessity of partial fine-tuning.
Effect of fine-tuning key and value weights As shown in Tab. <ref> and Fig. <ref>, 𝚅𝚊𝚛_𝙱 yields unsatisfactory outcomes in all aspects compared to ours. The lower RDINO and RCLIP suggest that the generated images retain artifact-like features and bear a closer resemblance to the blemished subsets. Furthermore, the reduced TCLIP indicates diminished prompt fidelity, as the approach fails to accurately reconstruct the subject from the blemished embeddings, which is also evidenced by Fig. <ref>. These findings suggest that fine-tuning the parameters associated with text features yields superior enhancements in terms of artifact removal and prompt fidelity.
Effect of the artifact-free embedding With 𝚅𝚊𝚛_𝙲, we exclude the optimization of the artifact-free embedding. In Tab. <ref>, we can observe that 𝚅𝚊𝚛_𝙲 yields higher IDINO and ICLIP but lower RDINO and RCLIP compared to our full method, which indicates that this variant achieves higher subject fidelity but lower efficiency in eliminating artifacts when generating images. Since our primary objective is to generate artifact-free images from blemished textual embeddings, our method chooses to trade off subject reconstruction fidelity for the ability to remove artifacts. Additionally, this variant produces lower TCLIP than ours, suggesting that the artifact-free embedding effectively improves the model's capability to better preserve text information (see Fig. <ref>).
§ MORE APPLICATIONS
We apply ArtiFade to more artifact cases, such as stickers and glass effects, showcasing its broad applicability.
Sticker removal. In Fig. <ref>, we test ArtiFade on input images that are blemished by cartoon stickers.
The cartoon sticker exhibits randomized dimensions and is positioned arbitrarily within each image.
ArtiFade can effectively eliminate the stickers while concurrently addressing improper stylistic issues encountered during image generation.
Glass effect removal.
We further test ArtiFade on input images that are blemished by a glass effect in Fig. <ref>.
We apply a fluted glass effect to images to replicate real-life scenarios where individuals capture photographs of subjects positioned behind fluted glass. This glass can have specific reflections and blurring, which may compromise the overall quality of image generation when using Textual Inversion. The use of our model can fix the distortions of the subjects and the unexpected background problem, significantly improving image quality.
§ CONCLUSION
In conclusion, we introduce ArtiFade to address the novel problem of generating high-quality and artifact-free images in blemished subject-driven generation. Our approach involves fine-tuning a diffusion model along with an artifact-free embedding to learn the alignment between unblemished images and blemished information.
We present an evaluation benchmark to thoroughly assess a model's capability in the task of blemished subject-driven generation. We demonstrate the effectiveness of ArtiFade in removing artifacts and addressing distortions in subject reconstruction under both in-distribution and out-of-distribution scenarios.
Appendix
§ TRAINING DATASET DETAILS
Our training dataset consists of 20 training subjects, used for the fine-tuning stage of our models. We show an example image of each subject in Fig. <ref>. In Fig. <ref>, we showcase several unblemished images alongside their corresponding blemished versions, each featuring one of the 10 watermark types.
§ TEST DATASET DETAILS
In Fig. <ref>, we illustrate our ID watermark types (see the first row) and OOD watermark types (see the second row). The ID watermarks are chosen from the training watermarks displayed in Fig. <ref>. On the other hand, the OOD watermarks differ in font size, orientation, content, or color from all the training watermarks presented in Fig. <ref>.
§ ANALYSIS OF WATERMARK DENSITY
In Fig. <ref>, we present results to illustrate the impact of varying watermark densities (i.e., varying qualities), highlighting the robust ability of our model to remove watermarks under all conditions.
§ ANALYSIS OF UNBLEMISHED IMAGE RATIO
We employ our watermark-trained model to evaluate the performance when the input images contain different proportions of unblemished images. We test our model and Textual Inversion on five ratios of unblemished images: 100%, 75%, 50%, 25%, and 0%. The results are shown in Fig. <ref>.
Notably, even when there is only one blemished image in the second column example, the impact on Textual Inversion is already evident, which deteriorates as the ratio decreases. Instead, our method effectively eliminates artifacts in all settings of unblemished image ratio, demonstrating its versatility in real-life scenarios.
§ ANALYSIS OF TRAINING DATASET SIZE
We conduct an analysis to investigate the impact of the number of training subjects (i.e., the size of the training dataset) on the performance of our model. We utilize the same set of artifacts, L_WM = 10, as described in the Method section of the main paper. We construct blemished training datasets in four different sizes: (1) with 5 subjects, (2) with 10 subjects, (3) with 15 subjects, and (4) with 20 subjects. We generate 50, 100, 150, and 200 blemished datasets for each of these cases, respectively. Subsequently, we fine-tune four distinct ArtiFade models, each with 16k training steps.
We compare the models trained using different data sizes under the in-distribution scenario (see Fig. <ref>) and under the out-of-distribution scenario (see Fig. <ref>). We note that when the number of training subjects is less than 15, IDINO and TCLIP are relatively lower than the other two cases in both ID and OOD scenarios. This observation can be attributed to a significant likelihood of subject or background overfitting during the reconstruction and image synthesis processes, as visually illustrated in Fig. <ref> and Fig. <ref>. However, as the number of training subjects reaches or exceeds 15, we observe a convergence in the values of IDINO and TCLIP, indicating a reduction in subject overfitting. Regarding RDINO, we note that all cases exhibit values greater than one, with a slightly increasing trend as the number of training subjects rises.
§ FAILURE CASES
We present several failure cases when applying ArtiFade based on Textual Inversion. We demonstrate the limitations of our method in Fig. <ref>. Despite the model's ability to eliminate watermarks, we still encounter issues with incorrect subject color, as shown in Fig. <ref>, which arise due to the influence of the watermark color. We also encounter incorrect subject identity in some cases, as demonstrated in Fig. <ref>. One possible reason is that the watermarks significantly contaminate the images, causing the learning process of the embedding to focus on the contaminated visual appearance instead of the intact subject.
Another failure case is subject overfitting, as shown in Fig. <ref>. In this case, the constructed subject overfits with a similar subject type that appears in the training dataset. This problem occurs because the blemished embedding of the testing subject closely resembles some blemished embeddings of the training subjects. Surprisingly, we find those problems can be solved by using ArtiFade based on DreamBooth, which is mentioned in Sec. 4.5. Therefore, we recommend using ArtiFade based on DreamBooth when encountering the limitations mentioned above.
§ ADDITIONAL COMPARISON WITH TEXTUAL INVERSION
We use the same training subjects with N=20 from Sec. 3.3 to train an ArtiFade model using red circle artifacts, referred to below as the RC-trained model. For its training set, due to the simplicity of red circles, we only synthesize a single blemished subset (i.e., L_RC = 1) for each subject, deriving 20 blemished subsets in total. We augment each image with a red circle mark that is randomly scaled and positioned on the source image. Considering the small scale of this dataset, we fine-tune the model for only 8k steps.
We further introduce a red-circle test benchmark, which applies only one type of artifact (i.e., a red circle) to our 16 test subjects, resulting in 16 test sets. We test both the watermark-trained and RC-trained models on this benchmark. The quantitative and qualitative results are shown in Tab. <ref> and Fig. <ref>, respectively.
Quantitative results analysis.
From Tab. <ref>, we can observe that both models yield higher results in nearly all cases than Textual Inversion <cit.> with blemished inputs, showing the capability of our models to eliminate artifacts and generate subjects with higher fidelity. It is important to note that the red-circle benchmark is considered out-of-distribution with respect to the watermark-trained model. Nevertheless, the metrics produced by the watermark-trained model remain comparable to those of the RC-trained model, with only a minor difference observed. These results provide additional evidence supporting the generalizability of our approach.
Qualitative results analysis. As illustrated in Fig. <ref>, Textual Inversion struggles with accurate color reconstruction. It also showcases subject distortions and introduces red-circle-like artifacts during image generation when using blemished embeddings. In contrast, our watermark-trained (see Fig. <ref>) and RC-trained (see Fig. <ref>) models are capable of generating high-quality images that accurately reconstruct the color and identities of subjects without any interference from artifacts during image synthesis.
§ ADDITIONAL QUALITATIVE COMPARISONS
We present additional qualitative results comparing our ArtiFade models with Textual Inversion <cit.> and DreamBooth <cit.> in Fig. <ref>. We employ our Textual-Inversion-based model and the ArtiFade model based on DreamBooth mentioned in Sec. 4.5.
Textual Inversion generates images with distorted subjects and backgrounds contaminated by watermarks, whereas DreamBooth effectively captures intricate subject details but also accurately reproduces watermark patterns. In contrast, our models (i.e., TI-based and DB-based ArtiFade) generate images devoid of watermark pollution with correct subject identities for both in-distribution (see the first three rows in Fig. <ref>) and out-of-distribution (see the last two rows in Fig. <ref>) cases. Notably, our method based on DreamBooth preserves the high fidelity and finer detail reconstruction benefits of vanilla DreamBooth, even in the context of blemished subject-driven generation.
In Fig. <ref>, we show qualitative results for subjects with complex features (e.g., human faces) using our models, Textual Inversion, DreamBooth, and Break-a-Scene <cit.>. Break-a-Scene can separate multiple subjects inside one image, and we use it to generate human-only images. However, we find that Break-a-Scene fails to separate humans from artifacts, resulting in polluted images.
As a result, our methods (i.e., TI-based and DB-based ArtiFade) consistently surpass Textual Inversion, DreamBooth, and Break-a-Scene, achieving high-quality image generation of complex data in in-distribution cases, as shown in the first two rows of Fig. <ref>, and in out-of-distribution cases, as illustrated in the last row of Fig. <ref>.
§ MORE APPLICATIONS
We explore further applications of our model, demonstrating its versatility beyond watermark removal. As shown in Fig. <ref>, our model is capable of effectively eliminating unwanted artifacts from images, enhancing their visual quality. Furthermore, it can recover incorrect image styles induced by artifacts, thereby restoring the intended style of the images.
§ SOCIAL IMPACT
Our research addresses the emerging challenge of generating content from images with embedded watermarks, a scenario we term blemished subject-driven generation. Users often source images from the internet, some of which may contain watermarks intended to protect the original author's copyright and identity. However, our method is capable of removing various types of watermarks, potentially compromising the authorship and copyright protection. This could lead to increased instances of image piracy and the generation of illicit content. Hence, we advocate for legal compliance and the implementation of usage restrictions to govern the deployment of our technique and subsequent models in the future.
|
http://arxiv.org/abs/2409.02802v1 | 20240904152208 | Boosting Certificate Robustness for Time Series Classification with Efficient Self-Ensemble | [
"Chang Dong",
"Zhengyang Li",
"Liangwei Zheng",
"Weitong Chen",
"Wei Emma Zhang"
] | cs.LG | [
"cs.LG",
"cs.CR",
"stat.ML",
"H.3.3"
] |
Chang Dong (0009-0005-1495-6534, [email protected]); Zhengyang Li (0000-0003-3869-5154, [email protected]); Liangwei Zheng (0009-0007-2793-8110, [email protected]); Weitong Chen (0000-0003-1001-7925, [email protected], corresponding author); Wei Emma Zhang (0000-0002-0406-5974, [email protected]). All authors: The University of Adelaide, Adelaide, SA, Australia.
§ ABSTRACT
Recently, the issue of adversarial robustness in the time series domain has garnered significant attention. However, the available defense mechanisms remain limited, with adversarial training being the predominant approach, though it does not provide theoretical guarantees. Randomized Smoothing has emerged as a standout method due to its ability to certify a provable lower bound on the robustness radius under ℓ_p-ball attacks. Recognizing its success, research in the time series domain has started focusing on these aspects. However, existing research predominantly focuses on time series forecasting, or on non-ℓ_p robustness via statistical feature augmentation for time series classification (TSC). Our review found that Randomized Smoothing performs modestly in TSC, struggling to provide effective assurances on datasets with poor robustness. Therefore, we propose a self-ensemble method to enhance the lower bound of the probability confidence of predicted labels by reducing the variance of classification margins, thereby certifying a larger radius. This approach also addresses the computational overhead of Deep Ensembles while remaining competitive and, in some cases, outperforming them in terms of robustness. Both theoretical analysis and experimental results validate the effectiveness of our method, demonstrating superior performance in robustness testing compared to baseline approaches.
CCS Concepts: [500] Information systems → Adversarial retrieval.
Boosting Certificate Robustness for Time Series Classification with Efficient Self-Ensemble
Wei Emma Zhang
September 9, 2024
===========================================================================================
§ INTRODUCTION
Background
In recent years, time series data has become increasingly prevalent. As we transition into the era of Industry 4.0, countless sensors generate vast volumes of time series data <cit.>. Correspondingly, deep neural networks (DNNs) have surged in popularity for time series classification (TSC) <cit.>, as they have achieved remarkable success in various domains, especially computer vision (CV) <cit.>. However, DNNs exhibit vulnerabilities to minor perturbations in input data, often leading to misclassification and indicating low resistance to external disruption. This issue has attracted significant attention from the research community in the TSC domain. Fawaz et al. <cit.> first addressed the weakness of TSC towards adversarial attacks on the UCR dataset. Rathore <cit.> introduced targeted and untargeted attacks in TSC. Pialla <cit.> introduced a smooth version of the attack to make it undetectable, which was further improved by Chang <cit.>, who enhanced the stealthiness by leveraging logits information. Ding <cit.> conducted effective black-box attacks on TSC, and Karim <cit.> used adversarial transformation networks to generate adversarial examples. Other research is progressively expanding the landscape of time series attacks.
In contrast, the development of defenses has been relatively limited. Nevertheless, some works have focused on defending against these attacks. Tariq <cit.> proposed using anomaly detection methods to identify adversarial patterns in time series data. Kühne <cit.> combined adversarial training and selective classification to defend against attacks. Abdu <cit.> leveraged random convolutional kernel transformations (ROCKET) as feature extractors to constitute a robust classifier. Siddiqui <cit.> reviewed various adversarial training techniques in TSC benchmarking.
Challenges
Despite substantial efforts to fortify models against such attacks, challenges persist. Although the above empirical defense methods can enhance robustness in some scenarios, they often fall short as new attacks continuously emerge, rendering existing defenses out of date <cit.>. Besides, none of the above methods provides a guarantee regarding the extent of perturbations a model can withstand. This motivated the study of provable robustness in DNNs, which aims to obtain a certified guarantee of resistance to perturbations. Randomized smoothing, one of the most promising directions in certified defense, was recently proposed by Cohen et al. <cit.>, Li et al. <cit.>, and Lecuyer et al. <cit.> with theoretical guarantees and empirical results. It is simple to transform an arbitrary base classifier f:ℝ^d →ℝ^k into a smoothed classifier g by training with i.i.d. Gaussian noise. The smoothed classifier g returns the most likely result according to the base classifier under x+ϵ∼𝒩(x, σ^2 I). In simpler terms, instead of predicting at a specific point x, the model predicts the outcomes for the points surrounding x according to Gaussian sampling and then combines these predictions through a voting process. This approach helps in making more robust and accurate predictions by considering the uncertainty around each point.
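To make the voting procedure concrete, here is a minimal PyTorch sketch of smoothed prediction by Monte-Carlo voting. It is an illustration rather than any paper's reference implementation; the base classifier f (a batch-in, logits-out callable), the sample count, and the batching scheme are all assumptions chosen for clarity.

```python
import torch

def smoothed_predict(f, x, sigma, n_samples=1000, batch=100):
    # Estimate g(x) = argmax_c P(f(x + eps) = c), eps ~ N(0, sigma^2 I),
    # by drawing Gaussian perturbations of x and hard-voting over the
    # base classifier's top-1 labels.
    votes = []
    with torch.no_grad():
        for _ in range(n_samples // batch):
            noise = sigma * torch.randn((batch,) + tuple(x.shape))
            logits = f(x.unsqueeze(0) + noise)        # (batch, k) class scores
            votes.append(logits.argmax(dim=1))
    votes = torch.cat(votes)
    return torch.bincount(votes).argmax().item()      # majority class wins
```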
Due to its achievement, in the field of time series, research on robustness has also begun to focus on these aspects. From 2022 to 2023, Yoon, et al. <cit.> and Liu, et al. <cit.> successively applied randomized smoothing techniques to univariate and multivariate time series forecasting tasks, thereby achieving comparatively good robust probabilistic time series forecasting. Additionally, Doppa, et al <cit.> proposed combining randomized smoothing with the statistical features of time series to enhance robustness under non-standard ℓ_p balls.
However, the aforementioned works <cit.> did not thoroughly evaluate the specific performance of Randomized Smoothing on various TSC architectures. Our preliminary experiments found that, in some adversarial attack cases, Randomized Smoothing did not provide effective defense as expected (see the Case Study in Section 6.3). So, how can the performance of Randomized Smoothing be improved? Currently, there are two main schools of thought: carefully designing the noise distribution, or enhancing the performance of the base classifier f. Training a better classifier is considerably easier than designing a new noise distribution.
For example, Salman et al. <cit.> combined adversarial training with Randomized Smoothing to enhance performance, and Zhai et al. <cit.> directly optimized the target task as a loss. These methods either require lengthy training times or involve complex optimization. Our intuition is more straightforward: improve the performance of the base classifier through variance reduction by aggregation, as in Miklós et al. <cit.>. They found that the Deep Ensemble method <cit.> can indeed enhance model performance; however, it involves training multiple base classifiers, which is computationally costly.
Motivation and Contribution
Given the challenges posed by existing methods, we sought to design a simpler approach that does not require multiple training iterations. Initially, we considered methods like MC dropout <cit.> and Masksembles <cit.>. MC dropout, being inherently stochastic during prediction, fails to provide theoretical guarantees. Masksembles, on the other hand, involve masking model weights and using a neural network to calculate scores for potential dropout, which complicates training and yields moderate results. Consequently, we propose a self-ensemble method that does not train multiple models directly but rather allows a single model to adapt to multiple data distributions, as illustrated in Figure <ref>. Our contribution can be summarized in four parts:
* Our method reduces training time m-fold (where m is the ensemble size). It augments the training of time series data with masks and then uses a few randomly fixed masks for voting-based classification during prediction.
* We provide a detailed theoretical proof of this approach, showing that the self-ensemble method reduces the variance of the classification margin of the base classifier, thereby enhancing the confidence lower bound of the top-1 class probability p_A. This is also observed in our experiments.
* To support our findings, we conduct extensive experiments, training the base classifier at different noise levels on the UCR benchmark datasets <cit.> using three different model architectures (CNN, RNN, Attention), and evaluate the performance of ensemble methods versus single models. Results show that under CNN and Attention models, self-ensemble significantly outperforms single models, achieving or even exceeding the performance of deep ensembles.
* We conduct a case study in Section 6.3 to assess the performance of these methods and benign models under PGD-ℓ_2 adversarial attacks with different projection radii, finding that self-ensemble can effectively resist adversarial attacks, especially for models trained on datasets with poor robustness.
§ RELATED WORKS
§.§ Adversarial Attack
In TSC, an adversarial attack refers to a malicious attempt to introduce slight perturbations to a time series x ∈ℝ^d to produce a closely related series x' ∈ℝ^d with the goal of altering the predicted label. This can be characterized by:
argmax {f(x)} ≠ argmax {f(x')},
x' = x + r, s.t. ‖r‖_2 ≪ ‖x‖_2.
Here, f(x) represents the predicted probability distribution over the labels for the input x. The perturbation r is intentionally small in magnitude relative to x as indicated by their norms.
Adversarial robustness refers to the ability to resist misclassification caused by adversarial perturbations.
§.§ Adversarial Defense
Adversarial defense can be categorized into empirical and certifiable defense. Early efforts primarily focus on empirical methods such as obfuscated gradients, model distillation, input transformation, adversarial detection, and adversarial training.
Some Empirical Defense Obfuscated gradients, a defense technique disrupting gradient continuity or hindering gradient acquisition to thwart attacks, were later successfully bypassed by Athalye et al. <cit.>, demonstrating their ineffectiveness. Model distillation <cit.> trains a teacher neural network on the original dataset and uses the teacher's class probabilities as soft targets for training a student network, enhancing resilience against adversarial attacks. However, adversaries may adapt their strategies if they know the distillation process. Other methods, like input transformation <cit.>, aim to convert adversarial samples into clean space using an "auxiliary module" for safe usage. Additionally, the rising popularity of diffusion techniques has led to the use of denoising techniques to remove noise from adversarial samples <cit.>. Anomaly detection also serves as a non-direct defense mechanism in adversarial settings <cit.>.
Adversarial Training Empirically, the most practical method is adversarial training. This method finds the loss-maximizing perturbation within a norm ball using Projected Gradient Descent (PGD) and then minimizes the loss at the perturbed input during the normal training process <cit.>. Alternating between these steps smooths the loss surface near the input point, leading to a smaller Lipschitz constant. This makes it harder for attackers to find adversarial patterns near the input. The PGD adversarial training process can be described as follows:
δ^* = argmax_{‖δ‖_p ≤ ϵ} L(f(x + δ), y),
where L is the loss function. The model is then trained to minimize this maximum loss:
min_θ 𝔼_{(x,y) ∼ 𝒟} [ max_{‖δ‖_p ≤ ϵ} L(f_θ(x + δ), y) ].
Here, θ represents the model parameters and 𝒟 the training data distribution. However, adversarial training is time-consuming and lacks a theoretical guarantee.
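As an illustration of the inner maximization, the following is a minimal PGD-ℓ_2 sketch in PyTorch. The step size rule, step count, and batch conventions are assumptions for clarity, not the settings used in this paper.

```python
import torch

def pgd_l2(f, x, y, eps, steps=10):
    # Approximate argmax_{||delta||_2 <= eps} L(f(x + delta), y)
    # by normalized gradient ascent with projection onto the l2-ball.
    alpha = 2.5 * eps / steps                      # assumed step size rule
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = torch.nn.functional.cross_entropy(f(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        flat = grad.flatten(1)
        g_norm = flat.norm(dim=1).clamp_min(1e-12).view(-1, *([1] * (x.dim() - 1)))
        delta = delta.detach() + alpha * grad / g_norm   # ascend the loss
        d_norm = delta.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1)))
        scale = (eps / d_norm).clamp(max=1.0)            # project onto the ball
        delta = (delta * scale).requires_grad_(True)
    return (x + delta).detach()
```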
Certified Defense
In comparison with empirical defense, certified defense provides a theoretical guarantee rather than experimental findings. It can be defined as follows. Given a classifier f : 𝒳 → 𝒴 = {1, …, k}, if f correctly classifies all samples to the same label within an ℓ_p-ball of radius r centered at x, then the classifier is robust around x against ℓ_p attacks of radius r. This indicates that the classifier maintains its predictive consistency for all inputs within this ball.
§ METHODOLOGY
§.§ Preliminary
Randomized Smoothing Randomized Smoothing has recently been proposed <cit.> for its power to resist adversarial attacks within a tight theoretically certified space. It is simple to transform an arbitrary base classifier f:ℝ^d →ℝ^k into a smoothed classifier g by training with i.i.d. Gaussian noise. The smoothed classifier g returns the most likely result according to the base classifier under x+ϵ∼𝒩(x, σ^2 I), i.e.
g(x) = argmax_{c ∈𝒴} ℙ_{δ∼𝒩(0, σ^2 I)}(f(x + δ) = c),
where the noise level σ is a hyperparameter trading off accuracy and robustness. In other words, g returns the class c whose decision region {x' ∈ℝ^d: f(x') = c} has the largest probability measure under the distribution 𝒩(x, σ^2 I). Cohen et al. <cit.> recently proved a tight robustness guarantee in the ℓ_2 norm for smoothing with Gaussian noise. Meanwhile, Lecuyer et al. <cit.> and Li et al. <cit.> also demonstrated that the smoothed classifier g classifies consistently within a certified radius around the input x under the ℓ_2 norm, based on Differential Privacy (DP) and Rényi divergence <cit.>, respectively <cit.>. In this paper, our theoretical analysis adopts the divergence-based randomized smoothing of Li et al. <cit.>. Details are provided in the THEORETICAL ANALYSIS section, Theorem 4.1 <cit.>.
Deep Ensemble for Randomized Smoothing
Ensemble methods, classical in statistical machine learning, enhance predictive performance by aggregating multiple models to reduce their inter-covariance. Introduced in 2016 <cit.>, Deep Ensembles were designed to reduce uncertainty in neural network predictions. This approach has been adapted for Randomized Smoothing (RS) <cit.>: it narrows the prediction distribution, effectively raising the lower bound on the top-class probability. In this context, ensemble methods with Randomized Smoothing employ multiple base classifiers trained with identical architectures but different random seeds. The ensemble is defined as:
f̂(x) = 1/m∑_l=1^m f_l(x),
where f_l(x) are the output logits of the classifiers, and each classifier f_l is trained independently using a unique seed. This configuration ensures diversity among the classifiers, which reduces variance and increases the lower confidence bound of the true majority class probability p_A. However, Deep Ensembles require extensive retraining, leading to resource inefficiency. To address this, we propose the following method.
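For concreteness, the aggregation in the equation above is a plain logit average; a minimal sketch follows (the list of trained models is an assumed input):

```python
import torch

def deep_ensemble_logits(models, x):
    # f_hat(x) = (1/m) * sum_l f_l(x): average the logits of m
    # independently trained classifiers.
    return torch.stack([f(x) for f in models], dim=0).mean(dim=0)
```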
§.§ Proposed Method
Random Mask Training We propose a training algorithm that incorporates randomized masking into a standard randomized smoothing procedure, as shown in Algorithm 1. Our intuition is to help the model adapt to different types of masks during training, ensuring that the prediction confidence differs little across masks (details of the mask implementation are given in Section 5.3, Algorithm Setup). This avoids the multiple training runs of a deep ensemble, greatly reducing computational overhead. In the inference stage, we leverage the diversity of these masks to obtain the self-ensemble, whose efficacy is competitive with, and in some cases better than, the deep ensemble.
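A minimal sketch of one training step in this spirit is shown below; it reflects our reading of Algorithm 1 rather than the released code, and the optimizer, the per-element mask granularity, and the keep probability are assumptions.

```python
import torch
import torch.nn.functional as F

def random_mask_train_step(model, optimizer, x, y, sigma, keep_p=0.9):
    # Standard randomized-smoothing training (add Gaussian noise), augmented
    # with a freshly drawn binomial mask so the model adapts to many masks.
    noise = sigma * torch.randn_like(x)
    mask = (torch.rand_like(x) < keep_p).float()  # 1 keeps a value, 0 zeroes it
    logits = model(mask * (x + noise))
    loss = F.cross_entropy(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```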
Self-ensemble Certification To certify the robustness of the smoothed classifier g around x, we design the process shown in Algorithm 2. In this framework, a set of fixed masks is randomly generated and kept consistent for all inputs. During the ensemble inference stage, a single sample undergoes m × n trials, where m and n are the total numbers of masks and noise draws, respectively. For each noised input x, predictions based on the m masks are aggregated to produce an output, and the resulting class increments its count by one. This process runs n times before the final prediction is determined by hard voting across the n noised inputs. We then calculate confidence intervals for the top-1 and top-2 classes A and B from the prediction counts using a multinomial distribution, taking the lower bound of p_A and the upper bound of p_B as conservative estimates. Based on the prediction, we obtain the lower bound L of the certified radius. This means that for all ‖x' - x‖ < L, the prediction remains consistent.
The reason we fix the masks is to ensure that the model returns identical results for the same input, a fundamental requirement of Theorem 4.1 <cit.>. Compared to the training process, masks play a different role in the certification stage. The same model with multiple masks has a similar effect to a deep ensemble, significantly reducing the variance of p_A and p_B and increasing the gap between them. This results in an increased certified radius, making the model more robust.
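The certification logic can be sketched as follows. This is a simplified illustration: ensemble_predict is assumed to aggregate the model's logits over the fixed masks and return the arg-max class, the class count and confidence level are placeholders, and for brevity the final radius uses the familiar Gaussian-smoothing formula σ/2·(Φ^{-1}(p_A) - Φ^{-1}(p_B)) instead of the Rényi-divergence bound of Theorem 4.1.

```python
import numpy as np
from scipy.stats import beta, norm

def certify(ensemble_predict, x, sigma, n=1000, alpha=0.001, k=10):
    # Hard-vote over n Gaussian draws; each draw is first classified by the
    # self-ensemble (aggregation over the fixed masks) via ensemble_predict.
    votes = np.zeros(k, dtype=int)
    for _ in range(n):
        votes[ensemble_predict(x + sigma * np.random.randn(*x.shape))] += 1
    order = np.argsort(votes)[::-1]
    nA, nB = votes[order[0]], votes[order[1]]
    # Conservative one-sided Clopper-Pearson bounds: lower on p_A, upper on p_B.
    pA = alpha ** (1.0 / n) if nA == n else beta.ppf(alpha, nA, n - nA + 1)
    pB = 1.0 - alpha ** (1.0 / n) if nB == 0 else beta.ppf(1 - alpha, nB + 1, n - nB)
    if pA <= pB:
        return None  # abstain: the vote is not decisive at this confidence
    radius = sigma / 2 * (norm.ppf(pA) - norm.ppf(pB))
    return order[0], radius
```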
§ THEORETICAL ANALYSIS
To prove our claimed findings, we provide a detailed theoretical analysis. Starting with Theorem 4.1 proved by Li et al. <cit.>, a tight lower bound for the certified radius of an arbitrary classifier g based on Rényi divergence is described as follows:
Theorem 4.1 (from Li et al. <cit.>). Suppose 𝐱∈𝒳 and a potential adversarial example 𝐱' ∈𝒳 satisfy ‖𝐱 - 𝐱'‖_2 ≤ L. Given a k-class classifier f : 𝒳 ⊆ ℝ^d → p∈ℝ^k, let f(𝒩(𝐱, σ^2 I)) ∼ (p_1, …, p_k) and f(𝒩(𝐱', σ^2 I)) ∼ (p'_1, …, p'_k), and define M_p(x_1,...,x_n) = (1/n∑_{i=1}^n x_i^p)^{1/p}. If the following condition is satisfied, with p_A and p_B being the first and second largest probabilities in {p_i}:
sup_{α > 1} ( -(2σ^2/α) log(1 - 2M_1(p_A, p_B) + 2M_{1-α}(p_A, p_B)) ) ≥ L^2,
then argmax_i p_i = argmax_j p'_j.
To increase the lower bound L, it is intuitive to increase the noise level σ; however, the model may then struggle with the noised input, leading to a drop in accuracy. Alternatively, increasing the gap between p_A and p_B also raises L. Here, we propose the self-ensemble method, which reduces the variance of the predictions p_A and p_B and enlarges their gap. The variance and expectation after self-ensembling can be described as follows:
Lemma 4.2
Let f be a pre-trained classifier with masks and noises over ℝ^d →ℝ^k. Let {M_i} be a set of m independent binomial masks applied to the input 𝐱, with probability p (close to 1) of each element being 1. Assume the input noise ϵ∼𝒩(0, σ^2 I). The m-ensemble average model output is given by:
y = 1/m∑_i=1^m f(M_i ⊙ (𝐱 + ϵ)).
The expectation and variance of the ensemble average output are:
𝔼[y] = 𝔼[f(𝐱)] = 𝐜,
where 𝐜∈ℝ^k is the expectation output of a fixed clean x over randomness in the training process.
Var(y) = (1/m)(1+ζ_c (m-1)) Σ_c + (1/m)(1+ζ_p (m-1)) σ^2 𝔼[J_f(M ⊙𝐱) J_f(M ⊙𝐱)^T],
where Σ_c is the covariance matrix representing the variability across different training processes, characterizing the randomness. J_f(M ⊙𝐱) is the Jacobian matrix of f. ζ_c is the correlation coefficient for the masked output in the clean part, and ζ_p is the correlation coefficient for the masked Jacobian in the perturbed part.
Proof 4.2 Let y_i = f(M_i ⊙ (𝐱 + ϵ)) be the output for each mask. Since M_i and ϵ are independent, and ϵ has mean 0, we have:
𝔼[y_i] = 𝔼[f(M_i ⊙ (𝐱 + ϵ))] = 𝔼[f(M_i ⊙𝐱)].
During the training process, we observed that the network achieves the same performance with and without the mask; in particular, when p is close to 1, the mask has minimal influence on the prediction. Thus:
𝔼[f(𝐱)] = 𝔼[f(M ⊙𝐱)] = 𝐜.
Averaging over multiple independent M_i:
𝔼[y] = 𝔼[1/m∑_i=1^m y_i] = 1/m∑_i=1^m 𝔼[y_i] = 𝔼[f(𝐱)] = 𝐜.
Now, we compute the variance of the ensemble average output. By a first-order Taylor expansion, y_i ≈ f(M_i ⊙𝐱) + J_f(M_i ⊙𝐱)(M_i ⊙ϵ).
For the variance of a single y_i:
Var(y_i) = Σ_c + σ^2 J_f(M_i ⊙𝐱) J_f(M_i ⊙𝐱)^T.
For the ensemble average output y:
Var(y) = Var(1/m∑_i=1^m y_i).
Assuming the correlation between y_i and y_j is parameterized by coefficients ζ_c and ζ_p, we have:
Var(y) = 1/m^2( ∑_i=1^m Var(y_i) + ∑_i ≠ jCov(y_i, y_j) ),
where Cov(y_i, y_j) = ζ_c Σ_c + ζ_p σ^2 J_f(M_i ⊙𝐱) J_f(M_i ⊙𝐱)^T. Therefore:
Var(y) = (1/m)(1+ζ_c (m-1)) Σ_c + (1/m)(1+ζ_p (m-1)) σ^2 𝔼[J_f(M ⊙𝐱) J_f(M ⊙𝐱)^T].
Thus, the Lemma is proved.
Variance Reduction When m = 1 and the mask is all ones, the variance degenerates to:
Var(y) = Σ_c + σ^2 J_f(𝐱) J_f(𝐱)^T.
Instead, after the mask ensemble, the two parts of the variance are reduced to (1/m)(1 + ζ_c (m-1)) and (1/m)(1 + ζ_p (m-1)) times the original, respectively. When the correlation coefficients are less than 1, these factors are always less than 1 and approach ζ_c and ζ_p, respectively, as m increases.
Additionally, the reduction in variance after masking also comes from the shrinkage of the Jacobian matrix. Due to the implementation of the mask, the Jacobian matrix should be:
𝔼[J_f(M ⊙𝐱) J_f(M ⊙𝐱)^T] = p^2 J_f(𝐱) J_f(𝐱)^T.
Here we assume that the Jacobian matrix is similar in both masked and unmasked training settings. Although in some cases they might not be exactly the same, the difference is relatively small compared to the factor p^2. Therefore, the variance after applying the mask ensemble is:
Var(y) ≈ (1/m)(1 + ζ_c (m-1)) Σ_c + (1/m)(1 + ζ_p (m-1)) σ^2 p^2 J_f(𝐱) J_f(𝐱)^T.
As the number of ensembles m approaches infinity, the variance can be further simplified to:
lim_m→∞Var(y) ≈ζ_c Σ_c + ζ_p p^2 σ^2 J_f(𝐱) J_f(𝐱)^T.
This result demonstrates that the variance of the ensemble model is reduced due to both the correlation coefficients and the shrinkage of the Jacobian matrix resulting from the masking process.
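The variance reduction can also be checked numerically. The toy below uses a linear "classifier" score so that only the Jacobian (perturbation) term of the lemma is exercised (the training-randomness term Σ_c is absent by construction); the dimension, trial counts, and parameter values are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, p, sigma, trials = 64, 5, 0.9, 0.4, 20000
W = rng.normal(size=d)                    # toy linear score c(x) = W @ x
x = rng.normal(size=d)
masks = [(rng.random(d) < p).astype(float) for _ in range(m)]  # fixed masks

def score(mask_set):
    eps = sigma * rng.normal(size=d)      # fresh smoothing noise per trial
    return np.mean([W @ (M * (x + eps)) for M in mask_set])

var_single = np.var([score([np.ones(d)]) for _ in range(trials)])
var_self = np.var([score(masks) for _ in range(trials)])
# Self-ensemble variance shrinks by roughly p^2 (plus a small mask-sampling term).
print(var_single, var_self)
```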
Theorem 4.3
Let f be a pre-trained classifier over x ∈ℝ^d → y ∈ℝ^k. Given an input x, let x' = x + ϵ where ϵ∼𝒩(0, σ^2 I). Let c_A be the score for class A and c_i be the score for any other class i ≠ A. The classification margins are defined as z_i = c_A - c_i, ∀ i ≠ A. Then, the probability P_A = P(z_i > 0 , ∀ i ≠ A) increases as the variance of y decreases.
Proof 4.3 According to the local linearity assumption, the output of the classifier is perturbed as f(x') ∼ f(x) + A·ϵ, where A is an auxiliary matrix; thus f(x') ∼𝒩(f(x), Aσ^2A^T). From the variance reduction of y in Lemma 4.2, the auxiliary matrix represents the reduction adjustment of the variance for each class, so A is a matrix with all elements < 1. Let c_A ∼𝒩(μ_A, σ_A^2) and c_i ∼𝒩(μ_i, σ_i^2) for i ≠ A. The classification margins z_i are:
z_i = c_A - c_i ∼𝒩(μ_z_i, σ_z_i^2).
Here, μ_z_i = μ_A - μ_i and σ_z_i^2 = σ_A^2 + σ_i^2. Since z_i ∼𝒩(μ_z_i, σ_z_i^2), the probability P(z_i > 0) is:
P(z_i > 0) = Φ( μ_z_i/σ_z_i),
where Φ is the CDF of the standard normal distribution.
As the variance σ_z_i^2 decreases, μ_z_i/σ_z_i increases because μ_z_i is fixed. Then, we have:
Φ( μ_z_i/σ_z_i,new) > Φ( μ_z_i/σ_z_i),
where σ_z_i,new < σ_z_i. Therefore, P(z_i > 0) increases as σ_z_i^2 decreases. Using the union bound ∀ i ≠ A, we have:
P_A = P(z_i > 0, ∀ i ≠ A) ≥∏_i ≠ A P(z_i > 0).
As each P(z_i > 0) increases with decreasing σ_z_i^2, the overall probability P_A increases; thus, as the variance decreases, the success probability P_A increases. Given a larger P_A, we can simply set P_B = 1 - P_A to obtain a conservative estimate, and the lower bound L in Theorem 4.1 (Li et al.) will then increase. As depicted in Figure <ref>, when the noise level σ is fixed, a higher p_A guarantees a larger radius.
We further validated our theoretical findings through experiments. Figure 3 illustrates a sample case from the ChlorineConcentration benchmark dataset, with a standard deviation (σ) of 0.4 and 1000 noise samples. The distribution of the classification margins z under the ensemble method places fewer values below 0, indicating a higher probability of being classified as class A (the top-1 prediction of the smoothed classifier g) and thus certifying a larger radius. Additionally, the right four distributions in Figure 4 depict the distribution of c_A(x), revealing a noticeable variance reduction under the self-ensemble method compared with the single model. Notably, the mean values (represented by the green line) of M_B and M_C closely align with the single method, supporting our assumption that the mask has minimal influence on the expected value.
§ EXPERIMENTS
§.§ Experimental Setup
The whole project was developed using PyTorch and conducted on a server equipped with Nvidia RTX 4090 GPUs, 64 GB RAM, and an AMD EPYC 7320 processor. Table <ref> gives the training time of each method over different datasets with the InceptionTime architecture (other models show a similar trend), supporting our claim that self-ensemble takes less time to train than a Deep Ensemble (an ensemble of 5 models). Code: https://github.com/Chang-George-Dong/Boosting-Certificate-Robustness-for-Time-Series-Classification-with-Efficient-Self-Ensemble.git
§.§ Datasets
To evaluate the proposed method and compare it with the baseline, we conduct our experiments on diverse univariate time series benchmark datasets <cit.>.
Table <ref> gives a detailed description of these datasets. For the case study, we selected the ChlorineConcentration and CricketX datasets for our demonstration, as ChlorineConcentration is the most susceptible to adversarial attacks among these datasets, while CricketX is more robust, thereby providing a clearer illustration of the efficacy of all methods.
§.§ Algorithm Setup
We implemented three different DNN architectures: InceptionTime (CNN) <cit.>, LSTM-FCN (RNN) <cit.>, and MACNN (Attention) <cit.>. All models were trained with noise levels σ of 0, 0.1, 0.2, 0.4, 0.8, and 1.6 for 1000 epochs to obtain smoothed versions. We used two kinds of masking: binomial masking and continuous masking. In binomial masking (M_B), each timestamp is kept with probability p and otherwise set to 0. In continuous masking (M_C), several contiguous mask segments are placed, with the total masked length fixed to (1-p) times the sequence length and the maximum segment length set to half of the total masked length. Unless specified otherwise, p was set to 0.9. During the certification phase, we found that 1000 noise draws were sufficient, and we set the confidence level β = 0.001. The certified noise levels σ used the same settings as the training phase. For adversarial attacks, we used the PGD-ℓ_2 attack of Madry <cit.> with the epsilons listed in the result tables.
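A sketch of the two mask generators is given below, under the stated convention (keep ratio p, masked budget (1-p) times the sequence length); the segment-placement details, such as allowing segments to overlap, are simplifying assumptions.

```python
import numpy as np

def binomial_mask(length, p=0.9, rng=None):
    # M_B: each timestamp is independently kept with probability p.
    rng = rng or np.random.default_rng()
    return (rng.random(length) < p).astype(np.float32)

def continuous_mask(length, p=0.9, rng=None):
    # M_C: contiguous zero segments whose lengths sum to roughly
    # (1 - p) * length, each no longer than half the total masked budget.
    rng = rng or np.random.default_rng()
    mask = np.ones(length, dtype=np.float32)
    budget = max(1, int(round((1 - p) * length)))
    max_seg = max(1, budget // 2)
    while budget > 0:
        seg = int(rng.integers(1, min(max_seg, budget) + 1))
        start = int(rng.integers(0, length - seg + 1))
        mask[start:start + seg] = 0.0   # segments may overlap (approximation)
        budget -= seg
    return mask
```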
§.§ Evaluation Metrics
We evaluate two key metrics for these models: (i) the certified accuracy at a predetermined radius r and (ii) the average certified radius (ACR). In the case study, we evaluate adversarial robustness using the Attack Success Rate, the proportion of samples misclassified under adversarial perturbation.
§ RESULT
§.§ Main Results
Comprehensive Performance Comparison In Table <ref>, we compare our self-ensemble method using the two kinds of masks, M_B and M_C, Deep Ensembles (DE), and the single model across three different types of networks, trained at the different noise levels σ. The results show a consistent trend: as the noise level σ increases, the average certified radius (ACR) increases while the accuracy decreases. In almost every case, ensemble models outperform their individual counterparts, implying that masking is effective across a wide range of datasets.
Notably, the self-ensemble methods using M_B and M_C both perform well on InceptionTime and MACNN networks, often competing with and even outperforming the DE method, while requiring only 1/5 of the training time (all ensemble methods in this table are 5-model ensembles). However, for the LSTM-FCN architecture, M_B and M_C perform worse than the single model. This is primarily because sequential models are highly sensitive to missing values; each input in a sequence model is critical, and a single missing value can easily skew the model's predictions. In contrast, convolutional models are more robust to missing values, especially when the receptive field is large. Convolutions can learn from neighboring values and effectively handle local missing patterns. Missing values in convolutions can be mitigated by the surrounding non-missing values, reducing the bias introduced by the missing data.
Therefore, it is safer to use masks in convolutional and attention-based networks.
Certified Accuracy over Radius
In comparison with the stationary performance shown in Table <ref>, Figure <ref> illustrates performance under increasing perturbations, which better reflects real-world scenarios with adversarial noise. We observe that the self-ensemble method is resilient over a wider range of perturbations (indicated by the radius in the figure) while maintaining accuracy, followed by the DE and single models. However, as the radius increases, the accuracy of all models deteriorates, as explained by Madry et al. in "Robustness May Be at Odds with Accuracy" <cit.>. Nevertheless, the degradation is slower for the ensemble methods than for the single model, highlighting the benefit of ensemble approaches in maintaining performance under adversarial conditions.
§.§ Ablation Study
Ensemble Size Figure <ref> shows the certified accuracy vs. radius for different ensemble sizes. Both methods demonstrate that increasing the ensemble size improves certified robustness. However, the improvement tends to saturate when the ensemble size reaches 10 for M_C and 5 for M_B, while requiring double and quadruple the inference time, respectively. Therefore, we choose an ensemble size of 5 as it provides a good balance between performance and computational efficiency.
Keep Ratio
Figure <ref> shows the influence of the keep ratio on the accuracy-radius curve. A high keep ratio p (close to 1) ensures better certified accuracy over the radius in both cases and is close to the performance of the non-mask method. This aligns with our theoretical assumptions: a high keep ratio is very close to no masking, so keeping the ratio high is practically important for the theory to apply. Additionally, models are more sensitive to a low keep ratio with M_B than with M_C, as M_C retains more contiguous patterns that help the convolutions activate correctly. This implies that a segment is sufficient for CNN recognition, and it may be possible to alleviate the curse of dimensionality <cit.> in Randomized Smoothing by distilling down the input size.
§.§ Case Study: Adversarial Robustness
To validate our method's performance on datasets where the single model lacks robustness, we conducted adversarial attacks using PGD-ℓ_2 against the benign models, the single smoothed classifier, and the various ensemble smoothed classifiers (σ = 0.4). Table <ref> shows the Attack Success Rate (ASR) versus the perturbation level ϵ under the PGD-ℓ_2 attack on both the ChlorineConcentration (CC) and CricketX datasets. The results clearly demonstrate that the ensemble approach is significantly more effective at mitigating attacks than the single model. Notably, on ChlorineConcentration, M_B achieves a remarkable reduction in attack success rate from 0.98 to nearly 0.24, whereas the single model remains vulnerable with a 55% ASR. As mentioned before, the single model does not demonstrate strong robustness on this dataset. In contrast, on the more robust CricketX dataset, all four methods perform similarly, with DE slightly more effective. These findings underscore the robustness of self-ensemble methods in defending against adversarial attacks, highlighting their superiority over individual models.
§ CONCLUSION
In this paper, we utilized self-ensemble to decrease the variance of classification margins, thereby boosting the lower bound of top-1 prediction confidence and certifying a larger radius to enhance model robustness. Theoretical proofs and experimental findings substantiate the effectiveness of our approach, leveraging ensemble techniques while mitigating computational overhead. Our case study illustrates superior robustness compared to baseline methods against adversarial cases.
Future work Despite its efficacy, we observed suboptimal performance for RNN architectures, which may be attributed to their sensitivity to missing values. To address this, we aim to train sequential models resistant to random missing values. Additionally, in the certification process, the use of random fixed masks may lead to unstable results; hence, we plan to focus on mask design to enhance the stability of the algorithm.
§ ACKNOWLEDGMENTS
This research work is supported by Australian Research Council Linkage Project (LP230200821),
Australian Research Council Discovery Projects (DP240103070 ),
Australian Research Council ARC Early Career Industry Fellowship (IE230100119),
Australian Research Council ARC Early Career Industry Fellowship (IE240100275), and University of Adelaide, Sustainability FAME Strategy Internal Grant 2023.
|
http://arxiv.org/abs/2409.03076v1 | 20240904210428 | Biermann-battery driven magnetized collisionless shock precursors in laser produced plasmas | [
"Timothy Johnson",
"Graeme Sutcliffe",
"Jacob Pearcy",
"Andrew Birkel",
"Gabriel Rigon",
"Neel Kabadi",
"Brandon Lahmann",
"Patrick Adrian",
"Benjamin Reichelt",
"Justin Kunimune",
"Skylar Dannhoff",
"Matt Cufari",
"Frank Tsung",
"Hui Chen",
"Joseph Katz",
"Vladimir Tikhonchuk",
"Chikang Li"
] | physics.plasm-ph | [
"physics.plasm-ph"
] |
Correspondence: [email protected]; [email protected].
Affiliations: Plasma Science and Fusion Center, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA; Department of Physics and Astronomy, University of California Los Angeles, Los Angeles, California 90095, USA; Lawrence Livermore National Laboratory, Livermore, California 94550, USA; Laboratory for Laser Energetics, University of Rochester, Rochester, New York 14623, USA; Centre Lasers Intenses et Applications, University of Bordeaux, CNRS, CEA, 33405 Talence, France; The Extreme Light Infrastructure ERIC, ELI Beamlines Facility, 252 41 Dolní Břežany, Czech Republic.
§ ABSTRACT
This letter reports the first complete observation of magnetized collisionless shock precursors
formed through the compression of Biermann-battery magnetic fields in laser produced plasmas.
At OMEGA, lasers produce a supersonic CH plasma flow which is magnetized with Biermann-battery
magnetic fields.
The plasma flow collides with an unmagnetized hydrogen gas jet plasma to create a magnetized shock precursor.
The situation where the flowing plasma carries the magnetic field is similar to the Venusian bow shock.
Imaging 2ω Thomson scattering confirms that the interaction is
collisionless and shows density and temperature jumps.
Proton radiographs have regions of strong deflections and FLASH
magnetohydrodynamic (MHD) simulations show the presence of Biermann fields
in the Thomson scattering region.
Electrons are accelerated to energies of up to 100 keV in a power-law spectrum.
OSIRIS particle-in-cell (PIC) simulations, initialized with measured parameters,
show the formation of a magnetized shock precursor and corroborate the
experimental observables.
Biermann-battery driven magnetized collisionless shock precursors in laser produced plasmas
C. K. Li
September 9, 2024
===========================================================================================
Collisionless shocks are very common in astrophysical systems.
Counter-streaming plasmas, ranging from Earth's magnetosphere<cit.>
to relativistic astrophysical jets<cit.>, often form shocks which dissipate energy.
Charged particles can be accelerated to high energies inside shocks<cit.>.
Magnetized shocks are one type of collisionless shock<cit.>.
They form when a dynamically significant magnetic field is present
in a system of counter-streaming plasmas and are very common in astrophysics<cit.>.
The majority of planetary bow shocks, an example of a magnetized collisionless shock,
form from the interaction between the weakly magnetized solar wind and a strongly magnetized
planetary ionosphere<cit.>.
Venus, however, has no intrinsic magnetic field, making the solar wind magnetic field responsible for its bow shock<cit.>.
This is one astrophysical situation where the flowing plasma carries the magnetic field
responsible for shock formation.
This letter reports the first complete observation of a Biermann-battery driven
magnetized collisionless shock precursor<cit.>.
There are no externally imposed magnetic fields.
Instead, Biermann-battery fields, generated during the laser drive,
are frozen into the plasma flow.
These fields are compressed in the collision between the plasma flow
and gas jet plasmas.
The magnetic field strength is enhanced, causing gas jet ions to be deflected
and a magnetized shock precursor to be formed.
Since the flowing plasma carries the magnetic field,
the presented experiment is similar to the interaction between the solar wind and
the Venusian ionosphere<cit.>.
The origin of nightside aurorae on Venus is currently unknown<cit.>.
Charged particles accelerated by the bow shock could be responsible.
While magnetized shocks relevant to planetary bow shocks have been studied in the
laboratory, all experiments have focused on the case where the stationary plasma
contains the magnetic field<cit.>.
The experiment presented here demonstrates a platform to study Venus's particular
configuration where the flowing plasma carries the magnetic field.
Such configurations have been studied previously, but the results lacked
direct evidence of the magnetic field<cit.>.
Other experiments with a plasma flow colliding with a gas bag produced an
electromagnetic shock structure, but the gas bag shell
played a significant role in the overall physics of the experiment<cit.>.
Additionally, the results of this experiment show that Biermann-battery generated
magnetic fields can be strong enough to dominate the physics of laser-produced
high-energy-density plasmas.
This conclusion differs from studies of electromagnetic shocks
with planar foils which found that Biermann-battery magnetic fields were
not dynamically important to the overall interaction<cit.>.
A schematic of the OMEGA experiment geometry is shown in Fig. <ref>.
The gas jet produces a volume of hydrogen gas.
Seven 351 nm laser beams each deliver 500 J of energy to a CH hemisphere
in a 1 ns square pulse to produce a plasma flow.
The gas jet volume is ionized prior to the arrival of the plasma flow.
The interaction between the gas jet plasma and the plasma flow
is diagnosed with three diagnostics.
Imaging 2ω Thomson scattering measures the density,
temperature, and velocity profiles at different times.
Proton radiography, using a D^3He backlighter, records particle deflections from
electromagnetic fields.
Electron spectroscopy measures the acceleration of electrons.
Imaging 2ω Thomson scattering collects spatially resolved
electron plasma wave (EPW) and ion acoustic wave (IAW) data
at different times<cit.>.
The Thomson probe beam points directly down
the plasma flow axis and focuses to a region 2 mm from target chamber center (TCC).
The spatial field of view is about 1.5 mm long in the direction of the probe beam
with the Thomson scattering k vector 59.885 degrees off axis.
Fig. <ref> shows results from EPW and IAW measurements.
All times reference the start of the laser drive on the hemispherical target.
Fig. <ref> B) and C) show enhancements of the electron temperature and density
due to the interaction between the gas jet and the plasma flow.
Comparing the location of the density jump across different times shows that the feature
has a velocity of 1000 ± 200 km/s.
The width of the density peak at 4.5 ns is ∼300 μm
(compared to ion skin-depth, ∼100 μm, and Larmor radius, ∼500 μm).
Fig. <ref> D) shows the raw IAW spectrum which
contains two plasma species.
The spectrum centered around the probe wavelength is the gas jet
plasma since it has no appreciable flow velocity.
Its narrow peaks indicate that ZT_e/T_i is large.
Blue shifted from the probe is the plasma flow spectrum.
The flow velocity is close to the flow velocity of the plasma flow without the gas jet plasma,
but with a localized velocity dip seen in Fig. <ref> E).
Flow velocity measurements of the plasma flow confirm low ion-ion collisionality.
Fig. <ref> E) shows the flow velocity profiles with and without the gas jet.
For the plasma flow only, the velocity exceeds 1500 km/s.
The velocity increases farther away from the CH hemisphere target<cit.>.
With flow velocity and density measurements, the interspecies ion-ion collision mean-free-path can be calculated:
λ_mfp = ( (4πϵ_0)^2 m_1 m_2 v_1^4 ) / ( 8π n_2 Z_1^2 Z_2^2 e^4 logΛ ),
where indices 1 (2) refer to the plasma flow (gas jet), n is the ion density, Z is the charge state, m is the ion mass, and logΛ is the Coulomb logarithm<cit.>.
For the experiment, the plasma flow carbon ion mean-free-path is about 7 cm
which is much larger than the system size and the density peak width.
Therefore, the interaction between the plasma flow ions and the gas jet ions is collisionless.
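As a sanity check, the quoted formula can be evaluated directly. The script below is illustrative only: the gas jet ion density n_2 and the Coulomb logarithm are assumed round numbers, not measured values from this experiment, chosen so the result lands on the quoted centimetre scale.

```python
import math

eps0 = 8.854e-12      # vacuum permittivity, F/m
e = 1.602e-19         # elementary charge, C
amu = 1.6605e-27      # atomic mass unit, kg

# Plasma-flow carbon ions (1) streaming through gas-jet hydrogen ions (2).
m1, Z1 = 12 * amu, 6  # fully ionized carbon (assumption)
m2, Z2 = 1 * amu, 1   # hydrogen
v1 = 1.5e6            # m/s, measured flow velocity scale
n2 = 5e24             # m^-3, assumed gas-jet ion density
lnL = 10.0            # assumed Coulomb logarithm

lam = ((4 * math.pi * eps0) ** 2 * m1 * m2 * v1 ** 4) / (
    8 * math.pi * n2 * Z1 ** 2 * Z2 ** 2 * e ** 4 * lnL)
print(f"lambda_mfp ~ {lam * 100:.1f} cm")  # cm scale, far above the peak width
```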
Fig. <ref> F) shows time resolved measurements of ZT_e and T_i for
the gas jet plasma in front of the density jump.
The red box in Fig. <ref> D) shows the region where ZT_e and T_i are
measured.
If Z is assumed to be one (gas is hydrogen),
the ZT_e values agree with the EPW measured T_e values
in front of the temperature peak.
This heating is caused by electron-ion collisional
friction<cit.>.
Such heating is spatially uniform, which does not explain the observed localized temperature jump.
The density jumps by 2.53±0.15 times and the temperature jumps by
1.94±0.12 times (at 4.5 ns, the gas jet has been heated to about 350 eV).
These jumps are measured with respect to the gas jet.
The measured jumps do not match the Rankine-Hugoniot conditions for the
sonic Mach number of ∼4.
This is due to the interaction being only a shock precursor<cit.>.
Not enough time has elapsed for the shock to be fully formed.
At the probed time, the shock is still developing as seen in Fig. <ref> C)
where the density peak is increasing with time.
Proton radiography images the electromagnetic fields from the plasma flow gas jet interaction
using 3 MeV and 15 MeV protons<cit.>.
Fig. <ref> A) shows a resulting 3 MeV radiograph.
The radiograph has a region of strong deflection
co-located with the Thomson scattering region.
There are no filamentary structures at the probed time meaning that the deflections
cannot be from the Weibel instability or other plasma
instabilities<cit.>.
The only source of fields are Biermann-battery fields from the laser drive.
Deflections in the radiographs come from magnetic fields since electric fields
are ruled out.
A set of 3D Cartesian FLASH ideal MHD simulations with the Biermann-battery term
model the plasma flow Biermann fields before
the collision with the gas jet<cit.>.
These simulations have the same laser conditions and geometry as in the experiment
with the target shifted a realistic 50 μm in the x-direction.
The simulation results in Fig. <ref> B) show the magnetic field topology.
Biermann fields are present along the plasma flow axis and therefore are present
in the Thomson scattering volume, located right outside the simulation domain.
Electron spectroscopy measurements, using the Electron Positron Proton Spectrometer (EPPS)
diagnostic<cit.>,
show the acceleration of electrons into a high energy power-law spectrum.
To find the net acceleration spectrum, shots with and without the gas jet are compared.
The results of this analysis are shown in Fig. <ref>.
A Maxwellian with the maximum measured electron temperature
is fit to the low energy part of the spectrum to emphasize the high energy tail.
A power-law is fit to the high energy non-thermal part of the spectrum yielding a
spectral index of -3.6 and giving clear evidence of electron acceleration.
The quality of the fit is confirmed through a simple r^2 analysis.
Stimulated Raman scattering from the laser passing through the gas jet is ruled out
as a source of fast electrons.
Particle-in-cell (PIC) simulations are performed to study the kinetic aspects of the interaction.
OSIRIS 1D3V PIC simulations show the formation of a magnetized shock precursor for
experimentally relevant conditions<cit.>.
The PIC simulations span 6000 μm of space, have a spatial resolution of 0.034 μm, and have
1000 particles per cell and realistic mass ratios.
Fig. <ref> A) shows the initial conditions of the simulation with a
uniform density profile on the left serving as the gas jet plasma and a self-similar
density profile on the right serving as the plasma flow.
A region of the plasma flow has a uniform magnetic field of 75 kG with an associated
induction electric field to model the Biermann-battery fields.
The left (gas jet) plasma is stationary while the right (plasma flow) plasma flows
into it with a velocity profile similar to Fig. <ref> E).
The simulation captures essential features of magnetized shock precursor formation.
Since the ions are collisionless, the plasma flow ions pass through the interface
resulting in an increase in the total density.
The magnetic field in the plasma flow reflects the gas jet electrons meaning that
the plasma flow electrons alone neutralize the ion charge, causing an increase
in the plasma flow electron density.
Since the magnetic field is frozen into the plasma flow (magnetic Reynolds number ∼ 380), the increase in the
plasma flow electron density also increases the magnetic field strength.
Magnetic flux conservation requires that the magnetic field peak propagates forward
slower than the initial plasma flow velocity.
Simply put, the system forms a magnetized piston immediately after the collision<cit.>.
Fig. <ref> B) shows the profiles after the simulation has evolved.
The magnetic field is strong enough to start deflecting the gas jet ions (gyro-radius ∼500 μm),
seen in Fig. <ref> C), increasing the ion density and moving the density
peak away from the interface at a velocity of 950 km/s.
Since the plasma flow ions have a larger mass-to-charge ratio than the gas jet ions, they are stiffer against deflection, causing the plasma flow density to be unaffected by the magnetic field.
The electric field is enhanced less than the magnetic field,
resulting in a net Lorentz force on the plasma flow ions in the magnetic field region.
As the plasma flow ions spend time in this region, their velocity in
the x-direction is roughly unchanged, but they accrue a non-negligible
velocity in the y-direction.
The Thomson IAW Doppler shift is sensitive to k·v.
In the experimental geometry, the angle between the probe k
vector and the plasma flow axis is about 60 degrees.
Therefore, the Doppler shift is sensitive to the velocity in the y-direction:
k·v = (k_i - k_scos(θ_s))v_x - k_ssin(θ_s)v_y,
where k_i and k_s are the magnitudes
of the probe and collected light wavevectors (assumed to be equal)
and θ_s is the Thomson scattering angle.
Fig. <ref> D) shows the Doppler shift with a localized dip due to the deflection of the plasma flow ions by the magnetic field, which is consistent with the Thomson IAW velocity measurements shown in Fig. <ref> E).
Note that the brightness of the Thomson IAW features increases with Z and density<cit.>.
While the gas jet ions are deflected, their Doppler shifted spectrum is outshone by the plasma flow ions.
The PIC simulations results match the experimental measurements well.
The velocities of the magnetic field and the density jump are measured in the simulation to be 870 km/s
and 950 km/s respectively, which is comparable to the experimentally measured
velocity of 1000±200 km/s.
The simulated density jump feature has a similar shape and peak value compared to the EPW measurements.
Deflections of the plasma flow ions produce similar Doppler shifts
as the Thomson IAW data.
The formation process of the shock precursor is as follows.
Lasers illuminate the CH hemispherical target and generate strong ∼MG-scale
Biermann-battery magnetic fields.
These fields are frozen into the plasma flow since the magnetic Reynolds number
is large<cit.>.
The plasma flow expands as it travels, reducing the field strength.
A magnetic piston forms when the plasma flow and gas jet interpenetrate,
enhancing the magnetic field strength.
Gas jet ions see the magnetic field and get deflected, causing the density jump to move forward.
The presence of the reflected upstream ions satisfies the
criteria for a magnetized shock precursor<cit.>.
The resulting magnetized precursor has an Alfvén Mach number of M_A∼14, making it supercritical<cit.>.
The observed electron acceleration is not seen in the PIC simulations,
likely due to limitations of the 1D simulation.
Given the enhanced magnetic field strength of ∼200 kG,
different shock acceleration mechanisms can be considered.
The electron gyro-radius is too small for diffusive shock acceleration<cit.>
and the Alfvén Mach number is too small for shock surfing acceleration<cit.>.
The last mechanism left is shock drift acceleration (SDA) where
electrons traveling along the magnetic field ramp are accelerated
by the induction electric field<cit.>.
The observed acceleration is therefore plausibly attributed to SDA.
The observed shock precursor differs from previous experiments
since it moves slower than the initial flow velocity<cit.>.
Viewed from the center-of-mass frame, the shock precursor
is moving backwards towards the CH target.
If fully formed, the shock would be the reverse shock.
The counter-streaming of the plasma flow upstream would, if given enough time and energy, form a forward moving
electromagnetic shock via the beam-Weibel instability<cit.>.
In summary, this letter details and explains the first complete observation of
magnetized collisionless shock precursors in laser-driven plasmas without
externally imposed magnetic fields.
The experiment offers a laboratory example of a situation similar to the formation
of Venus's bow shock.
The observed electron acceleration could be relevant to the unknown origin of the nightside aurorae on Venus.
This work was supported, in part, by the U.S. Department of Energy NNSA MIT Center-of-Excellence under Contract No.
DE-NA0003868, by the National Laser Users Facility under Contract No. DE-NA0003938, and by the NNSA HEDLP program
under Contract No. DE-NA0004129.
Some of the simulations presented in this paper were performed on the MIT-PSFC partition of the Engaging cluster at the MGHPCC facility
(www.mghpcc.org) which was funded by DoE grant number DE-FG02-91-ER54109.
The software used in this work was developed in part by the DOE NNSA and DOE Office of Science-supported Flash Center for
Computational Science at the University of Chicago and the University of Rochester.
The authors acknowledge the OSIRIS Consortium, consisting of UCLA and IST (Portugal) for the use of the OSIRIS 4.0 framework.
This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy
Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231
using NERSC award m1157.
The authors would like to thank the OMEGA operations team for supporting this experiment
as well as R. Frankle and E. Doeg for processing the CR-39.
The authors also thank W. Fox, D. Schaeffer, and A. Milder for helpful discussions.
|
http://arxiv.org/abs/2409.02220v1 | 20240903184442 | Equivariant Poincaré duality for cyclic groups of prime order and the Nielsen realisation problem | [
"Kaif Hilman",
"Dominik Kirstein",
"Christian Kremer"
] | math.AT | [
"math.AT",
"math.GT"
] |
Equivariant Poincaré duality for cyclic groups of prime order and the Nielsen realisation problem
Kaif [email protected] Dominik [email protected] Christian [email protected]
September 9, 2024
======================================================================================================================
§ ABSTRACT
In this companion article to <cit.>, we apply the theory of equivariant Poincaré duality developed there, specialised to cyclic groups C_p of prime order, to remove, in a special case, a technical condition given by Davis–Lück <cit.> in their work on the Nielsen realisation problem for aspherical manifolds. Along the way, we will also give a complete characterisation of C_p–Poincaré spaces as well as introduce a genuine equivariant refinement of the classical notion of virtual Poincaré duality groups which might be of independent interest.
[Title figure: closeup of a p3-symmetric tiling in the Alhambra. Image by Dmharvey, see https://commons.wikimedia.org/wiki/File:Alhambra-p3-closeup.jpg, license CC BY-SA 3.0.]
§ INTRODUCTION
A famous question due to Jakob Nielsen <cit.> in geometric topology is the following: can any finite subgroup G ⊂π_0 hAut(Σ_g) be lifted to an actual continuous group action on Σ_g, for Σ_g a closed oriented surface of genus g ≥ 0? This turns out to be possible, with Nielsen settling the case of G finite cyclic, and Kerckhoff <cit.> the general case.
In high dimensions, asking for too direct a generalisation of Nielsen's question inevitably results in wrong statements, as there is simply no reason why a general homotopy equivalence h: M → M should be homotopic to any homeomorphism. However, rigidity phenomena in the theory of closed aspherical manifolds, that is, closed connected manifolds with contractible universal covers, give some hope of generalising Nielsen's and Kerckhoff's results in this direction. This question may thus fairly be called the “generalised Nielsen realisation problem for aspherical manifolds” and has been investigated quite intensively, see for example <cit.>.[For completeness, we mention here that there is also a large body of work on the Nielsen realisation problem for not necessarily aspherical 4–manifolds, c.f. for instance <cit.>, which has a much more geometric flavour.]
Unfortunately, even the hypothesis of closed aspherical manifolds is not quite adequate, and we refer to <cit.> for a delightful survey of counterexamples.
Nevertheless, it turns out to be quite easy to dodge all potential reasons for counterexamples (for example, the failure for the existence of necessary group extensions due to Raymond–Scott <cit.>) by asking a slight variation of the generalised Nielsen problem:
Let M be an aspherical manifold with fundamental group π and consider an extension of groups
1 →π→Γ→ G → 1
where G is finite of odd order[Taking G to be of odd order implies that certain UNil-valued obstructions vanish, see <cit.> or <cit.>.].
Does the π-action on the universal cover M̃ of M extend to a Γ-action such that
M̃^H ≃ ∗ if H ≤ Γ is finite, and M̃^H ≃ ∅ if H ≤ Γ is infinite?
Equivalently, does the π-action on the universal cover of M extend to a Γ-action in a way such that the resulting Γ-space models E̲Γ, the universal space for proper Γ-actions?
Provided the answer to <ref> is yes, one may construct a G-action on M by using the residual action on π\M̃. For an account of the relation of <ref> to the generalised Nielsen realisation problem in terms of homomorphisms G →π_0(hAut(M)) ≅Out(π_1(M)), we refer the reader to the introduction of <cit.>.
In this article, we give a positive answer to <ref> in the very special situation when M is high-dimensional, π is hyperbolic, G=C_p for p odd, and if the extension is what we call pseudofree, i.e. if each nontrivial finite subgroup F ⊂Γ satisfies N_Γ F = F.
Geometrically, this predicts that any Γ–manifold model M must have discrete fixed points (see <ref>), whence the name. One of our main results is the following:
Consider a group extension
1 →π→Γ→ C_p → 1
for an odd prime p. Suppose that
Suppose that
(1) π = π_1(M) for a closed orientable[Orientability is assumed only to simplify the exposition, and can be removed with some care.] aspherical manifold M of dimension at least 5,
(2) π is hyperbolic,
(3) Γ is pseudofree.
Then there exists a cocompact Γ-manifold model for E̲Γ.
To the best of our knowledge, the most general existence result for manifold models for E̲Γ that does not refer to specific differential geometric constructions is due to Davis–Lück <cit.>, whose methods are mainly surgery– and K–theoretic.
They prove <ref> under an additional necessary group homological “Condition (H)” (c.f. <ref>) on Γ which was previously considered mysterious.
This Condition (H) was discovered by Lück in <cit.> as necessary for the existence of manifold models for E̲Γ, but was also used to construct certain models for E̲Γ which satisfy some kind of equivariant Poincaré duality.
Davis-Lück then show in which situations these can actually be turned into equivariant manifolds.
Condition (H), however, seems complicated and hard to verify.
Our main contribution to this problem is to show that, in the situation of <ref>, Condition (H) is actually automatic, and we achieve this by locating it in the more conceptual context of equivariant Poincaré duality as developed in <cit.>. We hope that these techniques will allow us to go beyond the pseudofree situation where Davis-Lück applies, so that in the future we might be able to construct group actions on aspherical manifolds with nondiscrete fixed point sets. As will be clear later, our main input to remove Davis-Lück's Condition (H), <ref>, does not refer to discrete fixed points at all.
To round off our commentary on <ref>, it should also be noted that, in principle, the theorem reduces the geometric problem to a purely algebraic one of producing the appropriate group extension under the given hypotheses on p and π.
As explained e.g. in <cit.>, there is an obstruction measuring when a homomorphism C_p → Out(π) is induced by an extension <ref>.
It vanishes for hyperbolic groups as they have trivial center.
For the Nielsen realisation problem, <ref> thus has the following implication.
Let M be a closed orientable aspherical manifold with hyperbolic fundamental group of dimension at least 5, p an odd prime, and α: C_p → Out(π_1M) a homomorphism. Then the Nielsen realisation problem for α admits a solution, provided the associated extension Γ is pseudofree.
Before moving on to elaborate on equivariant Poincaré duality as used in this work, we first state the aforementioned Condition (H) and recall the argument of <cit.> as to why it is necessary for the conclusion of <ref> to hold. This shows that Condition (H) is not merely an artefact of the proof strategy of <cit.> but is rather a point that must be dealt with in one way or another.
For a pseudofree extension <ref>, a result of Lück–Weiermann <cit.> (c.f. <ref>) shows that the subspace E̲Γ^>1 of points in E̲Γ with nontrivial isotropy is discrete; more precisely, E̲Γ^>1 ≃ ∐_F ∈ℳΓ/F, where F runs through a set of representatives of conjugacy classes of nontrivial finite subgroups. Writing H^Γ_*(X) ≔ H_*(X_hΓ;ℤ) for the integral Borel homology of a space X with Γ–action, the condition may be stated as:
[Condition (H)]
For each finite subgroup F≠ 1 of Γ, the composite
H_d^Γ(E̲Γ, E̲Γ^>1) ⟶ H^Γ_d-1(E̲Γ^>1) ≃ ⊕_F' ∈ℳ H_d-1(BF') ⟶ H_d-1(BF)
is surjective.
To see why Condition (H) is necessary for the existence problem, suppose that there exists a d-dimensional cocompact manifold model N for E̲Γ.
Let us assume for simplicity that N is smooth and that Γ acts smoothly preserving the orientation.
As mentioned before, the singular part N^>1 of N of points with nontrivial isotropy is discrete if the extension is pseudofree.
Denote by Q the complement of an equivariant tubular neighbourhood of N^>1 in N with boundary ∂ Q.
Then the Γ-action on Q is free and the quotient pair (Γ\ Q, Γ\∂ Q) is a compact d-manifold with boundary. See <ref> for an illustration.
Thus, for every path component L of Γ\∂ Q, we obtain the commutative diagram
H_d(Γ\ Q,Γ\∂ Q) [r] [d,"≃"]
H_d-1(Γ\∂ Q) [d] [r, "proj",two heads]
H_d-1(L) [d, two heads]
H_d^Γ(Γ, Γ^>1) [r]
H^Γ_d-1(Γ^>1) [r, "proj"]
H_d-1(BF).
The left vertical arrow is an isomorphism by excision, together with the fact that the homology of (Γ\Q, Γ\∂Q) agrees with the Borel homology of (Q, ∂Q), the Γ-action being free.
The fundamental class of (Γ\Q, Γ\∂Q) gets sent to a fundamental class of each boundary component along the upper composite, and so the top composite is surjective. Moreover, note that each component L of Γ\∂Q is obtained as the quotient of a sphere by a free action of the isotropy group F of the corresponding fixed point in N^>1.
Recall that for any free F-action on a (d-1)–sphere S for a finite group F, the map F\S → BF is (d-1)–connected and induces a surjection on homology up to degree d-1, so the right vertical map is surjective.
Together this shows that the bottom composite is surjective in each component.
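For concreteness, a standard instance of the surjectivity just used (our illustration, not part of the original argument): for the free linear C_p-action on S^2k-1 ⊂ ℂ^k given by multiplication with a primitive p-th root of unity, the quotient is the lens space L^2k-1(p), and the map L^2k-1(p) → BC_p induces
H_i(L^2k-1(p)) ≅ H_i(BC_p) for i ≤ 2k-2, together with the surjection H_2k-1(L^2k-1(p)) ≅ ℤ ↠ ℤ/p ≅ H_2k-1(BC_p),
so the map is indeed surjective on homology up to degree d-1 = 2k-1.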
§.§ Equivariant Poincaré duality
Equivariant Poincaré duality is fundamentally about understanding group actions on manifolds. The notion of a G-equivariant Poincaré complex is designed to satisfy more or less all the homological or cohomological constraints that a smooth G-manifold satisfies. In particular, satisfying equivariant Poincaré duality can obstruct the existence of certain group actions on manifolds. This philosophy is old and has been quite successful, and we exploited it in <cit.> to generalise some classical nonexistence results[See also the references therein for more results of this type.] using new methods.
In this article, we want to carry across a different point: equivariant Poincaré duality does not only obstruct, but is also quite useful to construct group actions on manifolds. The testing ground we chose to demonstrate our claim is <ref>. Here, we will only employ the theory of equivariant Poincaré duality for the group G=C_p, and since this case is much simpler than for general compact Lie groups, we hope that it will demystify the more abstract discussions in <cit.>. In particular, we can keep the level of equivariant stable homotopy theory used throughout at a minimum, while still showing some standard manipulations. We hope that readers with an interest in geometric topology and homotopy theory might find this to be a useful first exposition to categories of genuine G-spectra and their uses.
§.§.§ Equivariant Poincaré duality for the group C_p
Our first theoretical goal is to give a characterisation of C_p-Poincaré duality that is adapted to our application on the Nielsen realisation problem.
In <cit.> we showed that if X is a C_p-space, then the following hold:
(1) If X is C_p-Poincaré, then the underlying space X^e and the fixed points X^C_p are nonequivariantly Poincaré (c.f. <cit.>).
(2) Even if X is assumed to be a compact (i.e. equivariantly finitely dominated) C_p-space, requiring X^e and X^C_p to be nonequivariantly Poincaré is not sufficient to guarantee that X is C_p-Poincaré (c.f. <cit.>).
In this article, we identify the precise additional condition needed to ensure that X is C_p–Poincaré in the situation of (2).
[c.f. <ref>]
Let X be a compact C_p-space.
Denote by ε: X^C_p → X^e the inclusion of the fixed points, and assume that both X^C_p and X^e are nonequivariantly Poincaré. Let D_X^C_p ∈ Fun(X^C_p, Sp^BC_p) be the dualising sheaf of the fixed point space. Then X is C_p-Poincaré if and only if the cofibre of the adjunction unit morphism in Fun(X^C_p, Sp^BC_p)
D_X^C_p → ε^* ε_! D_X^C_p
pointwise lies in Sp^BC_p_ind ⊆ Sp^BC_p, the stable full subcategory generated by the image of ind^C_p_e: Sp → Sp^BC_p.
More details on the terms appearing in the theorem may be found in the material leading up to <ref>. In effect, this result gives an interpretation of the genuine equivariant notion of Poincaré duality purely in terms of nonequivariant and Borel equivariant properties. The method of proof is based on various cellular manoeuvres in equivariant stable homotopy theory developed in <ref>, which might be of independent interest. Armed with this characterisation, we may now return to the modified Nielsen <ref> as we explain next.
§.§.§ Genuine virtual Poincaré duality groups and the proof of <ref>
Suppose we are given an extension of groups
1 →π→Γ→ C_p → 1
so that there exists a cocompact Γ-manifold N modelling the Γ-homotopy type E̲Γ. Under reasonable assumptions, we might expect that the C_p-space π\N is C_p-Poincaré. Motivated by this expectation, we will call Γ a genuine virtual Poincaré duality group if π\E̲Γ is a C_p-Poincaré space. In fact, we define the notion of a genuine virtual Poincaré duality group in a broader context in <ref> and we hope that it can be a useful supplement to the classical theory of virtual Poincaré duality groups that appear for example in <cit.>. In any case, using <ref>, we will show our main result:
[c.f. <ref>]
For the extension <ref>, assume that E̲Γ is compact. If
(1) for each nontrivial finite subgroup F ⊂Γ, the group W_Γ F is a Poincaré duality group;
(2) the group π is a Poincaré duality group,
then Γ is a genuine virtual Poincaré duality group.
The significance of this result is that we relate the new notion of genuine virtual Poincaré duality groups, which enjoys good conceptual properties, with the classical notion of Poincaré duality groups, which is easier to check. It will turn out that, in the situation of <ref>, the group Γ satisfies the conditions of <ref>, and so we see that π\E̲Γ is C_p-Poincaré. In particular, this opens up the way for an equivariant fundamental class analysis (c.f. <cit.>) on the problem at hand, yielding the following:
Let Γ be as in <ref>.
Then Γ satisfies Condition (H).
A proof of this, and the more general <ref>, will be given in <ref>. Taking this for granted for the moment, we may now provide the proof of <ref>.
This is now a direct consequence of <cit.>.
The group theoretic assumptions therein are satisfied by <ref>, whereas Condition (H) is shown to hold in <ref>.
§.§ Structural overview
We recall in <ref> some notions and constructions from the theory of equivariant Poincaré duality <cit.> that will be pertinent to our purposes. Next, in <ref>, we work towards proving <ref>, and for this, it will be necessary to develop some theory on compact objects in C_p–genuine spectra. This we do in <ref>, which might be of independent interest. Having set up the requisite basic theory, we return to the problem at hand and define the notion of genuine virtual Poincaré duality groups in <ref>, refining the classical notion of virtual Poincaré duality groups. Therein, we will also prove a characterisation result tailored to our needs. Finally, we put together all the elements and prove <ref> in <ref>.
§.§ Conventions
This paper is written in the language of ∞–categories as set down in <cit.>, and so by a category we will always mean an ∞–category unless stated otherwise.
§.§ Acknowledgements
We are grateful to Wolfgang
Lück and Shmuel Weinberger for numerous helpful conversations and encouragements on
this project. All three authors are supported by the Max Planck Institute for Mathematics in
Bonn. The second and third authors write this article as part of their PhD-thesis. The third author would like to thank the University of Toronto for its hospitality where parts of this article were written.
§ RECOLLECTIONS
There will be two types of equivariance in this paper, each playing a distinct role. The first kind will be defined for an arbitrary Lie group, which is covered in <ref>; the second kind, covered in <ref>, will be defined only for finite groups (in fact, it is defined more generally for compact Lie groups as in <cit.>) and is the one that supports stable homotopy theory and the theory of equivariant Poincaré duality in <ref>. More details on the materials in <ref> together with references to the original sources may be found in <cit.>.
§.§ Equivariant spaces
Throughout, let Γ be a Lie group.
Let Orb(Γ) be the topological category of homogeneous Γ–spaces, the full topological subcategory of the category of topological Γ-spaces
on objects isomorphic to Γ/H for closed subgroups H ≤Γ.
The category 𝒮_Γ of Γ–spaces is defined as the presheaf category PSh(Orb(Γ)) ≔ Fun(Orb(Γ)^op, 𝒮).
[Fundamental adjunctions]
Genuine equivariant spaces participate in many adjunctions, the fundamental one that we will need being the following: let α K→Γ be a continuous homomorphism of Lie groups.
By left Kan extension and restriction along the (opposite) induction functor ind^Orb_α: Orb(K) → Orb(Γ), we obtain the adjunction
ind_α ≔ (ind^Orb_α)_! : 𝒮_K ⇄ 𝒮_Γ : (ind^Orb_α)^* ≕ res_α.
Specialising to the two cases of α = ι: H ↣ Γ a closed subgroup and α = θ: Γ ↠ Q a continuous surjection of Lie groups with kernel N, the adjunction above yields the following two adjunctions, for which we use special notations:
ind^Γ_H ≔ ind_ι : 𝒮_H ⇄ 𝒮_Γ : res_ι ≕ res^Γ_H,
N\(−) ≔ ind_θ : 𝒮_Γ ⇄ 𝒮_Q : res_θ ≕ infl^Q_Γ.
Importantly, in the special case of a continuous surjection θ: Γ ↠ Q, the functor on orbit categories inducing this adjunction is itself a left adjoint (its right adjoint sends Q/H to Γ/θ^-1(H)), and so infl^Q_Γ is simultaneously a restriction and a left Kan extension; in particular it preserves colimits.
In particular, suppose we have homomorphisms of Lie groups ι: N ↣ Γ and θ: Γ ↠ Q which are injective and surjective, respectively, and such that the composite θ∘ι: N → Q is also surjective. Writing π for the kernel of θ, we thus see that ker(θ∘ι) = N ∩ π. Since for composable homomorphisms of Lie groups α and β we have ind_α∘β ≃ ind_α ∘ ind_β, we see that in this case, there is a natural equivalence of functors 𝒮_N → 𝒮_Q
π\ind^Γ_N(−) ≃ (N∩π)\(−).
[Singular parts]
Denote by Orb_s(Γ) ⊆ Orb(Γ) the full subcategory on the orbits Γ/H with H nontrivial. We then get the adjunction
s_! : PSh(Orb_s(Γ)) ⇄ PSh(Orb(Γ)) = 𝒮_Γ : s^*
by restricting and left Kan extending along s. We abbreviate X^>1 ≔ s_!s^*X, writing ε: X^>1 → X for the adjunction counit. Note that for an orbit Γ/H ∈ 𝒮_Γ we have (Γ/H)^>1 ≃ ∅ if H = e, and ε: (Γ/H)^>1 → Γ/H is an equivalence otherwise.
We refer to X^>1 as the singular part of X and think of ε: X^>1 → X as the inclusion of all points with nontrivial isotropy.
In the special case of Γ = C_p, we have Orb_s(C_p) = {C_p/C_p} ≃ ∗, so that PSh(Orb_s(C_p)) ≃ 𝒮. In this case, one can work out that s_! assigns to a space the constant diagram, as an object in 𝒮_C_p. Moreover, s^*X = X^C_p, and so we will also write X^>1 ∈ 𝒮_C_p as X^C_p interchangeably in this case.
Let Γ and Q be groups, N ≤Γ a normal subgroup and p Γ→ Q a surjective group homomorphism.
If Q acts on the topological space X, then there is a natural Γ/N-equivariant homeomorphism
N \ X ≅ p(N) \ X.
Here, X is considered as a Γ-space via p, and p(N)\X is considered as a Γ/N-space via Γ/N → Q/p(N). Specialising to orbit categories, we obtain the commutative square
Orb(Q) ⟶ Orb(Γ)
   ↓                ↓
Orb(Q/p(N)) ⟶ Orb(Γ/N),
where the horizontal functors send an orbit Q/H to Γ/p^-1(H), and the vertical functors are induced by the quotient maps. Applying PSh(−) with the (−)_!-functoriality, we get an equivalence of functors
N\infl^Q_Γ(−) ≃ infl^Q/p(N)_Γ/N(p(N)\(−)) : 𝒮_Q → 𝒮_Γ/N.
Given any discrete group Γ and a Γ–space X, as well as a proper normal subgroup N ⊂Γ such that the N-action on X is free, the inclusion X^>1→X induces equivalences
(N\X)^>1 ⟵ (N\X^>1)^>1 ⟶ N\X^>1.
Indeed, all functors involved commute with colimits, and the statement is clearly true for orbits, out of which every Γ–space may be built via colimits.
§.§ Genuine equivariant stable homotopy theory
Let G be a finite group. The stable category Sp_G of genuine G–spectra is a refinement of the category Sp^BG of spectra with G–action with better formal properties. This is a refinement in that Sp^BG sits fully faithfully in the category Sp_G, in fact in two different ways. One way to define Sp_G, following Barwick, is as the category Mack_G(Sp) ≔ Fun^×(Span(𝔽_G), Sp) of G–Mackey functors in spectra. A good introduction to the materials in this subsection may be found, for instance, in <cit.>.
The category of genuine equivariant spectra is valuable as it is a particularly conducive environment for inductive methods enabled by many compatibility structures between these categories for different groups, expressed in terms of various adjunctions. Moreover, Sp_G should also be thought of as the “universal category of equivariant homology theories” on 𝒮_G. For example, there is a symmetric monoidal colimit–preserving functor Σ^∞_+: 𝒮_G → Sp_G which is the analogue of the suspension spectrum functor for nonequivariant spectra, and Sp_G is generated under colimits by {Σ^∞_+G/H}_H≤ G.
[Restriction–(co)induction]
For a subgroup H ≤ G, we have the adjunctions
ind^G_H ⊣ res^G_H ⊣ coind^G_H, with res^G_H: Sp_G → Sp_H and ind^G_H, coind^G_H: Sp_H → Sp_G,
where moreover there is a canonical equivalence of functors ind^G_H ≃ coind^G_H, classically known as the Wirthmüller isomorphism.
[Genuine fixed points]
There is a functor (−)^G: Sp_G → Sp called the genuine fixed points functor which, from the Mackey functors perspective, is given by evaluating at G/G. This participates in an adjunction
infl^e_G : Sp ⇄ Sp_G : (−)^G
where infl^e_G preserves compact objects and is the unique symmetric monoidal colimit preserving functor from Sp to Sp_G. For every subgroup H ≤ G, we may also define the genuine H–fixed points functor (−)^H as the composite Sp_G → Sp_H → Sp of res^G_H and (−)^H.
[Borel fixed points]
There is a standard Bousfield (co)localisation
β_! ⊣ β^* ⊣ β_*, with β^*: Sp_G ↠ Sp^BG and fully faithful β_!, β_*: Sp^BG ↪ Sp_G,
where β^*X ≃ X^e, β_!β^*X ≃ EG_+ ⊗ X, and β_*β^*X ≃ F(EG_+, X). This well–known pair of adjunctions may for example be worked out from combining <cit.>. Under the Mackey functors perspective, β^* is given by evaluating at G/e. In particular, we see that Sp^BG embeds into Sp_G in two different ways, as mentioned above. Via the functor β^* as well as the homotopy orbits (−)_hG, homotopy fixed points (−)^hG, and Tate fixed points (−)^tG functors Sp^BG → Sp, we may also obtain functors (−)_hG, (−)^hG, (−)^tG: Sp_G → Sp, which fit in a fibre sequence of functors (−)_hG → (−)^hG → (−)^tG. In particular, these functors only depend on the underlying spectrum with G–action.
[Geometric fixed points]
There is a symmetric monoidal colimit–preserving functor Φ^G(−): Sp_G → Sp called the geometric fixed points which is uniquely characterised by sending Σ^∞_+G/H to 0 when H ⪇ G and to 𝕊 when H = G. For a subgroup H ≤ G, we may also define Φ^H as the composite Sp_G → Sp_H → Sp of res^G_H and Φ^H. The collection of functors Φ^H: Sp_G → Sp for all H ≤ G is jointly conservative.
The geometric fixed points functor participates in an adjunction
Φ^G : Sp_G ⇄ Sp : Ξ^G
where Ξ^G is fully faithful. For E ∈ Sp, Ξ^GE ∈ Sp_G is concretely given by the G–Mackey functor which assigns E to G/G and 0 to all G/H for H ⪇ G.
Furthermore, using that Sp is the initial presentably symmetric monoidal stable category, it is also not hard to see that Φ^G ∘ infl^e_G ≃ 𝕀_Sp.
Next, we recall the standard decomposition in the special case of genuine C_p–spectra, which is all that we will need in our work.
[C_p–stable recollement]
Let G = C_p. In this case, some of the adjunctions we have seen fit into a stable recollement (also called a split Verdier sequence)
Sp ⟶(Ξ^C_p) Sp_C_p ⟶(β^*) Sp^BC_p,
in which the fully faithful functor Ξ^C_p admits both a left adjoint, namely Φ^C_p, and a right adjoint, the functor β^* admits the fully faithful left and right adjoints β_! and β_*, and the top two layers of composites are fibre–cofibre sequences of presentable stable categories. This may be deduced, for example, from a combination of <cit.> and <cit.>. From this, one obtains for every E ∈ Sp_C_p a pullback square
E^C_p ⟶ Φ^C_pE
   ↓               ↓
E^hC_p ⟶ E^tC_p
of spectra (c.f. for instance <cit.> or <cit.>).
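As a consistency check of this square (a standard computation, not taken from the cited references): for E = Σ^∞_+C_p we have Φ^C_pE ≃ 0, and E^tC_p ≃ 0 since E is induced; the square thus collapses to an equivalence E^C_p ≃ E^hC_p, and indeed both sides identify with 𝕊, the genuine fixed points by the Wirthmüller isomorphism E ≃ coind^C_p_e𝕊 together with the restriction–coinduction adjunction, and the homotopy fixed points because the underlying spectrum with C_p-action is the coinduced ∏_C_p𝕊.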
§.§ Equivariant Poincaré duality
Let G be a finite group.
We briefly recall the theory of G-equivariant Poincaré duality spaces, which is built upon the notion of G–categories. Recall that the category Cat_G of G–categories is defined as Fun(Orb(G)^op, Cat), akin to the category of G–spaces. This category admits an internal functor G-category Fun_G(𝒞, 𝒟) for each pair 𝒞, 𝒟 ∈ Cat_G. This satisfies
Fun_G(𝒞, 𝒟)(G/H) ≃ Fun_H(res^G_H𝒞, res^G_H𝒟),
where the latter is the category of H–functors from res^G_H𝒞 to res^G_H𝒟. A very important G-category for us is the G-category Sp̲_G of genuine G-spectra, given by Sp̲_G(G/H) = Sp_H.
Since 𝒮_G is a full subcategory of Cat_G, we may view a G–space X as an object in Cat_G.
For a G-space X we denote the unique map to the point by
X: X → ∗.
Write Sp^X ≔ Fun_G(X, Sp̲_G) for the category of equivariant local systems on X. Explicitly, this amounts to specifying a local system of H-spectra X^H → Sp_H for each subgroup H ⊆ G, plus compatibilities.
Colimits, restriction, and limits of local systems give two adjunctions
X_! : Sp^X ⇄ Sp_G : X^* and X^* : Sp_G ⇄ Sp^X : X_*.
One should think of the colimit X_! E of an equivariant local system E on X as equivariant homology of X twisted by E and similarly of the limit as equivariant twisted cohomology.
The following is a recollection from <cit.>.
A compact G-space X admits an equivariant dualising spectrum D_X ∈ Fun_G(X, Sp̲_G) which comes together with a collapse map c: 𝕊_G → X_!D_X.
These are uniquely characterised by the property that the induced capping map
(c ⌢ −): X_*(−) → X_!(D_X ⊗ −)
is an equivalence.
Applying fixed points and homotopy groups, the collapse map really corresponds to a class in twisted equivariant homology such that capping with it induces an equivalence between equivariant cohomology and twisted equivariant homology.
Let us just mention that there is the larger class of twisted ambidextrous G-spaces for which an equivalence of the form <ref> exists.
A compact (or twisted ambidextrous) G-space X is called G-Poincaré if the dualising spectrum D_X takes values in invertible objects.
Let ξ ∈ Fun_G(X, Sp̲_G) be a local system of G-spectra on the G-space X. For an H-fixed point y ∈ X^H, i.e. a map y: G/H → X, using the composition
Fun_G(X, Sp̲_G) ⟶ Fun_G(G/H, Sp̲_G) ≃ Sp_H
we obtain an H-spectrum that we will denote by ξ(y).
Note that a compact (or twisted ambidextrous) G-space X is G-Poincaré if and only if for each y ∈ X^H the value D_X(y) ∈ Sp_H is an invertible H-spectrum.
Let X be a G-Poincaré space. Then for each closed subgroup H ≤ G, the space X^H is a (nonequivariant) Poincaré space. Moreover, its dualising spectrum is given as the composite
X^H ⟶ Sp_H ⟶ Sp, restricting D_X and then applying Φ^H.
[ <cit.>]
Let p be an odd prime and k ≥ 1 an integer.
There exists a compact C_p-space X for which
X^e is contractible while X^C_p ≃ ℝP^2k. No such C_p-space is C_p-Poincaré.
In particular, there are compact G-spaces such that all fixed points are nonequivariant Poincaré spaces which are not themselves G-Poincaré.
§ POINCARÉ DUALITY FOR THE GROUP C_p
In this section, we investigate equivariant Poincaré duality for the group C_p more closely.
Our goal is to prove <ref> (c.f. <ref>), which gives a somewhat computable condition for a compact C_p-space X to be C_p-Poincaré, assuming that X^e and X^C_p are nonequivariant Poincaré spaces. This amounts to checking that D_X lands in invertible objects.
Since invertibility is a pointwise condition, and since we already know that D_X^e: X^e → Sp lands in invertible objects, it suffices to show that D_X(x) ∈ Sp_C_p is invertible for every x ∈ X^C_p. Moreover, from our hypothesis and <ref>, we already know that D_X(x)^e and Φ^C_pD_X(x) are invertible spectra. This consideration leads us to record the following well–known observation.
Let E ∈_C_p be such that E^e and Φ^C_p E are invertible. Then:
E is invertible ⟺ E is dualisable ⟺ E is compact.
In Sp_C_p, dualisability and compactness are equivalent, and invertible spectra are dualisable.
So it suffices to show that if E is dualisable, then it is invertible, i.e. that the counit E ⊗ E^∨ → 𝕊_C_p
is an equivalence.
But this can be checked after applying (-)^e and Φ^C_p(-), which are jointly conservative.
As both of these functors are symmetric monoidal, the counit for E is sent to the counit for E^e and Φ^C_p E, both of which we assumed to be equivalences.
Thus, by virtue of <ref>, our task at hand is tantamount to ensuring that the C_p-spectrum D_X(x) is compact for every x∈ X^C_p. To this end, we will employ various cellular manoeuvres in <ref> to obtain “compact approximations” to any C_p–spectrum; we then characterise compactness of a C_p–spectrum with vanishing geometric fixed points through its underlying Borel-C_p-spectrum in <ref>. Lastly, we combine all these in <ref> to obtain the promised recognition principle for C_p-Poincaré spaces.
Our work on C_p-spectra drew heavily on at least two sources. The first is <cit.>, which gives a nice computation of the Picard group of Sp_C_p and whose methods we expand on. The second is <cit.>, which gives another compactness (or dualisability) criterion for C_p-spectra. Our approach is not exactly tailored to the methods in the latter source, and we will not need to refer to them, but it might be possible that they give another way of proving the main results in this section.
§.§ Cellular manoeuvres and compact approximations
Recall that a C_p-spectrum is finite if it lies in the stable subcategory of Sp_C_p generated by Σ^∞_+C_p/e ≃ ind^C_p_e𝕊 and Σ^∞_+C_p/C_p = 𝕊_C_p.
A C_p-spectrum is compact if and only if it is a retract of a finite C_p-spectrum.
Let X ∈_C_p be such that X^e is bounded below and such that π_k(X^e) is a finitely generated abelian group for k ≤ N for some N. Then there is a fiber sequence
F → X → Y
in _C_p such that F is finite, Y^e is N-connected and Φ^C_p X →Φ^C_p Y is an equivalence.
Note that if A → B and B → C are maps of C_p-spectra whose fibers are compact with trivial geometric fixed points, then the composition A → C satisfies the same condition. Thus, by induction it suffices to consider the case where N = 0 and X^e is (-1)-connected. Pick a finite set of generators {f_i: 𝕊 → X^e} of π_0(X^e). Then the composition
f = (⊕_i ind^C_p_e𝕊 ⟶ ind^C_p_e X^e ⟶(c) X),
where c denotes the counit, induces a surjection on π_0 upon applying (−)^e. Now define Y to be the cofiber of f and F its source. This finishes the proof since ⊕_i ind^C_p_e𝕊 is compact and satisfies Φ^C_p(⊕_i ind^C_p_e𝕊) ≃ 0.
Unlike taking the appropriate connective covers, the procedure of tucking cannot be used in general to kill the homotopy groups of X^e. The reason is that the effect on the next higher homotopy group is quite brutal. However, if X^e is (l-1)-connected and π_l X^e is a finitely generated free ℤ[C_p]-module, then the proof of <ref> shows that we can kill π_l X^e while making sure that the next higher homology group is unchanged. On the other hand, tucking preserves the geometric fixed points whereas the aforementioned connective covers do not.
Let Q be a compact spectrum, E a C_p-spectrum and f: Q → Φ^C_pE a map.
Then there exists a compact C_p-spectrum F, a map g: F → E, and an identification Φ^C_pF ≃ Q under which Φ^C_pg = f.
First we reduce to the case where E is compact.
Write E = colim_i∈I E_i as a filtered colimit of compact C_p–spectra.
As Φ^C_p commutes with colimits and Q is compact, there is some i ∈ I for which f Q →Φ^C_p E factors through the compact spectrum Φ^C_p E_i.
If now F is a compact C_p-spectrum together with a map F → E_i which induces the map Q →Φ^C_p E_i on geometric fixed points, then the composite F → E_i → E satisfies the claim.
Now assume that E is compact.
Then there exists k ∈ ℤ such that each map Q → T to a k-connected spectrum T is nullhomotopic.
By <ref>, we can find U ∈_C_p together with a map E → U that has compact fiber and that induces an equivalence on geometric fixed points, such that U^e is (k-1)-connected. Thus, Σ U^e is k-connected, and consequently also Σ U_hC_p is k-connected. Note that U is compact as well.
Consider the following situation, in which the square is cartesian by <ref> and the lower maps U^hC_p → U^tC_p → ΣU_hC_p form a fiber sequence:
U^C_p ⟶ Φ^C_pU
   ↓                ↓
U^hC_p ⟶ U^tC_p ⟶ ΣU_hC_p,
together with the composite
Q ⟶ Φ^C_pE ⟶ Φ^C_pU ⟶ U^tC_p ⟶ ΣU_hC_p,
which is nullhomotopic since ΣU_hC_p is k-connected.
The nullhomotopy of the long composite induces a morphism a: Q → U^hC_p lifting Q → U^tC_p along the fiber sequence, which in turn determines a morphism b: Q → U^C_p by the universal property of the pullback. By the adjunction from <ref>, the map b is adjoint to a map
q: infl^e_C_pQ → U which induces the map Q → Φ^C_pU on geometric fixed points. This fits into a fibre sequence F → E → cofib(q), where the map E → cofib(q) is induced by the map E → U from above.
Note that F is compact, as E is compact, U is compact as observed above, and infl^e_C_p preserves compactness.
On geometric fixed points, under the identification Φ^C_pE ≃ Φ^C_pU, this gives
Φ^C_pF ≃ fib(Φ^C_pE → cofib(Q → Φ^C_pE)) ≃ Q
as desired.
Clearly, this identifies the map Φ^C_p F →Φ^C_pE with f.
If E is a C_p-spectrum with Φ^C_p E compact, then there exists a finite C_p-spectrum F and a map g F → E
which induces an equivalence on geometric fixed points.
Set f = 𝕀_Φ^C_pE in <ref>.
The following lemma will be useful later.
Consider a cospan X ⟶(f) Z ⟵(g) Y in Sp_C_p where f and g induce equivalences on geometric fixed points. Additionally suppose Φ^C_pX ∈ Sp is compact. Then there exists a commutative square
F ⟶ Y
↓          ↓ g
X ⟶(f) Z
with F compact such that all maps are equivalences on geometric fixed points.
Let E = X ×_Z Y and find a fiber sequence F → E → E'
with F compact and Φ^C_p E' ≃ 0 as provided by <ref>.
Then the outer quadrilateral in the diagram
F ⟶ E ⟶ Y
          ↓          ↓
          X ⟶ Z,
formed by the composites F → E → X and F → E → Y,
has the desired properties.
§.§ Compact and induced C_p–spectra
In this section we characterise compactness for C_p-spectra with trivial geometric fixed points through their underlying Borel C_p-spectrum.
Notice that C_p–spectra with vanishing geometric fixed points have the following crucial properties.
Let X be a C_p-spectrum with Φ^C_p X ≃ 0. Then
(1) for every Y ∈ Sp_C_p the map (−)^e: map_Sp_C_p(X, Y) → map_Sp^BC_p(X^e, Y^e) is an equivalence;
(2) the C_p-spectrum X is compact in Sp_C_p if and only if X^e is compact in Sp^BC_p.
We use the notations from <ref>. Observe by <ref> that Φ^C_pX ≃ 0 is equivalent to the condition that the adjunction counit β_!β^*X → X is an equivalence. Now for (1), just note that
map_Sp_C_p(X, Y) ≃ map_Sp_C_p(β_!β^*X, Y) ≃ map_Sp^BC_p(β^*X, β^*Y), as claimed. For (2), note that β_!: Sp^BC_p → Sp_C_p preserves and detects compactness, as it is fully faithful and admits the colimit preserving right adjoint β^*.
This shows that X^e = β^*X ∈ Sp^BC_p is compact if and only if X ≃ β_!β^*X ∈ Sp_C_p is compact.
Following the notation from <cit.>, we write Sp^BC_p_ind ⊆ Sp^BC_p for the smallest idempotent–complete stable subcategory generated by the image of the functor ind^C_p_e: Sp → Sp^BC_p. Similarly, we write Sp^ind_C_p ⊆ Sp_C_p for the smallest idempotent–complete stable subcategory containing the image of ind^C_p_e: Sp → Sp_C_p.
By <ref>(1), the functor (−)^e: Sp_C_p → Sp^BC_p restricts to a fully faithful functor Sp^ind_C_p → Sp^BC_p_ind, which is also essentially surjective (and so is an equivalence) since (−)^e = β^* is essentially surjective and β_! and β^* are compatible with ind^C_p_e.
As full subcategories of Sp^BC_p, we have the equality
(Sp^ω)^BC_p ∩ Sp^BC_p_ind = (Sp^BC_p)^ω.
Thus, if E ∈ Sp_C_p with Φ^C_pE ≃ 0, then E is compact if and only if E^e ∈ (Sp^ω)^BC_p ∩ Sp^BC_p_ind.
The inclusion (Sp^ω)^BC_p ∩ Sp^BC_p_ind ⊇ (Sp^BC_p)^ω is clear since (Sp^BC_p)^ω is generated under finite colimits and retracts by Σ^∞_+C_p/e ≃ ind^C_p_e𝕊. For the converse, we use that
(Sp^BC_p)^ω ↪ (Sp^ω)^BC_p ↠ 𝒬 ≔ (Sp^ω)^BC_p/(Sp^BC_p)^ω
is a fibre sequence of small stable categories (c.f. for instance <cit.>) and that for X, Y ∈ (Sp^ω)^BC_p we have the formula (c.f. for instance <cit.>)
map_𝒬(X, Y) ≃ (Y ⊗ DX)^tC_p,
where DX is the pointwise Spanier–Whitehead dual in (Sp^ω)^BC_p.
Observe that for any X ∈ Sp^BC_p and Y ∈ Sp^BC_p_ind one has (Y ⊗ F(X, 𝕊))^tC_p ≃ 0, owing to the fact that (−)^tC_p vanishes on Sp^BC_p_ind and that Sp^BC_p_ind ⊆ Sp^BC_p is a tensor–ideal. Therefore, for Z ∈ (Sp^ω)^BC_p ∩ Sp^BC_p_ind, we see that
map_𝒬(Z, Z) ≃ (Z ⊗ DZ)^tC_p ≃ 0,
and so Z is in the kernel of the functor (Sp^ω)^BC_p → 𝒬. Hence, by the fibre sequence above, we see that Z ∈ (Sp^BC_p)^ω as required.
The statement about compact C_p-spectra follows by combining the first part with <ref> (2).
§.§ Recognising C_p–Poincaré spaces
[Contravariant functoriality of dualising spectra]
Consider a map f: Y → X in 𝒮_C_p^ω.
We explain how to construct a canonical “wrong–way” map
D_X ⟶ f_!D_Y.
Combining the contravariant functoriality of cohomology from <cit.> with the defining property of the dualising spectrum, we obtain the natural transformation
X_!(D_X ⊗ −) ≃ X_*(−) ⟶ Y_*f^*(−) ≃ Y_!(D_Y ⊗ f^*(−)) ≃ X_!(f_!D_Y ⊗ −).
By the classification of colimit preserving functors, see <cit.> or <cit.>, this is induced by a map D_X → f_!D_Y.
Now consider X ∈ 𝒮_C_p^ω.
For the inclusion ε: X^C_p → X of the singular part from <ref> we thus obtain a map D_X → ε_!D_X^C_p.
Applying ε^* yields the map
ε^*D_X ⟶ ε^*ε_!D_X^C_p,
which may be viewed as a morphism in the nonparametrised functor category Fun(X^C_p, Sp_C_p) ≃ Fun_C_p(X^C_p, Sp̲_C_p); this equivalence may be obtained by applying <cit.> to the adjunction infl^e_G : Sp ⇌ Sp_G : (−)^G.
The wrong–way map <ref> satisfies the following key vanishing result permitting our characterisation of C_p–Poincaré spaces. By virtue of the lemma, the cofibre of <ref> may be viewed as measuring the “geometric free part” of the dualising sheaf D_X.
Let X ∈ 𝒮_C_p^ω and let ν: Sp_C_p → 𝒟 be an exact functor which vanishes on Sp^ind_C_p. Then the map
ν(ε^*D_X) ⟶ ν(ε^*ε_!D_X^C_p)
in Fun(X^C_p, 𝒟) induced by <ref> is an equivalence.
We have to show, for any x ∈ X^C_p, that the map
ν(D_X(x)) ⟶ ν((ε_!D_X^C_p)(x)) is an equivalence.
First, let us show that for any compact C_p-space X the map εX^C_p→X induces an equivalence
ν(X_*(-)) ≃ν(X^C_p_*ε^*(-)).
If ε: X^C_p → X is an equivalence, this is a tautology. The class of spaces for which the assertion is true is moreover stable under pushouts and retracts, and it contains Y = C_p/e, as
0 ≃ ν(∅_*ε^*(−)) and ν((C_p/e)_*(−)) ≃ ν((C_p/e)_!(−)) ≃ ν(ind^C_p_e(−)) ≃ 0.
Using that X_* ≃ X_!(D_X⊗ -) and X^C_p_* ϵ^* ≃ X^C_p_! (D_X^C_p⊗ϵ^* -) ≃ X_! (ϵ_! D_X^C_p⊗ -) we obtain the equivalence
ν(X_! (D_X⊗-)) ≃ν(X_!(ε_! D_X^C_p⊗-)).
Now consider a fixed point x: ∗ → X^C_p (which we also view as a point x: ∗ → X).
Note that the projection formula provides an equivalence, natural in E ∈ Sp^X,
X_!(E ⊗ x_!(𝕊_C_p)) ≃ X_!x_!(x^*E) ≃ x^*E = E(x).
Thus, for any x ∈ X^C_p, the map
ν(D_X(x)) → ν((ε_!D_X^C_p)(x)) is an equivalence, whence the result.
We are now ready to prove our main result characterising C_p–Poincaré spaces. Note that, unlike <ref>, the key characterising property is given solely in terms of D_X^C_p and does not involve D_X.
Let X be a compact C_p-space for which X^e and X^C_p are (nonequivariant) Poincaré spaces. Then X is C_p-Poincaré if and only if the cofiber
cofib(D_X^C_p → ε^*ε_!D_X^C_p)^e ∈ Fun(X^C_p, Sp^BC_p)
pointwise lies in the stable subcategory Sp^BC_p_ind ⊆ Sp^BC_p.
As X^e is assumed to be Poincaré, the spectra D_X(y) = D_X^e(y) ∈ Sp are invertible for all y ∈ X^e.
Furthermore, as X^C_p is Poincaré, we know that Φ^C_pD_X(x) = D_X^C_p(x) is invertible for all x ∈ X^C_p.
It now follows from <ref> that X is C_p-Poincaré if and only if the C_p-spectrum D_X(x) is compact for all points x∈ X^C_p.
Note that all maps in the bottom right cospan in the diagram
F [d, dashed, "g'"] [r, dashed, "f'"]
D_X^C_p(x) [d, "g"]
D_X(x) [r, "f"]
ε_! D_X^C_p(x)
induce equivalences on geometric fixed points: the map f by <ref>, and the map g by <cit.>. We can use <ref> to complete <ref> to a commutative square of C_p-spectra where F is compact and all maps are equivalences on geometric fixed points.
Consider the exact functor
ν = (Sp_C_p ⟶(−)^e Sp^BC_p ⟶ Sp^BC_p/Sp^BC_p_ind).
Note that as g' and f' are maps between compact C_p-spectra that induce an equivalence on geometric fixed points, <ref> shows that ν(cofib(f')) ≃ ν(cofib(g')) ≃ 0, so ν(f') and ν(g') are equivalences.
Let us first assume that X is C_p-Poincaré.
It follows from <ref> that ν(cofib(f)) ≃ 0, so ν(f) is also an equivalence.
Thus, ν(g) is an equivalence, from which we obtain ν(cofib(g)) ≃ 0, proving one direction of the claim.
For the other direction, assume ν(cofib(g)) ≃ 0, i.e. that ν(g) is an equivalence.
As before, as ν(f) and ν(f') are equivalences, we obtain that ν(cofib(g')) ≃ 0.
By definition of ν, this means cofib(g')^e ∈ Sp^BC_p_ind.
Now, F ∈ Sp_C_p and D_X(x) ∈ Sp_C_p both have compact underlying spectra, so cofib(g')^e is compact; hence, it follows from <ref> that cofib(g') ∈ Sp_C_p is compact.
But then D_X(x) is compact too, as was to be shown.
§ GENUINE VIRTUAL POINCARÉ DUALITY GROUPS
In this section, we define a refinement of the classical notion of virtual Poincaré duality groups. Recall that a Poincaré duality group is a discrete group π such that Bπ is a Poincaré space, and a group is a virtual Poincaré duality group if it contains a Poincaré duality group of finite index. In this case, every finite index torsionfree subgroup will be a Poincaré duality group.
Now, if Γ is a discrete group and π a finite index torsionfree normal subgroup, then the space Bπ can be enhanced to a Γ/π-space in a canonical way, by viewing it as the quotient π\E̲Γ of the universal space for the family of finite subgroups of Γ.
One might naturally wonder if that Γ/π-space is Γ/π-Poincaré, in which case we call Γ a genuine virtual Poincaré duality group.
In fact, we will give a slightly more general definition that also includes the case where Γ is a Lie group.
Heuristically, genuine virtual Poincaré duality groups are those which capture the homotopical properties of groups for which the universal space of proper actions admits a smooth manifold model.
§.§ Universal spaces for proper actions
Let us first collect some examples and constructions for universal spaces for proper actions from the literature.
Let Γ be a Lie group. By 𝒞om we denote the family of compact subgroups of Γ.
The universal space for proper actions is the universal space for the family 𝒞om, and is denoted by E̲Γ.
If Γ is discrete, the family of compact subgroups of Γ agrees with the family of finite subgroups, which we denote by ℱin.
[<cit.>, Thm 4.15.]
Assume the Lie group Γ acts properly, smoothly and isometrically on a simply connected complete Riemannian manifold M with nonpositive sectional curvature. Then M with its Γ-action provides a model for E̲Γ.
[<cit.>]
Suppose Γ is a hyperbolic (discrete) group.
Then a barycentric subdivision of the Rips complex of Γ, for sufficiently large δ > 0 and a word metric on Γ, provides a finite Γ-CW model for E̲Γ. In particular, the Γ-space E̲Γ is compact.
More examples for geometrically interesting models for the universal spaces for proper actions can be found in the survey <cit.> and the references therein.
For the next statement, recall that for a subgroup H ≤Γ, its normaliser is defined as N_Γ H {g ∈Γ| g H g^-1 = H} and its Weyl group as W_Γ H N_Γ H / H.
Let Γ be a discrete group such that each nontrivial finite subgroup is contained in a unique maximal finite subgroup.
Let ℳ be a set of representatives of conjugacy classes of maximal finite subgroups of Γ.
Then the square
∐_F∈ℳ ind^Γ_N_ΓF EN_ΓF ⟶ EΓ
            ↓                                ↓
∐_F∈ℳ ind^Γ_N_ΓF E̲N_ΓF ⟶ E̲Γ
is a pushout in 𝒮_Γ.
To prove <ref> one checks that <ref> is a pushout on all fixed points by distinguishing the three cases H = 1, H ≠ 1 finite and H infinite.
Examples of groups for which it applies are discrete groups Γ for which there exists π torsionfree and an extension
1 →π→Γ→ C_p → 1.
If Γ is assumed pseudofree, then we see that E̲Γ^>1 is discrete.
Let us give an illustrative geometric example.
Consider the group Γ = p3, the symmetry group of the wallpaper depicted in <ref>, with its action on the Euclidean plane.
This action is isometric and has finite stabilisers, so by <ref> the plane is a model for the universal space E̲p3 of proper actions of p3.
Note that the translations form a normal torsionfree subgroup of p3 of index 3.
We can apply <ref> to obtain that the subspace of the plane with nontrivial Γ-isotropy is Γ-homotopy equivalent to ∐_F ind^Γ_N_ΓF E̲N_ΓF.
Now from the picture it is easy to read off that the singular part is indeed discrete and consists of three Γ-orbits.
We conclude that p3 has precisely three conjugacy classes of nontrivial finite subgroups, and each such nontrivial finite F ⊂ Γ satisfies N_ΓF = F.
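This count can also be verified by a small computation. The following sketch is our own illustration and not part of the original argument; it assumes the standard identification p3 ≅ ℤ² ⋊ C_3, with the rotation acting in a hexagonal lattice basis by the order-3 integer matrix A below. It enumerates the fixed points (rotation centers) of the elements (A^k, t) modulo translations and counts orbits under the residual rotation action.

import itertools
import numpy as np

# Hexagonal lattice basis (our choice); the 3-fold rotation of p3 acts on
# Z^2 by the order-3 matrix A.  Both are assumptions of this illustration.
A = np.array([[0, -1], [1, -1]])
assert np.array_equal(np.linalg.matrix_power(A, 3), np.eye(2, dtype=int))

# A rotation (A^k, t) in p3 (k = 1, 2) fixes the unique point x with
# (I - A^k) x = t; since det(I - A^k) = 3, x lies in (1/3) Z^2.
# We record the class of 3x modulo 3, i.e. the center modulo translations.
def center(k, t):
    x = np.linalg.solve(np.eye(2) - np.linalg.matrix_power(A, k), t)
    return tuple(np.rint(3 * x).astype(int) % 3)

centers = {center(k, np.array(t))
           for k in (1, 2)
           for t in itertools.product(range(-3, 4), repeat=2)}

# Quotient by the residual rotation action: orbits of centers correspond
# to conjugacy classes of maximal finite subgroups of p3.
def orbit(c):
    return frozenset(tuple(np.linalg.matrix_power(A, k) @ np.array(c) % 3)
                     for k in range(3))

print(sorted(centers), len({orbit(c) for c in centers}))

The script prints the three center classes and the count 3, matching the three Γ-orbits read off from the picture.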
§.§ Equivariant Poincaré duality for groups
For the following, a closed subgroup π⊂Γ of a Lie group is cocompact if the topological space π\Γ is compact. If π⊂Γ is normal, this is equivalent to Γ/π being a compact Lie group.
Let Γ be a Lie group and let π ⊂ Γ be a cocompact torsionfree discrete normal subgroup. We write
E̲Γ_π ≔ π\E̲Γ ∈ 𝒮_Γ/π
for the quotient of E̲Γ by the action of π.
The following definition is supposed to capture the homological properties of Lie groups Γ that admit a cocompact smooth manifold model for E̲Γ (and a discrete torsionfree normal cocompact subgroup).
If Γ is torsionfree, it reduces to Γ being a Poincaré duality group. Here, and only here, we refer to Poincaré duality for compact Lie groups as also developed in <cit.>, but the reader mainly interested in discrete group actions can assume Γ to be discrete throughout.
Let Γ be a Lie group. Then Γ is called a genuine virtual Poincaré duality group if
it has a cocompact torsionfree discrete normal subgroup, and if for any such subgroup π ⊂ Γ, the Γ/π-space
E̲Γ_π is a Γ/π-Poincaré space.
Suppose Γ is a Lie group with torsionfree cocompact normal subgroups π, π' ⊂Γ whose intersection is again cocompact. Then
E̲Γ_π is Γ/π-Poincaré ⟺ E̲Γ_π' is Γ/π'-Poincaré.
It suffices to consider the case where π ⊆ π'.
Note that the normal subgroup π'/π ⊆ Γ/π acts freely on E̲Γ_π.
Applying <cit.>, we see that E̲Γ_π is Γ/π-Poincaré if and only if (π'/π)\E̲Γ_π ≃ E̲Γ_π' is Γ/π'-Poincaré.
Note that, in general, the intersection of two cocompact subgroups is not again cocompact, e.g. for ℤ, √(2)ℤ⊂ℝ.
In the case where Γ is discrete, the intersection of two finite index subgroups again has finite index, from which we obtain the following result.
Suppose Γ is a discrete group. Then the following are equivalent.
(1) The group Γ is a genuine virtual Poincaré duality group.
(2) There exists some torsionfree finite index normal subgroup π ⊂ Γ such that the Γ/π-space E̲Γ_π is Γ/π-Poincaré.
§ EXTENSIONS BY C_p
In this section, we study genuine virtual Poincaré duality groups sitting in an extension
1 →π→Γ→ C_p → 1
more closely.
In <ref> we will prove the characterisation from <ref>;
in <ref> we will use this to prove Condition (H) for pseudofree extensions.
§.§ Characterisation of genuine virtual Poincaré duality groups
Consider an extension of groups of the form <ref> where π is torsionfree. Write ℳ for a complete set of representatives of the conjugacy classes of nontrivial finite subgroups of Γ.
(1) If F ≤ Γ is a nontrivial subgroup with π ∩ F = e (e.g. F is finite), then the composition of F ↣ Γ → C_p is an isomorphism. In particular, F is a maximal finite subgroup of Γ.
(2) There is an equivalence of spaces
E̲Γ_π^C_p ≃ ∐_F∈ℳ BW_ΓF.
Point (1) follows as the kernel of Γ→ C_p is torsionfree, so every finite subgroup of Γ will inject into C_p. For (2), observe that if π acts freely on a Γ-space Y, then the map
(π\Y^>1)^C_p→ (π\Y)^C_p
is an equivalence by <ref>.
Thus, since applying (π\−)^C_p to the top row in the pushout square <ref>
yields a map of empty spaces, we get an identification
E̲Γ_π^C_p ≃ (π\∐_F∈ℳ ind^Γ_N_ΓF E̲N_ΓF)^C_p ≃ ∐_F∈ℳ (π\ind^Γ_N_ΓF E̲N_ΓF)^C_p.
Consider the surjective composition of group homomorphisms N_ΓF ⊂ Γ → C_p. By <ref>, we get
π\ind^Γ_N_ΓF E̲N_ΓF ≃ (π ∩ N_ΓF)\E̲N_ΓF.
Now note that, by (1), the only finite subgroups of N_ΓF are e and F.
This implies E̲N_ΓF ≃ infl^W_ΓF_N_ΓF EW_ΓF.
Also, the composition π ∩ N_ΓF ⊂ N_ΓF → W_ΓF is an isomorphism. Indeed, it is injective, as it has at most finite kernel and π is torsionfree, and it is surjective as N_ΓF is generated by F and π ∩ N_ΓF, and F maps trivially to W_ΓF.
We thus obtain
(π ∩ N_ΓF)\E̲N_ΓF ≃ (π ∩ N_ΓF)\infl^W_ΓF_N_ΓF EW_ΓF ≃ infl^e_C_p BW_ΓF,
the second equivalence being an instance of <ref> for G = N_ΓF, Q = W_ΓF and N = N_ΓF ∩ π.
This finishes the proof of the second assertion.
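As a sanity check of this computation in the (here excluded, but instructive) case p = 2, which is our illustration rather than part of the original text: for the infinite dihedral group Γ = D_∞ = ℤ ⋊ C_2 with π = ℤ, the real line with its isometric D_∞-action is a model for E̲D_∞ by <ref>, so E̲Γ_π ≃ ℤ\ℝ is a circle with C_2 acting by a reflection. Its fixed points consist of two points; correspondingly, D_∞ has exactly two conjugacy classes of nontrivial finite subgroups F ≅ C_2, each satisfying N_ΓF = F, so that W_ΓF = e and BW_ΓF ≃ ∗, in agreement with the formula of (2).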
In the situation of <ref>, for each x: ∗ → E̲Γ_π^C_p there is a pullback
T ⟶(a) E̲Γ_π^C_p
↓ p              ↓ j
∗ ⟶(b) E̲Γ_π
of C_p-spaces where T is a disjoint union of C_p-orbits and has exactly one fixed point.
Furthermore, x ≃ a∘s, where s: ∗ → T denotes the section coming from the fixed point of T.
Furthermore, x ≃ a s where s *→T denotes the section coming from the fixed point of T.
Note that the π-actions on E̲Γ as well as on E̲Γ^>1 are free, so that by <cit.> we have a cartesian square
E̲Γ^>1 ⟶ infl^C_p_Γ E̲Γ_π^C_p
   ↓                       ↓ j
E̲Γ ⟶ infl^C_p_Γ E̲Γ_π.
Now, recalling <ref>, we have E̲Γ_π^C_p ≃ π\E̲Γ^>1.
The point x gives rise to a map of Γ-spaces h: ∗ → infl^C_p_Γ E̲Γ_π^C_p, and so by <cit.>, we get a commuting square
Γ/F ⟶(f) E̲Γ^>1
   ↓                  ↓
∗ ⟶(h = π\f) infl^C_p_Γ E̲Γ_π^C_p
for F a subgroup with π\(Γ/F) ≃ ∗ (so that F is nontrivial) and π ∩ F = e.
From <ref> we now learn that the map F ⊂ Γ → C_p is an isomorphism. Hence, F can be used to define a section s: C_p → Γ.
We will now construct the following diagram.
Restricting the pullback <ref> along s, we obtain the outer cartesian square of C_p-spaces in the diagram
res^Γ_C_p∐_F'∈ℳΓ/N_ΓF' ⟶(≃) res^Γ_C_pE̲Γ^>1 ⟶ res^Γ_C_p infl^C_p_Γ E̲Γ_π^C_p ⟶(≃) E̲Γ_π^C_p
            ↓           (A)            ↓            (B)               ↓ j            (C)            ↓
∗ ⟶(≃) res^Γ_C_pE̲Γ ⟶ res^Γ_C_p infl^C_p_Γ E̲Γ_π ⟶(≃) E̲Γ_π,
where (A), (B), (C) label the three inner squares from left to right.
Here, the lower equivalence in square (A) comes from the observation that for any group G, the space E̲G becomes equivariantly contractible when restricted to a finite subgroup. The upper equivalence combines <ref> with the observation that res^Γ_C_p ind^Γ_N_ΓF E̲N_ΓF → res^Γ_C_p ind^Γ_N_ΓF ∗ = res^Γ_C_p Γ/N_ΓF is an equivalence, which is checked most easily by looking at the map on underlying spaces and on C_p-fixed points.
The square labelled (B) is obtained by restricting the pullback <ref> along the section s: C_p → Γ. By virtue of s being a section of Γ → C_p, and as inflation is nothing but restriction along a surjective group homomorphism, we have res_s ∘ infl^C_p_Γ ≃ 𝕀_𝒮_C_p, explaining the identifications in square (C). It now follows from <ref> below that the upper left corner has exactly one fixed point, as required.
To see the last statement, that x ≃ a∘s, observe that the section s was chosen to identify C_p with a specific finite subgroup F ⊂ Γ having the property that the composition
∗ ⟶ res^Γ_C_pΓ/F ⟶ res^Γ_C_pE̲Γ^>1 ⟶ E̲Γ_π^C_p
is the point x by <ref>.
Let Γ be a group for which each nontrivial finite subgroup is contained in a unique maximal finite subgroup.
Then for maximal finite subgroups F, F' ⊆Γ we have that F acts freely on Γ / N_Γ F' if F and F' are not conjugate and (Γ / N_Γ F')^F = * if F and F' are conjugate.
Suppose there is f ∈ F \ e and g ∈Γ such that f g N_Γ F' = g N_Γ F'.
Then g^-1 f g ∈ N_Γ F' so the subgroup generated by F' and g^-1 f g is finite.
By maximality of F', we obtain g^-1 f g ∈ F'.
The nontrivial element f lies in both maximal finite subgroups F and g F' g^-1 of Γ which agree by uniqueness.
This shows that the F-action on Γ / N_Γ F' is free if F and F' are not conjugate.
In the other case it suffices to show that (Γ/N_Γ F)^F = *.
One fixed point is clearly given by e N_Γ F.
Suppose that there are f ∈ F\ e and g ∈Γ such that f g N_Γ F = g N_Γ F.
The argument from above shows that F = g F g^-1, i.e. g ∈ N_Γ F.
We now come to the proof of our main characterisation result for genuine virtual Poincaré duality groups coming from C_p–extensions.
Consider an extension of groups
1 →π→Γ→ C_p → 1
where π is a Poincaré duality group, and assume that the Γ-space E̲Γ is compact. Then the following are equivalent.
(1) The group Γ is a genuine virtual Poincaré duality group.
(2) For each nontrivial finite subgroup F ⊂ Γ, the Weyl group W_ΓF is a Poincaré duality group.
First of all, note that since π\(−) preserves compact objects (it admits a right adjoint which itself admits a further right adjoint), the C_p-space E̲Γ_π = π\E̲Γ is compact. As such, E̲Γ_π^C_p is also compact.
To prove that (1) implies (2), recall that by <ref> we get an equivalence
E̲Γ_π^C_p ≃ ∐_F∈ℳ BW_ΓF
where F runs through a set ℳ of representatives of the conjugacy classes of nontrivial finite subgroups. If a space is Poincaré, then each individual component is Poincaré. So we learn that W_Γ F is a Poincaré duality group for each F ∈ℳ. As conjugate subgroups have isomorphic Weyl groups, this implies that the conclusion holds for each nontrivial finite subgroup F.
To prove that (2) implies (1): since E̲Γ_π^C_p is compact by the first paragraph, there must be only finitely many components in the decomposition <ref>. By the hypothesis of (2), each component is Poincaré, implying that E̲Γ_π^C_p is (nonequivariantly) Poincaré.
We are thus in the situation of <ref>.
To apply it, we have to show that for all x ∈ E̲Γ_π^C_p, the cofiber of the map D_X^C_p(x)^e → (j^*j_!D_X^C_p)(x)^e lies in the subcategory Sp^BC_p_ind ⊂ Sp^BC_p generated by induced spectra, where j: X^C_p → X denotes the inclusion and where, for ease of notation, we write X ≔ E̲Γ_π in the following.
From <ref>, we get a cartesian square of C_p-spaces
T ⟶(a) X^C_p
↓ p            ↓ j
∗ ⟶(b) X
where T = ∗ ∐ S and S is a disjoint union of free C_p-orbits, and where the point x corresponds to the image of the composite a∘s, with s: ∗ → T the section coming from the fixed point of T.
Now observe that the map D_X^C_p(x) → (j^*j_!D_X^C_p)(x) identifies with the map s^*a^*D_X^C_p → s^*p^*p_!a^*D_X^C_p ≃ colim_T a^*D_X^C_p induced by the unit 𝕀 → p^*p_!.
The decomposition T ≃ S ∐ ∗ now provides a splitting
colim_T a^*D_X^C_p ≃ s^*a^*D_X^C_p ⊕ colim_S (a^*D_X^C_p)|_S
and the induced map
s^*a^*D_X^C_p ⟶ s^*a^*D_X^C_p ⊕ colim_S (a^*D_X^C_p)|_S
is an equivalence onto the first component by functoriality of colimits.
The C_p-spectrum colim_S (a^*D_X^C_p)|_S is induced, as S is free.
Together this shows that the cofiber of the map s^*a^*D_X^C_p → s^*a^*j^*j_!D_X^C_p lies in the subcategory Sp^BC_p_ind ⊂ Sp^BC_p, as was to be shown.
Instead of explicitly identifying the map ϵ: D_X^C_p(x) → (j^*j_!D_X^C_p)(x) in the last step of the argument above, one can also finish the proof using the following trick. Using the splitting <ref>, one can reduce to showing that the cofiber of a selfmap f of the invertible spectrum D_X^C_p(x) is induced. As j is an equivalence on C_p-fixed points, one sees that Φ^C_p(ϵ) is an equivalence. This implies that the selfmap f in question is also an equivalence on geometric fixed points. The Burnside congruences show that cofib(f)^e is n-torsion for some n congruent to 1 mod p, in particular for n coprime to p. But every compact spectrum with C_p-action which is n-torsion for n coprime to p vanishes in the quotient (Sp^ω)^BC_p/(Sp^BC_p)^ω, so it is induced.
§.§ Condition (H)
In this section we will prove an abstract version of Condition (H) for general C_p-Poincaré spaces with discrete fixed points and see how this implies Condition (H) from <ref>.
Essential for this is the theory of singular parts and equivariant fundamental classes, especially the gluing class, introduced in <cit.>.
Let us recall the relevant notions and constructions here.
[<cit.>]
For ξ ∈ Fun_G(X, Sp̲_G), there is a preferred map (X_!ξ)^hG → Σ(X^>1_!ε^*ξ)_hG.
It is defined as follows. Abbreviate Q ≔ cofib(X^>1_!ε^*ξ → X_!ξ). Since Q is concentrated over the free part of X, the Tate construction Q^tG vanishes and the norm map Q_hG → Q^hG is an equivalence. The preferred (“blue”) map is then the composite
(X_!ξ)^hG ⟶ (X_!ξ)^tG ⟵(≃) (X^>1_!ε^*ξ)^tG ⟶(∂) Σ(X^>1_!ε^*ξ)_hG,
where the middle equivalence uses Q^tG ≃ 0 and ∂ is the boundary map of the norm cofibre sequence. There is also a second (“red”) composite
(X_!ξ)^hG ⟶ Q^hG ⟵(≃) Q_hG ⟶(∂) Σ(X^>1_!ε^*ξ)_hG,
the last map being the boundary map of the cofibre sequence (X^>1_!ε^*ξ)_hG → (X_!ξ)_hG → Q_hG.
By <cit.>, the red and blue composites are equivalent up to a sign.
[Gluing classes, <cit.>]
Let X ∈ S_G be a G-Poincaré space with dualising spectrum D_X ∈ Sp_G(X, 𝕊) and collapse map c : 𝕊_G → X_! D_X.
The gluing class of X is defined to be the composite
𝕊_G^hG →^{c^hG} (X_! D_X)^hG ⟶ Σ(X^>1_! ε^* D_X)_hG,
where the last map is the blue composite from <ref>.
The linearised gluing class is obtained by postcomposing with the map induced by 𝕊 → ℤ:
Σ(X^>1_! ε^* D_X)_hG → Σ(X^>1_! ε^* D_X ⊗ ℤ)_hG.
We now specialise to the case G = C_p. Recall from <ref> that for X ∈ S_C_p we have X^>1 ≃ X^C_p, coming from the fact that for the group C_p the singular part of a C_p-space is given by its fixed points.
Let X be a C_p-Poincaré space with discrete fixed points such that each component of X^e has positive dimension.
Then the linearised gluing class
𝕊 → Σ(X^>1_! ε^* D_X ⊗ ℤ)_hC_p ≃ ⊕_y ∈ X^C_p Σ(D_X(y) ⊗ ℤ)_hC_p
maps a generator of π_0(𝕊) ≃ ℤ to a generator of π_0 Σ(D_X(y) ⊗ ℤ)_hC_p ≃ ℤ/p in each summand.
[Invertible C_p-spectra and group (co)homology]
For the proof of <ref>, recall the following facts about the homology of invertible C_p-spectra.
Recall that for an abelian group with G-action M, writing M[d] for the corresponding object in Mod_ℤ^BG concentrated in degree d, we have
π_* M[d]_hG ≃ H_*-d(G;M), π_* M[d]^hG ≃ H^d-*(G;M), π_* M[d]^tG ≃ Ĥ^d-*(G;M).
For E ∈ Pic(Sp_C_p) there are integers d^e and d^f such that E^e ⊗ ℤ ≃ ℤ[d^e] after forgetting the C_p-action and Φ^C_p E ⊗ ℤ ≃ ℤ[d^f]. We write ℤ for the trivial C_p-representation and ℤ^σ for the sign C_2-representation.
From <cit.> and elementary group homology computations we obtain <ref>.
In each case, if d^e +1 ≤ d^f, another group homological computation together with <ref> shows that the map
π_d^f(E^e ⊗ℤ)^tC_p→π_d^f-1(E^e ⊗ℤ)_hC_p
is an isomorphism between cyclic groups of order p.
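As a quick sanity check of this dictionary (an illustrative aside, not part of the argument above), the groups H_*(C_p; ℤ) for the trivial action can be read off from the standard 2-periodic resolution of ℤ over ℤ[C_p], whose differentials alternate between t − 1 and the norm N = 1 + t + ⋯ + t^p-1 and hence act on trivial coefficients as 0 and p. A minimal Python sketch:

```python
def differential(k, p):
    """Differential C_k -> C_(k-1) of Z tensored with the 2-periodic
    resolution of Z over Z[C_p]: (t - 1) acts as 0 (k odd), the norm
    acts as multiplication by p (k even, k >= 1)."""
    return 0 if k % 2 == 1 else p

def homology(n, p):
    """H_n(C_p; Z) with trivial action, i.e. ker(d_n)/im(d_(n+1)) in Z."""
    if n > 0 and differential(n, p) != 0:
        return "0"                      # d_n is injective, no kernel
    image = differential(n + 1, p)      # im(d_(n+1)) = image * Z
    return "Z" if image == 0 else f"Z/{image}"

for n in range(6):
    print(n, homology(n, 3))            # Z, Z/3, 0, Z/3, 0, Z/3
```

This reproduces π_* ℤ_hC_p (namely H_0 = ℤ, H_odd = ℤ/p, H_even>0 = 0); group cohomology and Tate cohomology follow the analogous 2-periodic patterns.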
Consider a map f : Y → X in S_G.
We claim that there is a commuting square of functors
X_! Φ ←^{c} X_! f_! f^* Φ
Φ X_! ←^{c} Φ X_! f_! f^* ,
where the horizontal maps are induced by the adjunction counit c : f_! f^* → 𝕀 and the vertical maps, X_! Φ ≃ Φ X_! on the left and X_! f_! f^* Φ ≃ Φ X_! f_! f^* on the right, are the Beck-Chevalley transformations, which are equivalences as the geometric fixed points functor Φ from <cit.> preserves parametrised colimits.
This follows immediately from <cit.>, again using that Φ preserves parametrised colimits. Importantly, the top map (and hence also the bottom map) is an equivalence by <cit.>.
Notice also that, by naturality of Beck-Chevalley transformations, if we have a decomposition Y = Y_1∐Y_2, the right vertical map in <ref> is compatible with this splitting.
By construction, the gluing class factors through the map
(X^>1_! ε^* D_X ⊗ ℤ)^tC_p → Σ(X^>1_! ε^* D_X ⊗ ℤ)_hC_p
which happens to be an isomorphism on π_0 using <ref> and that X^>1 is a discrete C_p-space (so that d^f =0) with trivial C_p-action.
It thus suffices to show that the gluing class maps to a generator in each summand of
π_0(X^>1_! ε^* D_X ⊗ ℤ)^tC_p ≃ ⊕_y ∈ X^C_p π_0 (D_X(y) ⊗ ℤ)^tC_p ≃ ⊕_X^C_p ℤ/p.
We have a commutative diagram with rows
𝕊 →^{c_X^C_p} X^>1_! D_X^C_p =^{𝕀} X^>1_! D_X^C_p ,
Φ^C_p 𝕊 →^{Φ^C_p c_X} Φ^C_p (X_! D_X) ←^{≃} Φ^C_p (X^>1_! ε^* D_X) → Φ^C_p (X^>1_! ε^* D_X ⊗ ℤ) ,
𝕊^tC_p →^{c_X^tC_p} (X_! D_X)^tC_p ←^{≃} (X^>1_! ε^* D_X)^tC_p → (X^>1_! ε^* D_X ⊗ ℤ)^tC_p ,
𝕊^hC_p →^{c_X^hC_p} (X_! D_X)^hC_p .
The vertical maps are the equivalence 𝕊 ≃ Φ^C_p 𝕊, the (violet) equivalences from the two copies of X^>1_! D_X^C_p to Φ^C_p (X_! D_X) and Φ^C_p (X^>1_! ε^* D_X), the natural maps Φ^C_p(−) → (−)^tC_p, and the maps (−)^hC_p → (−)^tC_p; in addition, a (blue) map 𝕊 → 𝕊^hC_p connects the first and last rows.
The rightmost part is induced by the ring map 𝕊 → ℤ.
The violet square is obtained from <ref> applied to f = ε : X^C_p ≃ X^>1 → X and D_X, using the equivalence Φ^C_p D_X ≃ D_X^C_p from <ref> and ε^C_p = 𝕀_X^C_p.
By definition, the blue route recovers the gluing class.
Following the upper route to the same object gives a class having the desired properties. Indeed, on π_0 the upper route reads
ℤ →^{Δ} ⊕_X^C_p ℤ = ⊕_X^C_p ℤ →^{≃} ⊕_X^C_p ℤ →^{≃} ⊕_X^C_p ℤ →^{proj} ⊕_X^C_p ℤ/p ,
where proj refers to the sum of the projection maps ℤ → ℤ/p and the first of the two isomorphisms is the (red) vertical one from the diagram above. Here, all maps in sight preserve the individual summands of ⊕_X^C_p ℤ: the only potentially nonobvious case is the red isomorphism, which is dealt with in <ref>.
Recall that Condition (H) from <ref> asks about surjectivity of the upper composite in the diagram
H^Γ_d(Γ, Γ^>1) →^{∂} H^Γ_d-1(Γ^>1) → H^F_d-1(*)
H^Γ/π_d(π\Γ, π\Γ^>1) →^{∂} H^Γ/π_d-1(π\Γ^>1) → H^Γ/π_d-1(*),
where the vertical comparison maps between the two rows are isomorphisms and where the right horizontal maps are induced from the projection onto the F-component in the splitting Γ^>1 = ∐_F' ∈ℳΓ/F'.
We may thus equivalently show surjectivity of the lower horizontal composite.
Denote X = π\Γ, which is C_p-Poincaré by <ref> since in this case, W_ΓF≅{e} and E_Γ is compact by <ref>.
Now by definition, the bottom composite in the diagram above is obtained by postcomposing the boundary map cofib(X^>1_! ε^* D_X ⊗ ℤ → X_! D_X ⊗ ℤ)_hC_p → Σ(X^>1_! ε^* D_X ⊗ ℤ)_hC_p with projection to a component of X^C_p. Thus, by <ref> and with the alternative description of the gluing class via the red route in <ref>, we obtain the required surjectivity.
|
http://arxiv.org/abs/2409.02752v1 | 20240904143014 | Design and Performance of the Upgraded Mid-InfraRed Spectrometer and Imager (MIRSI) on the NASA Infrared Telescope Facility | [
"Joseph L. Hora",
"David E. Trilling",
"Andy J. Lopez-Oquendo",
"Howard A. Smith",
"Michael Mommert",
"Nicholas Moskovitz",
"Chris Foster",
"Michael S. Connelley",
"Charles Lockhart",
"John T. Rayner",
"Schelte J. Bus",
"Darryl Watanabe",
"Lars Bergknut",
"Morgan Bonnet",
"Alan Tokunaga"
] | astro-ph.IM | [
"astro-ph.IM",
"astro-ph.EP"
] |
0000-0002-5599-4650]Joseph L. Hora
Center for Astrophysics | Harvard & Smithsonian,
60 Garden Street,
Cambridge, MA 02138, USA
0000-0003-4580-3790]David E. Trilling
Department of Astronomy and Planetary Science, Northern Arizona University, Flagstaff, AZ 86011, USA
0000-0002-2601-6954]Andy J. López-Oquendo
Department of Astronomy and Planetary Science, Northern Arizona University, Flagstaff, AZ 86011, USA
Center for Astrophysics | Harvard & Smithsonian,
60 Garden Street,
Cambridge, MA 02138, USA
0000-0002-8132-778X]Michael Mommert
Stuttgart University of Applied Sciences, Stuttgart, Germany
Department of Astronomy and Planetary Science, Northern Arizona University, Flagstaff, AZ 86011, USA
Lowell Observatory, 1400 West Mars Hill Road, Flagstaff, AZ 86001, USA
0000-0001-6765-6336]Nicholas Moskovitz
Lowell Observatory, 1400 West Mars Hill Road, Flagstaff, AZ 86001, USA
Infrared Laboratories, Inc., 1808 E. 17th St, Tucson, Arizona, 85719, USA
0000-0002-8293-1428]Michael S. Connelley
University of Hawai`i, 640 A'ohōkū Place, Hilo, HI 96720, USA
University of Hawai`i, 640 A'ohōkū Place, Hilo, HI 96720, USA
0000-0002-3165-159X]John T. Rayner
University of Hawai`i, 2680 Woodlawn Dr., Honolulu, HI 96822, USA
0000-0003-4191-6536]Schelte J. Bus
University of Hawai`i, 640 A'ohōkū Place, Hilo, HI 96720, USA
University of Hawai`i, 640 A'ohōkū Place, Hilo, HI 96720, USA
University of Hawai`i, 640 A'ohōkū Place, Hilo, HI 96720, USA
University of Hawai`i, 640 A'ohōkū Place, Hilo, HI 96720, USA
0000-0001-8136-9704]Alan Tokunaga
University of Hawai`i, 2680 Woodlawn Dr., Honolulu, HI 96822, USA
§ ABSTRACT
We describe the new design and current performance of the Mid-InfraRed Spectrometer and Imager (MIRSI) on the NASA Infrared Telescope Facility (IRTF). The system has been converted from a liquid nitrogen/liquid helium cryogen system to one that uses a closed-cycle cooler, which allows it to be kept on the telescope at operating temperature and available for observing on short notice, requiring less effort by the telescope operators and day crew to maintain operating temperature. Several other enhancements have been completed, including new detector readout electronics, an IRTF-style standard instrument user interface, new stepper motor driver electronics, and an optical camera that views the same field as the mid-IR instrument using a cold dichroic mirror, allowing for guiding and/or simultaneous optical imaging. The instrument performance is presented, both with an engineering-grade array used from 2021-2023, and a science-grade array installed in the fall of 2023. Some sample astronomical results are also shown. The upgraded MIRSI is a facility instrument at the IRTF available to all users.
§ INTRODUCTION
The Mid-Infrared Spectrometer and Imager (MIRSI) was developed at Boston University by a team led by Lynne Deutsch <cit.> and was used from 2002 – 2011 on the NASA Infrared Telescope Facility (IRTF). MIRSI was used to make observations in the 2 – 25 μm wavelength range of asteroids, planets, and comets <cit.>, as well as observations for non-solar system science programs such as photodissociation regions <cit.>, eclipsing binary stars <cit.>, and high mass protostars <cit.>. MIRSI was scheduled for all or part of 425 separate nights during this period, peaking at 46 nights in the 2005A semester. Over 65 publications were based in part on MIRSI observations[For a partial list of publications see <https://cfa.harvard.edu/mirsi/>].
§.§ Instrument Operation Issues
The nominal orientation of the original MIRSI dewar on the telescope was with the window pointed upwards toward the incoming telescope beam and the cryogen cans on their side, extending horizontally away from the optical axis. This allowed for filling the cryogen without removing the instrument from its mount on the telescope, simplifying and speeding up the process. However, this orientation put more stress on the G10 supports that held the cans and radiation shields in place, and eventually these partially failed, allowing the outer shield to make contact with the inner LHe shield and reduced the dewar hold time. In 2010, the system was disassembled and the supports replaced with thicker structures which could properly support the weight of the shields.
Near the end of its original period of use, the instrument became more difficult to maintain due to its aging custom electronics and computer interface and control system, and more expensive to operate because of the high cost of supplying liquid helium (LHe) on Maunakea. The latter factor led to the instrument being used infrequently in discrete blocks of time to minimize the number of system cooldowns and LHe usage. These runs had to be planned well in advance to arrange for adequate LHe, and several times planned observing runs had to be cancelled due to delays in helium delivery by the supplier. During the last observing run, the detector array was damaged and rendered non-functional due to operating at higher than nominal temperatures. The LHe cryogen had boiled off faster than expected, and the original electronics did not have adequate safeguards to automatically turn off the array power if the operating temperature limits were exceeded. In addition, one of the readout electronics boards had failed, and replacement boards were not readily available.
§.§ MIRSI Upgrade Science Goals
In 2014 we proposed to the NASA Near-Earth Object Observations (NEOO) program to determine the diameters and albedos of 750 near-earth objects (NEOs) with observations from the IRTF with MIRSI. Most NEOs are discovered in ground-based optical surveys, and their diameters are very uncertain due to their unknown albedos which can vary from 0.02 to 0.42 or higher, depending on the asteroid class <cit.>. Previous thermal IR observations with Spitzer/IRAC <cit.> and the NEOWISE mission <cit.> had proven to be very effective in quickly characterizing NEOs. However, Spitzer had moved well away from the earth at that time, and NEOWISE's survey pattern and sensitivity often did not allow it to detect small newly-discovered NEOs. Now that Spitzer and NEOWISE are no longer in operation, the IRTF is even more essential to fill this need.
It was clear at that point that MIRSI needed major work in order to become operational again, and that it could not operate in the same way as before with liquid cryogens. For the proposed NEO program, we would need to observe for a few hours every week or two, and wanted to focus on recently discovered NEOs that were still close enough to the earth to detect in the mid-IR. Some of these observations would require target of opportunity-style scheduling where we would need to observe high-priority objects within a matter of hours of their discovery. Clearly this was not economically or logistically feasible with liquid cryogens, so MIRSI would have to be converted to a closed-cycle cooler system. In addition, simultaneous optical measurements would be important in helping to constrain the albedo and diameter determination, and also allow for light curve measurements and guiding on the source which might not be visible in individual mid-IR images.
We describe in this paper the design and implementation of the modifications performed to the MIRSI system to upgrade the cryogenic system to a closed-cycle cooler, and its performance as measured on the IRTF using an engineering-grade array from 2021-2023. We show some sample science images from the new system to demonstrate some of MIRSI's capabilities. In a companion paper <cit.>, we present results from the first observations of NEOs with the upgraded system.
§ MIRSI UPGRADES
§.§ MIRSI description
MIRSI was designed to observe at the IRTF at high spatial resolution with background-limited sensitivity. The system was optimized to acquire images and low-resolution grism spectra within the 8–14 μm and 17–26 μm atmospheric windows. It also has K and M filters available in the 2–5 μm range. A full description of the original MIRSI system can be found in <cit.> and <cit.>. The optical layout and inner dewar with the optics components, mechanisms, and array mount are shown in Figure <ref>. This section is mostly unchanged in the new system, with the exception of a new array mount and connection to a cold finger from the second stage of the cooler, and replacing the filter position sensor mechanical switches with Hall effect sensors. The all-reflective design using gold-coated diamond-turned aluminum mirrors provides high throughput and excellent optical quality over the full wavelength range.
A major change in the configuration of MIRSI is that a dichroic fold mirror has been added to reflect the IR light into the camera optics, allowing the light shortward of 1.5 μm to pass through to an optical camera (described in <ref>). The CAD renderings in Figures <ref> and <ref> show the system in its nominal configuration with the telescope at zenith, with the optical plate horizontal and the optical components mounted below. The telescope beam enters through a ZnSe window which is mounted on an extension to the main optics box (see Figure <ref>). After entering the dewar, the beam encounters a dichroic mirror mounted at a 45° angle which reflects the IR light into MIRSI and passes the optical light shortward of ∼1 μm into the optical camera through the exit window marked 3 in the figure (see also the dichroic transmission/reflection curves in Appendix A). The exit window transmits in the optical and is opaque in the mid-IR, so it will emit thermal radiation into the region under the dichroic mirror. The dichroic does not transmit mid-infrared light, however, and it is cooled to the first stage radiation shield temperature (∼47 K), therefore the exit window does not contribute significantly to the background that the MIRSI detector sees, which is dominated by thermal emission from the telescope and sky.
The original MIRSI window was made from KRS-5, which has transmission over the 0.6 - 30 range. However, after some time of use at the IRTF, the window degraded and “clouded” over, having a frosted glass appearance. A spare window was substituted in, which in turn also degraded in the same way after a period of use. We attempted to have the
windows repolished, but they still had the clouded appearance, indicating it was not a surface effect. We decided to switch to a ZnSe window, which is a more durable material and has good transmission in the optical and the 8–13 μm window, although it cuts off near 20 μm. This would enable MIRSI to do most of the science planned, and be more reliable until we could obtain a solution with KRS-5 that would not degrade.
In MIRSI, the telescope focus is at the aperture wheel positioned near the entrance, where an open aperture or slits can be selected. A fold mirror (M1) directs the light into the collimating mirror (M2) and then through the filters and pupil stop (see Figure <ref>). The filter wheels and pupil stop are enclosed in a box that, along with other baffles, prevents stray light and out-of-band radiation from falling onto the camera mirrors M3 and M4 that reimage the focal plane onto the detector. The available filters are listed in Table <ref>, along with the measured sensitivity and instrument parameters used for each band. The transmission curves of the optical elements are shown in Appendix <ref>, Figures <ref> – <ref>. The use of cryogenic stepper motors that drive the motion via a gear on the circumference of the wheels eliminates the need for mechanical feedthroughs and thermal isolation of the motor shafts and ensures a reliable system which minimizes light leaks.
MIRSI uses a Si:As blocked impurity band (BIB) detector array (320×240 pixels) developed by Raytheon <cit.>. The array is connected to a CMOS readout integrated circuit through indium bump bonds and mounted to a leadless chip carrier. MIRSI is configured to read out the detector in 16-channel mode, which corresponds to sets of 20 adjacent pixel columns on the array for each readout channel.
§.§ Conversion to a cryocooler-based system
The upgrade concept relied on the relatively simple design of the MIRSI dewar, where all of the critical components of the system are attached to the optical mounting plate, and contained within the main rectangular box of the dewar. This was easily separated from the cylindrical section containing the cryogen reservoirs and could be attached to a new upper section. The new upper section contains just the cryocooler cold head and a mounting plate that is supported by low thermal conductivity connections to the outer dewar shell. All of the optics, filters, motors, and detector array use their existing mounts and electrical connections to the outside of the dewar. Cryocooler vibration is a concern, but modern systems have lower vibration levels than previous models, and with careful isolation between the cold head and the instrument optics and detector, systems of this type have been demonstrated to have sufficiently low vibration so as to not degrade image quality <cit.>. We have subsequently verified the low vibration by achieving the same image quality with the upgraded system compared to the liquid cryogen-cooled system (see Section 3 below). The cryostat upgrade was performed by Infrared Laboratories[<https://www.irlabs.com/>] of Tucson, who built the original MIRSI dewar. The cold head used is a Sumitomo model RDK-415D2 two-stage Gifford-McMahon Refrigerator, with a model F-70L water-cooled helium compressor.
The cold head, which is black-anodized on the outside as depicted in Figures <ref> – <ref>, is mounted on a new upper section of the dewar that replaced the cryogen cans. Figure <ref> shows the thermal design, with the figure on the left showing the parts of the dewar connected to the first stage of the cryocooler (the outer radiation shields), and the figure on the right showing the connection from the second stage to the detector and internal optics and radiation shields. Both of these stages are thermally connected via braided copper straps that act to mechanically isolate the optics and detector from the cryocooler head to minimize vibrations. Figure <ref> shows a view into the new cold head section, showing the first stage connections to the copper straps. Figure <ref> shows the full instrument mounted on the IRTF, with all of the compressor lines and electronics cabling attached.
To achieve the required array operating temperature, an isolated thermal connection was established that connects the second stage to the array mount (also via a braided copper strap) independent of the connection to the optics and inner radiation shields. The array is also thermally and electrically isolated from the optics plate using a G10 spacer. In order to electrically isolate the array and the array stage from the cryocooler and dewar case, a thin diamond sheet pressed between copper plates is used for high thermal conductivity. The stage can reach temperatures below 5 K, and can be thermally stabilized to within a few mK during operation using a heater resistor on the stage and an external Lakeshore controller to the nominal 6 K operating temperature.
The system cooldown time is approximately 7 hours from room temperature until the first stage and outer shield reach equilibrium temperature (∼ 47K for the outer shield), and a total of ∼13.5 hours for the detector stage and components on the optical plate to reach equilibrium (approximately 5K and 8K, respectively) with the detector powered off.
The use of the cryocooler as opposed to liquid cryogens has simplified operation at the telescope, but occasional maintenance is still required. Specifically, parts in the cold head will degrade over time and eventually it needs to be replaced. At the IRTF, this is done every two years, and has been done once since the upgraded MIRSI was delivered from IR Labs.
§.§ Electronics and Mechanism Upgrades
The MIRSI detector readout electronics were replaced by a system based on the same array controller and readout electronics used by other IRTF instruments. This solved the intermittent electronics issues present in the original MIRSI electronics, and makes it easier to maintain and repair when problems develop on the summit. The new electronics will be described in a separate paper.
The cryogenic stepper motor drivers were also replaced, including using Hall effect sensors instead of microswitches for improved reliability of wheel position sensing and accurate filter selection. The internal dewar wiring was also replaced due to the fragile nature of the original system.
§.§ MIRSI Optical Camera (MOC)
MIRSI is paired with a copy of the MIT Optical Rapid Imaging System <cit.>, a fast readout optical camera used on the SpeX instrument at the IRTF. The new optical system, called the MIRSI Optical Camera (MOC), uses an Andor iXon 897 EM-CCD camera with a 512×512 detector and 16 micron pixels. A USB cable connects the detector electronics to a computer located on the bottom of the telescope that runs the software for controlling the camera. The detector is thermoelectrically cooled to −60°C to minimize dark current. The field of view is 1 arcminute, and the pixel scale is 0.12″/pixel. The camera optics consist of a field lens doublet, making a 5 mm diameter pupil image at a pupil stop, and a camera lens doublet creating an f/9.4 telecentric beam onto the detector. The MOC has a filter wheel with SDSS r', i', and z' filters, plus ND1, ND2, ND3, and ND4 neutral density filters to enable observing and guiding on optically bright targets. The r', i', z' magnitude zero points are 24.4, 24.1, and 23.5, respectively.
The optical beam passes through the dichroic which will contribute to some astigmatism in the MOC image, but the effect is minimal because of the slow telescope beam. The MOC optical design provides a spot size with 50% encircled energy of 0.12″, and the typical optical seeing is in the range of 0.6″–1″, so the seeing dominates the image quality. The MOC and MIRSI mid-IR arrays have been placed as close as possible to being at the same focus position. Because the effects of atmospheric seeing are greater in the optical, we typically focus using the mid-IR image to achieve the best image quality.
The MOC has a user interface that allows one to either take single images or a series of consecutive images in guide mode. The user can set parameters such as the integration time, size of the guide window, guiding gain, and mode. The guider images can be saved if desired. A full description is given in the MOC user manual[<https://irtfweb.ifa.hawaii.edu/~moc/user/>]. When beamswitching to keep both A and B beam on the IR array, we typically guide in both beams as well. When guiding, the MOC software measures the source position relative to the center of the guiding box in the optical image, and sends commands to the telescope control system to center the source in the guide box at the frame rate of the guider images. After performing a beam switch or offset, the guider software calculates where the guide box should be on the optical array in the new position, and guides relative to that location (the “commanded offset”). The commanded offsets are recorded in the IR image headers, so when reducing the data and constructing mosaics, the individual frames can be registered using the commanded offsets to high accuracy, even when the source is not visible in the IR images (see <ref>).
The MOC enhances MIRSI’s capabilities by allowing the observer to place objects accurately on the IR field, and to guide on the optical emission from the science target or other nearby object in the field. This is especially important for spectroscopy since guiding errors result in the object moving off the slit, and without the capability to guide, multiple re-acquisition and/or offsets from nearby sources would be necessary. With the guide camera mounted to the same instrument, guiding errors due to different flexure of the science instrument and guide camera are minimized. The optical guiding is also important for observing faint targets which are not visible in individual MIRSI frames, as well as keeping a source centered on the slit in spectroscopic mode.
In addition to guiding, the MOC allows observers to perform simultaneous optical/IR photometry, a capability critical for thermal modeling of NEOs and other asteroids which requires near-simultaneous accurate optical and thermal fluxes to minimize uncertainties due to rotation and possible errors in the cataloged “absolute magnitude" (H) values (the magnitude of an asteroid at zero phase angle and at unit heliocentric and geocentric distances[<https://cneos.jpl.nasa.gov/glossary/>]).
§.§ Observing and Data Reduction Techniques
In ground-based mid-IR observing, one typically employs secondary chopping and telescope beamswitching to reduce sky noise and subtract the thermal background due to the sky and telescope. The minimum chop frequency required is in the range of 0.5 – 1 Hz <cit.>. The chopping secondary is no longer available at the IRTF; however, the observatory is working on a new secondary mount that will have chopping capability. Therefore, at the moment only nod and user offsets are possible for beamswitching. For point sources in uncrowded fields such as the NEO observations, we keep both beams on the array to maximize sensitivity and use both the A and B beam on-source data. The array is aligned with its long axis pointed E-W, and a typical nod throw is 6″ W and 15″ N. We also offset the telescope between nod pairs when taking a set of frames, so that we can correct for bad pixels and average out any residual pixel gain differences to produce the mosaics.
Sample observations of a ∼2 Jy point source (an NEO) are shown in Figure <ref>. The upper left panel shows a raw image in the A beam, which is the coadd of 300 readouts using 0.0123 second frame time. During these observations the leftmost readout channel was non-operational, so it appears dark in this image. Each of the 16 channels reads 20 pixel columns of the array, which one can see in the image have slightly different offsets. The E-W component of the nod throw ensures that the source falls on different output channels of the readout, so that any bright source artifacts that occur along the array columns are not present in both A and B beams.
The mean ADU level is ∼49,800 per frame. Some vignetting is visible in the upper part of the frame, highest in the upper right corner.
The upper right panel of Figure <ref> shows the result of subtracting consecutive A and B beam frames. The source is visible as a positive (white) source near the center, and a negative (dark) source above and to the left. Some bad pixel groups are visible at various locations around the image. The peak level of the source is ∼950 ADU in each frame. Offsets of ∼200 ADU are visible between the readout channels, and some horizontal stripes are seen near the top of the frame. The standard deviation of the pixels in the background is ∼300 ADU. In these observations the flux conversion factor was 1.10E-4 Jy/ADU.
The lower right panel of Figure <ref> shows the same image after subtracting column-wise and row-wise medians from the frame, which reduces the offsets between readout columns and the horizontal striping.
The lower left panel shows the final mosaic made from 27 beamswitch pairs (54 frames total), cycling through 10 unique offset positions and using MOC guiding. The frames were aligned using the commanded offsets and beamswitch
vector. Both the positive and negative images of the nod pair are used when making the mosaic: each difference image is multiplied by -1 to make a second image with the negative source (from the B beam position) positive. Those images are then shifted to align them with the A-beam source positions, and the images are averaged with sigma clipping. This results in a final mosaic that has a central positive source that combines all beams, and negative residuals showing up both above left and below right of the source which are the combination of blank sky and the negative source from half of the frames. Those residuals are ignored and the photometry is performed on the central positive source.
We have a basic reduction pipeline for the MIRSI IR data that consists of three python programs. The first program reads in the raw frames, performs the A-B subtraction, column and row median corrections, and writes the difference images similar to the lower right panel of Figure <ref>. The second program reads in these difference frames and constructs a mosaic such as in the lower left panel of Figure <ref>. There are several options for how to perform the relative shifts between frames to construct the mosaics: one can either use the commanded offsets if guiding with MOC, determine the offsets by cross-correlating on the individual frames (requires that the source is bright enough to be visible in each frame), or have the user interactively choose the location in each frame to center on. The third program performs aperture photometry on the frames or mosaics. The pipeline is included with the supplementary materials of this paper, and the repository for the current version of the pipeline is on github[<https://github.com/jhora99/MIRSI>].
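As an illustration of the first two stages (a minimal numpy sketch, not the actual pipeline code from the repository; the function names and clipping threshold are illustrative, and the wrap-around shifting stands in for proper padding):

```python
import numpy as np

def difference_frame(frame_a, frame_b):
    """A-B beam subtraction followed by column- and row-wise median
    removal, suppressing readout-channel offsets and horizontal striping."""
    diff = frame_a.astype(float) - frame_b.astype(float)
    diff -= np.median(diff, axis=0, keepdims=True)   # per-column offsets
    diff -= np.median(diff, axis=1, keepdims=True)   # residual row striping
    return diff

def build_mosaic(diffs, offsets, nod, nsigma=3.0):
    """Shift-and-stack difference frames using the commanded (dx, dy) pixel
    offsets; each frame is reused with flipped sign, shifted by the nod
    vector, so both beams contribute to the central positive source."""
    aligned = []
    for d, (dx, dy) in zip(diffs, offsets):
        aligned.append(np.roll(d, (round(dy), round(dx)), axis=(0, 1)))
        aligned.append(np.roll(-d, (round(dy + nod[1]), round(dx + nod[0])),
                               axis=(0, 1)))
    stack = np.array(aligned)
    med, sig = np.median(stack, axis=0), np.std(stack, axis=0)
    keep = np.abs(stack - med) < nsigma * sig        # sigma clipping
    return np.nanmean(np.where(keep, stack, np.nan), axis=0)
```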
§ PERFORMANCE WITH ENGINEERING GRADE ARRAY: 2021 - 2023
We had available to us an "engineering grade" array from the original MIRSI development that we first installed in the upgraded MIRSI for testing and initial commissioning of the instrument. This array has some cosmetic and uniformity issues that caused it to be classified as engineering grade, but otherwise has a similar sensitivity to the science-grade detector.
The upgraded MIRSI was made available to general observers on the IRTF for the 2022A-2023A semesters, where it was scheduled for use for parts of 25-28 nights per semester. Unfortunately, MIRSI's current sensitivity is significantly worse than that of the original instrument, approximately a factor of 10 lower in all bands, as detailed below. However, it was still possible to execute several science programs, including observations of planets and asteroids. Some examples are given in the following sections.
§.§ IR Sensitivity
The sensitivity in each filter measured with the engineering-grade array is shown in Table <ref>. The values in the table were calculated based on observations of α Tau obtained on the nights of 2021/10/01 and 2021/10/02 at an airmass of ∼1.02, so no airmass correction was done. The ITIME is the on-chip integration time, and the 1σ sensitivity in 10 minutes is based on the actual elapsed time spent observing the star, including overheads for telescope nodding and array readouts. The measurement in each filter used 20 beamswitched/dithered frames. The beamswitch and dithering offsets were small enough to keep the star on the array at all times, to maximize the on-source time and sensitivity of the observations. The point source sensitivity numbers are based on the per pixel noise and the equivalent noise area <cit.> for each wavelength, assuming the diffraction-limited PSF size and 0.5″ seeing.
§.§ Guiding Performance with MOC
The performance and operation of the MOC is similar to that of the MORIS system. We have demonstrated that the MOC can successfully guide on point sources as faint as V-band magnitude (Vega) ∼17 with exposure times of a few seconds. Typical dither dwell times for the IR observations are on the order of 20 seconds, so this allows for several corrections at each position for faint sources. For brighter sources, one can use integration times of 1 second or less, although the most common pointing error is a slow drift that would move the telescope by a fraction of an arcsec in a minute or more of time.
Figure <ref> demonstrates how accurately images can be aligned by guiding and blindly stacking the frames. The top row shows a single image of α Tau and a mosaic in which the frames have been shifted and coadded according to the telescope offsets. The FWHMs of the two images are nearly identical. One does not normally need to guide on bright calibration stars, since the images can be aligned using the IR frames themselves, but this mode is especially important when the target of interest is not visible in the individual IR frames, such as the NEO project discussed in <ref> below.
In the bottom row, a similar set is shown for the star HR1457 obtained under better seeing conditions. The mosaic calculated from aligning the individual frames based on the source centroid is almost identical to the mosaic using the commanded offsets when guiding with MOC.
Table: MIRSI Sensitivity (Engineering Array, October 2021)

Filter   α Tau Flux (Jy)   Assumed Jy/ADU   Coadds   ITIME (sec)   1σ, 10 min, per pixel (mJy)   1σ, 10 min, point source (mJy)
2.2          8146          0.63925          100      0.005           42.1                           214
4.9          2081          0.19871          100      0.005           20.6                           107
7.7           942          0.20587           50      0.007           89.2                           556
8.7           763          0.1381            50      0.005           33.2                           222
9.8           647          0.09425           50      0.007           31.0                           224
10.57         555          0.02542          100      0.005            4.6                            35
11.7          481          0.03363           50      0.015           10.1                            84
12.28         438          0.26359          100      0.01            28.2                           242
12.5          424          0.0352            50      0.015           16.2                           141
Q0            254          0.87503          500      0.06           473.0                          5310
Q1            238          0.64478          500      0.06           376.0                          4385
Q2            229          0.88396          500      0.06           558.9                          6637
20.7          180          0.77818          200      0.005           98.4                          1331
§.§ Examples of Astronomical Results
§.§.§ Planets
We show sample images taken with the MIRSI engineering grade array in Figures <ref> – <ref>. These were taken with the N-band filter and are mosaics composed of several different dither positions. In the case of Jupiter, the object was moved off the array with beamswitch commands to obtain the sky reference images. For Saturn, the object was moved to a different position on the array in order to minimize the time needed to reach the signal-to-noise necessary to detect the ring emission.
§.§.§ NEOs
<cit.> began a program of NEO observations with the engineering array version of MIRSI in 2021, following the goals of the MIRSI upgrade proposal described in <ref>. In our simultaneous <cit.> work, we present a detailed description of the MIRSI-NEO program along with initial results from the 2022-2024 survey. See Figure <ref> for an example of an NEO observation. We also utilized MIRSI as part of the International Asteroid Warning Network rapid response characterization campaign focused on the newly discovered NEO 2023 DZ2 <cit.> to estimate the object's diameter and albedo. This activity demonstrated the utility of MIRSI at the IRTF to characterize a newly discovered asteroid on short notice to assess its potential impact threat.
§.§.§ Star Formation Regions
Figure <ref> shows a color image of the BN/KL region in Orion at 8.7, 11.7, and 12.5 μm, obtained on 2021/10/01 using the engineering-grade array. The mosaic at each wavelength combined 20 dithered/beamswitched frames taken with ITIME=0.015 s and 200 coadds. The frames were aligned according to the peak emission point in the image. The color scaling is set to enhance the lower-level emission, so the bright BN object is saturated in all colors and appears white in the figure.
§.§ Science-Grade Array
MIRSI was removed from the telescope in the fall of 2023 to switch detector arrays. A science-grade array that had been used in a now-decommissioned instrument was loaned to the IRTF for use in MIRSI. The new array has been used since the start of the 2024A semester. The array has fewer bad pixels and has better uniformity of pixel response (see Figure <ref>), but the overall sensitivity is not significantly changed. This indicates that some other change in the system has taken place compared to the original MIRSI system, for example a degradation of optical component(s), an
alignment issue, or a non-optimal detector readout scheme. These possibilities are currently under investigation.
§ SUMMARY AND FUTURE WORK
MIRSI's cryogenic system has been upgraded and the instrument is back in operation at the IRTF, available to observers since the spring 2022 semester. The instrument is mounted on the telescope Multiple Instrument Mount along with the other facility instruments and can be swapped in quickly as needed. The cryocooler system keeps MIRSI at operating temperature with much reduced operational cost and complexity compared to the liquid cryogen-based system.
The MIRSI instrument control program has been replaced by an IRTF-standard graphical user interface that will seem very familiar to users of other IRTF facility instruments. The three mechanisms that control the aperture wheel and the two filter wheels are operated with simple drop-down menus. The frame time, number of cycles and coadds, and beam pattern are controlled from the main menu. Sequences of observations can be programmed using macros, which can also configure the instrument and set up all of the observing parameters.
The addition of the MOC has made possible many programs where IR sources are not available to guide or align frames. This has been used extensively in the NEO observing program, but any application where optical guide objects are available but the IR source is too faint or extended to guide on will benefit from this capability. The relative flexure between MOC and MIRSI is sufficiently low that one can use the commanded offset positions to align frames with little effect on the image quality of the IR mosaics.
As previously mentioned, a chopping secondary is planned for the IRTF, which will improve sensitivity by reducing the noise from fluctuations in the sky background. Currently we are trying to determine why the current MIRSI system is less sensitive than the original instrument <cit.>. Some of the lower sensitivity
is due to higher noise because we are not using a chopping secondary,
but we estimate this is a factor of ∼2 effect and does not account
for all of the reduced sensitivity. Updates to MIRSI status and sensitivity are posted to the IRTF web site[<https://irtfweb.ifa.hawaii.edu/>] before proposal submission deadlines each semester.
We acknowledge the significant cultural role and reverence
that the summit of Maunakea has within the indigenous Hawaiian
community and that we are most fortunate to have the opportunity
to conduct observations from this mountain. The Infrared Telescope Facility is operated by the University of Hawaii under contract 80HQTR24DA010 with the National Aeronautics and Space Administration.
This work was partially funded by a grant from the NASA Solar System Observations/NEOO program (NNX15AF81G). The original MIRSI instrument was funded by NSF grant 9876656 and support from Boston University.
Software: Astropy <cit.>, Matplotlib <cit.>
§ TRANSMISSION OF OPTICAL ELEMENTS
Plots of the transmission and reflection of the various optical elements are shown in Figures <ref> – <ref>. The dewar window and dichroic scans were provided by the manufacturers. The MIRSI filter transmission curves were also measured at the IRTF at room temperature using a Thermo Nicolet FTIR spectrometer.
The mirrors in the MIRSI optics are all gold-coated: the flat mirror is fused silica, and the elements with power (M2–M4) are diamond-turned aluminum. Each of these elements has a reflectivity of ≥98% over MIRSI's operating range.
The transmission and reflectance data plotted in these figures are available as ASCII-format files to download from the electronic version of this paper.
§ MIRSI GRAPHICAL USER INTERFACE (GUI)
A screenshot of the MIRSI GUI is shown in Figure <ref>. It is similar to the other IRTF instrument control programs, which makes it easier for experienced IRTF users to use and for the telescope staff to maintain. The user can change the camera parameters of Itime (on-chip integration time) and Coadd (number of frames to add together before the file is saved). The Itime is set so that the background + object flux will be lower than the saturation level. The user can change filters and aperture wheel position by clicking on the icons in the lower left part of the window. The user can enter information about the object and observers, and specify the file name for the output images.
|
http://arxiv.org/abs/2409.02165v1 | 20240903180001 | First- and second-order quantum phase transitions in the long-range unfrustrated antiferromagnetic Ising chain | [
"Víctor Herráiz-López",
"Sebastián Roca-Jerat",
"Manuel Gallego",
"Ramón Ferrández",
"Jesús Carrete",
"David Zueco",
"Juan Román-Roche"
] | quant-ph | [
"quant-ph",
"cond-mat.stat-mech"
] |
These two authors contributed equally
Departamento de Física Aplicada, Universidad de Zaragoza, Zaragoza 50009, Spain
These two authors contributed equally
Instituto de Nanociencia y Materiales de Aragón (INMA), CSIC-Universidad de Zaragoza, Zaragoza 50009, Spain
Departamento de Física de la Materia Condensada, Universidad de Zaragoza, Zaragoza 50009, Spain
Instituto de Nanociencia y Materiales de Aragón (INMA), CSIC-Universidad de Zaragoza, Zaragoza 50009, Spain
Departamento de Física Teórica, Universidad de Zaragoza, Zaragoza 50009, Spain
Instituto de Nanociencia y Materiales de Aragón (INMA), CSIC-Universidad de Zaragoza, Zaragoza 50009, Spain
Departamento de Física de la Materia Condensada, Universidad de Zaragoza, Zaragoza 50009, Spain
Instituto de Nanociencia y Materiales de Aragón (INMA), CSIC-Universidad de Zaragoza, Zaragoza 50009, Spain
Departamento de Física de la Materia Condensada, Universidad de Zaragoza, Zaragoza 50009, Spain
Instituto de Nanociencia y Materiales de Aragón (INMA), CSIC-Universidad de Zaragoza, Zaragoza 50009, Spain
Departamento de Física de la Materia Condensada, Universidad de Zaragoza, Zaragoza 50009, Spain
Instituto de Nanociencia y Materiales de Aragón (INMA), CSIC-Universidad de Zaragoza, Zaragoza 50009, Spain
Departamento de Física de la Materia Condensada, Universidad de Zaragoza, Zaragoza 50009, Spain
§ ABSTRACT
We study the ground-state phase diagram of an unfrustrated antiferromagnetic Ising chain with longitudinal and transverse fields in the full range of interactions: from all-to-all to nearest-neighbors. First, we solve the model analytically in the strong long-range regime, confirming in the process that a mean-field treatment is exact for this model. We compute the order parameter and the correlations and show that the model exhibits a tricritical point where the phase transition changes from first to second order. This is in contrast with the nearest-neighbor limit where the phase transition is known to be second order. To understand how the order of the phase transition changes from one limit to the other, we tackle the analytically-intractable interaction ranges numerically, using a variational quantum Monte Carlo method with a neural-network-based ansatz, the visual transformer. We show how the first-order phase transition shrinks with decreasing interaction range and establish approximate boundaries in the interaction range for which the first-order phase transition is present. Finally, we establish that the key ingredient to stabilize a first-order phase transition and a tricritical point is the presence of ferromagnetic interactions between spins of the same sublattice on top of antiferromagnetic interactions between spins of different sublattices. Tunable-range unfrustrated antiferromagnetic interactions are just one way to implement such staggered interactions.
First- and second-order quantum phase transitions in the long-range unfrustrated antiferromagnetic Ising chain
Juan Román-Roche
September 9, 2024
==============================================================================================================
§ INTRODUCTION
Recent advances in cold-atom simulators have led to renewed interest in systems with long-range interactions <cit.>. Long-range interactions decay as r^-α, with r the distance between interacting degrees of freedom <cit.>. Despite being ubiquitous in nature, e.g. dipolar or gravitational interactions, these systems have been less studied than their nearest-neighbor counterparts because long-range interactions complicate numerical and analytical treatments. Nevertheless, some results exist that showcase remarkable differences in behavior between long- and short-range models. Some examples are the spontaneous breaking of continuous symmetries in one dimension <cit.>, the existence of an area law of entanglement <cit.>, the existence of Majorana modes <cit.>, the spreading of correlations <cit.> and topological properties <cit.>.
Notably, long-range interactions have been shown to induce first-order phase transitions in classical dipolar gases <cit.> and (quantum) Bose-Hubbard <cit.> and XX <cit.> models.
Quantum phase transitions constitute one of the fundamental phenomena of condensed matter physics. They manifest as nonanaliticities in the ground state energy as some critical parameter is varied <cit.>. Leaving aside the more exotic topological phase transitions <cit.>, quantum phase transitions, like thermal phase transitions, can be first or second order, depending on whether the order parameter or its derivatives are discontinuous at the critical point.
Just as the Ising model is the paradigmatic model in classical statistical mechanics, its quantum analogue, the (nearest-neighbour) transverse field Ising chain (TFIC), is the paradigmatic example of a solvable model featuring a quantum phase transition <cit.>. The TFIC is also a starting point to devise more sophisticated models.
To elucidate the effect of long-range interactions on the order of phase transitions, it is convenient to construct a minimal model. The trivial long-range generalization of the TFIC is known to feature only a second-order phase transition <cit.>. In fact, there is no phase transition at all for antiferromagnetic interactions at extremely long ranges. At the same time, it has been shown that the Ising model with antiferromagnetic nearest-neighbor interactions and ferromagnetic next-nearest-neighbor interactions presents a tricritical point (TP) where the critical line changes from first to second order <cit.>.
To shed more light on this issue, we have generalized the antiferromagnetic Ising chain to feature tunable-range unfrustrated antiferromagnetic interactions and studied its ground-state phase diagram analytically in the strong long-range regime using the technique described in Ref. . We find that the phase diagrams of the nearest-neighbor and strong long-range models differ significantly. The nearest-neighbor model presents a second-order critical line between an antiferromagnetic and a paramagnetic phase, with a singular point at vanishing transverse field, where the model becomes classical and the phase transition becomes first order <cit.>. In contrast, our results show that in the strong long-range model, a significant portion of the critical line between the antiferromagnetic and paramagnetic phases becomes first order. The critical line changes order at a tricritical point that occurs at non-zero transverse field.
To understand how the critical line morphs from one limit to the other, we employ a variational quantum Monte Carlo method with a neural-network-based ansatz, the visual transformer <cit.>, in the full range of interactions 0 < α < ∞, with α→∞ corresponding to the nearest-neighbor limit. These numerical results show how the position of the tricritical point smoothly moves from zero to a finite transverse field. Remarkably, the first-order phase transition survives well into the weak long-range regime (α > d with d the dimension of the lattice, and specifically d=1 for the chain). Finally, we check whether the first-order phase transition is present if the ferromagnetic intrasublattice interactions are removed. In this case, we find that the full critical line is second order. This indicates that the key to stabilize a first-order phase transition is the simultaneous presence of antiferromagnetic intersublattice and ferromagnetic intrasublattice interactions. Tunable-range antiferromagnetic interactions are just one way to implement and tune these staggered interactions.
The rest of the paper is organized as follows. In Sec. <ref> we present the Hamiltonian for the tunable-range unfrustrated antiferromagnetic Ising chain. Section <ref> is dedicated to the exact solution of the model in the strong long-range regime, with a characterization of the ground state phase diagram through the order parameter and the correlations. The numerical results are presented in Sec. <ref>. We end the paper with a discussion of the results and the source of the first-order phase transition in Sec. <ref> and provide technical details and complementary results in the appendices.
§ THE MODEL
We consider a one-dimensional spin chain (d=1) with tunable-range Ising interactions and subject to transverse and longitudinal fields. The Hamiltonian reads
H = -ω_z ∑_i=1^N S_i^z - ω_x ∑_i=1^N S_i^x - ∑_i,j=1^N J_ijS_i^xS_j^x ,
where S_i^x,z are spin-s operators acting on site i. The interactions are J_ij = (-1)^i+jΓJ̃(𝐫_ij)/Ñ, with
J̃(𝐫_ij)=
b if 𝐫_ij=0
|𝐫_i j|^-α otherwise
where the distance 𝐫_ij is given by the nearest image convention using periodic boundary conditions (PBC). Γ >0 is the interaction strength, b is a parameter that can be tuned to shift the spectrum of J, Ñ = ∑_i J̃_ij is Kac's renormalization factor and α is the coefficient that tunes the range of interaction. The alternating sign makes the interactions antiferromagnetic when spins are separated by an odd number of lattice parameters and ferromagnetic when spins are separated by an even number of parameters, see Fig. <ref>. According to the classification of long-range interactions valid for both classical and quantum models, the strong long-range regime corresponds to α < d=1 <cit.>. This regime is characterized by the loss of additivity, although extensivity is preserved by Kac's renormalization factor, Ñ, ensuring a well-defined thermodynamic limit. In the limit α→∞, the Hamiltonian [Eq. (<ref>)] corresponds to the nearest-neighbor Ising chain with transverse and longitudinal fields.
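For concreteness, the coupling matrix defined above can be assembled in a few lines of Python (an exploratory sketch, not code from this work); b is left as an input and can be tuned afterwards to shift the spectrum of J:

```python
import numpy as np

def coupling_matrix(N, alpha, Gamma=1.0, b=0.0):
    """Staggered couplings J_ij = (-1)^(i+j) Gamma Jt(r_ij) / Ntilde on a
    periodic chain, with Jt(0) = b, Jt(r) = r^(-alpha) for the nearest-image
    distance r, and Ntilde the Kac renormalization factor."""
    idx = np.arange(N)
    r = np.abs(idx[:, None] - idx[None, :])
    r = np.minimum(r, N - r)                       # nearest-image convention
    Jt = np.where(r == 0, b, 1.0 / np.maximum(r, 1) ** alpha)
    Ntilde = Jt[0].sum()                           # Kac factor, sum_j Jt(r_0j)
    sign = (-1.0) ** (idx[:, None] + idx[None, :])
    return Gamma * sign * Jt / Ntilde

# For alpha = 0, choosing b = 1 makes J rank one: the only nonzero
# eigenvalue is Gamma (so M = 1), with eigenvector lambda_i0 = (-1)^i.
J = coupling_matrix(512, alpha=0.0, b=1.0)
evals = np.linalg.eigvalsh(J)
print(evals[-1], np.allclose(evals[:-1], 0.0))     # ~1.0, True
```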
The tunable-range staggered interactions that we consider here are a generalization of the antiferromagnetic nearest-neighbor interactions in unfrustrated lattices. The alternating sign prevents frustration and allows for the formation of two sublattices, as sketched in Fig. <ref>. For vanishing fields, the ground state is the antiferromagnetic configuration, with the spins fully polarized along the x axis in alternating directions for even and odd spins. Staggered tunable-range interactions have been proposed previously for the Heisenberg model <cit.>.
§ EXACT SOLUTION IN THE STRONG LONG-RANGE REGIME
§.§ Canonical partition function
Reference presents an exact analytical solution for quantum strong long-range models in the canonical ensemble and the thermodynamic limit (N →∞), extending previous classical results <cit.>. We sketch its application to Hamiltonian (<ref>) here; the details can be found in App. <ref>.
The Hamiltonian is divided into free and interacting parts
H = H_0 - ∑_i,jJ_ijS_i^x S_j^x ,
First, the interaction matrix is diagonalized
J_ij = 1/N∑_k=0^N-1λ_ik D_k λ_jk ,
where D_k are its eigenvalues and λ_ik/√(N) is an orthogonal matrix because J_ij is symmetric. The smallest eigenvalue is set to zero by tuning the on-site interaction parameter b (<ref>). For s≠ 1/2, setting b≠0 introduces new terms in the Hamiltonian, but due to Kac's renormalization factor, Ñ, these are negligible in the thermodynamic limit.
Then, the Hamiltonian can be mapped to a generalized Dicke model <cit.>, replacing the long-range interactions by effective interactions between each particle and a set of M real fields, { u_k }, where M is the number of non-zero eigenvalues of J.
The mapping is only valid if lim_N→∞ M/N = 0, restricting the applicability of the method to models in which the number of non-zero eigenvalues of the interaction matrix is a negligible fraction of the total.
This is the reason why the solution is restricted to the strong long-range regime, where α<d ensures that this condition is met.
Following Wang's procedure, we can obtain the canonical partition function of the corresponding Dicke model <cit.>, thus solving our original model. This yields
Z = exp(-N β f[u̅_k]) ∏_p=0^M-1√(2 D_p) ,
where f[u̅_k] = min_{u_k} f [u_k] and f [u_k] = f(u_0, …, u_M-1). By noticing that in the thermodynamic limit f[u̅_k] is the free energy per particle: f[u̅_k] = lim_N →∞ - (Nβ)^-1ln Z, it becomes apparent that computing the partition function is reduced to a minimization of the variational free energy per particle
f[u_k] = ∑_k=0^M-1u_k^2/D_k + f_ m[u_k]
with respect to the auxiliary fields {u_k}. Here f_ m = F_ m / N, with
F_ m[u_k] = -1/β∑_i=1^N ln[ sinh( (2s+1)βε_i[u_k] )/sinh(βε_i[u_k] )] ,
and
2ε_i [u_k] = √(ω_z^2 + ( ω_x + 2∑_k=0^M-1λ_ik u_k )^2) .
Although computing the canonical partition function has been reduced to solving a minimization problem, finding the global minima of a multivariate function is not a simple task, and success is not guaranteed in most cases. However, for α = 0 (homogeneous all-to-all couplings which are ferromagnetic or antiferromagnetic depending on the distance between the spins) the problem naturally becomes univariate, as there is only one non-zero eigenvalue of the interaction matrix J, so M=1. This is equivalent to setting u_k ≠ 0 = 0, with u_0 corresponding to the largest eigenvalue: D_0 = max{D_k} = Γ and λ_i0 = (-1)^i. Additionally, we find that this solution with u_k ≠ 0 = 0 is the global minimum also for 0 ≠α < 1. This can be shown analytically for ω_x = 0 (See. Appendix <ref>) and has been verified numerically otherwise. With this, computing the canonical partition function has been reduced to a univariate minimization of f(u) ≡ f(u, 0, …, 0) in terms of u ≡ u_0 for all α < 1, which can be tackled analytically or numerically. In the remainder of this section, we do so to obtain the ground state phase diagram of the model and the susceptibilities. This also implies that the equilibrium properties of the model are universal for all α < 1. In fact, f(u) coincides with the free energy obtained with a mean-field approach, proving that mean field is exact also for strong long-range unfrustrated antiferromagnetic models. This is consistent with the fact that unfrustrated antiferromagnetic models can be mapped to ferromagnetic models in a staggered field and a mean-field description of strong long-range ferromagnetic models has been shown to be exact <cit.>.
§.§ Ground State Phase Diagram
In order to study the ground state of the model, we define the variational ground-state energy as the zero-temperature limit of the variational free energy e_0(u) = lim_β→∞ f(u), such that
e_0(u) = u^2/Γ - s (ε_+(u) + ε_-(u)) ,
with
2 ε_±(u) = √(ω_z^2 + (ω_x ± 2u)^2) .
It can be shown that the global minimum, u̅, is proportional to the staggered magnetization (see App. <ref>)
m̅_ s = 1/N∑_i=1^N (-1)^i ⟨ S_i^x ⟩ = u̅/Γ .
The staggered magnetization is the order parameter of unfrustrated antiferromagnetic models such as Hamiltonian (<ref>). It is zero in the paramagnetic phase and it measures how close the ground state is to a perfect antiferromagnetic configuration in the antiferromagnetic phase. Figure <ref>(a) shows the phase diagram of the model. The staggered magnetization is obtained from u̅, by minimizing e_0, and plotted as a function of the longitudinal and transverse fields. The model exhibits a quantum phase transition (QPT) between an antiferromagnetic phase and a paramagnetic phase. The nature of the phase transition changes along the critical line from a second- to a first-order QPT. The change can be described analytically by applying the Landau theory of phase transitions to a series expansion of e_0(m_ s), with m_ s = u/Γ the variational staggered magnetization. By studying the change of sign of the expansion coefficients we obtain the tricritical point
ω_x, tp = 8sΓ/(5√(5)) ,
ω_z, tp = 16sΓ/(5√(5)) ,
and the equation for the second-order portion of the critical line
4s^2Γ^2 ω_z^4 = ( ω_z^2 + ω_x^2 )^3 ,
when |ω_z | > 2|ω_x |.
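This Landau analysis can be checked symbolically. The sketch below expands e_0(u) around u = 0 with sympy, recovers the quadratic coefficient whose zero gives the second-order line above, and verifies that both the quadratic and quartic coefficients vanish at the quoted tricritical point:

import sympy as sp

u, s, G, wx, wz = sp.symbols('u s Gamma omega_x omega_z', positive=True)
eps_p = sp.sqrt(wz**2 + (wx + 2 * u)**2) / 2
eps_m = sp.sqrt(wz**2 + (wx - 2 * u)**2) / 2
e0 = u**2 / G - s * (eps_p + eps_m)

poly = sp.series(e0, u, 0, 6).removeO()
c2, c4 = poly.coeff(u, 2), poly.coeff(u, 4)

# c2 = 0 reproduces the second-order line 4 s^2 Gamma^2 wz^4 = (wz^2 + wx^2)^3
print(sp.simplify(c2))          # 1/Gamma - 2 s wz^2 / (wx^2 + wz^2)^(3/2)

tp = {wx: 8 * s * G / (5 * sp.sqrt(5)), wz: 16 * s * G / (5 * sp.sqrt(5))}
print(sp.simplify(c2.subs(tp)), sp.simplify(c4.subs(tp)))   # both vanish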
The first-order critical condition cannot be obtained from this analysis because the global minima corresponding to the antiferromagnetic configurations fall outside of the radius of analyticity of the series expansion of e_0(m_ s). Nevertheless, having identified the second-order portion, the rest of the critical line is first order by exclusion. We verify this graphically in Fig. <ref>(c) where we show the landscape of minima of e_0 for different values of ω_x when ω_z / (sΓ) = 0.2. For ω_x / (sΓ)=0.8 there exists a local minimum at m_ s=0, corresponding to the paramagnetic state, and two degenerate global minima at m_ s≈± 1, corresponding to the symmetric antiferromagnetic ground states. For ω_x / (s Γ)=1.2, the minimum at m_ s = 0 has become the global minimum, following a first-order phase transition between the antiferromagnetic and paramagnetic phases. The same behavior is observed for any ω_z < ω_z, tp.
We have thus found the existence of a first-order QPT on a finite portion of the critical line. We emphasize that this is valid for all strong long-range models, α < 1. This constitutes a qualitative difference with respect to the nearest-neighbor limit, α→∞, of the model, in which the QPT is second-order along the full critical line except for the point where ω_z = 0, when the model is classical <cit.>. This is sketched in Fig. <ref>(b) for comparison. It is worth noting that there is some evidence that the phase transition could be mixed order <cit.> around the classical point in the nearest-neighbor limit, but not purely first order like we have just described for the strong long-range regime <cit.>.
For α = 0 it is possible to do a classical analysis of the model that predicts the same phase diagram that we just described, with distinct finite regions of first- and second-order phase transitions. See App. <ref> for more details. This illustrates that a classical analysis can serve as an exploratory tool for spin models with all-to-all interactions. It offers an alternative perspective on the phase transition based on the magnetizations of each sublattice rather than the order parameter (the staggered magnetization in this case). In any case, it is not a replacement for an exact solution due to the fact that its application is limited to α = 0 and that it does not allow for the computation of susceptibilities.
§.§ Analysis of correlations
Introducing a perturbative field to the Hamiltonian (<ref>), H → H -∑_i=1^N h_i S_i^x, we can compute the susceptibilities as
χ_ij = 1/βlim_{h_n}→ 0∂^2 ln Z[h_n]/∂ h_j ∂ h_i .
The susceptibility is proportional to the Kubo correlator <cit.>[Chap. 4], and hence a measure of correlations between spins.
For a translation invariant model the susceptibility can be computed analytically <cit.>[Chap. 6] (see details in App. <ref>). At zero temperature it can be written as
χ_ij = (A^-1)_ij Y_j ,
where the matrix A is defined by A_ij = δ_ij - 2Y_iJ_ij and Y_i = s ω_z^2 / (8 ε̅_i^3) with ε̅_i = ε_(-1)^i(u̅) as defined in Eq. (<ref>).
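A minimal numerical sketch of this construction is given below. The explicit form of J_ij (staggered sign with a power-law envelope and a crude Kac-like normalization) is an illustrative assumption standing in for the interaction matrix of the model, and u̅ is taken as input from the minimization step:

import numpy as np

N, alpha, s = 8, 0.0, 0.5
Gamma, wx, wz, u_bar = 1.0, 0.4, 0.2, 0.45   # u_bar as obtained from the minimization

idx = np.arange(N)
d = np.abs(idx[:, None] - idx[None, :])
d = np.minimum(d, N - d)                      # periodic chord-like distance
sign = (-1.0) ** (idx[:, None] + idx[None, :])
J = np.where(d > 0, sign / np.maximum(d, 1) ** alpha, 0.0) * Gamma / N

eps = 0.5 * np.sqrt(wz**2 + (wx + 2 * u_bar * (-1.0) ** idx) ** 2)   # eps_i(u_bar)
Y = s * wz**2 / (8 * eps**3)

A = np.eye(N) - 2 * Y[:, None] * J
chi = np.linalg.inv(A) * Y[None, :]           # chi_ij = (A^{-1})_ij Y_j
print(np.round(chi, 4))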
In Figs. <ref>(a) and (b) we show the behavior of the susceptibility matrix, χ_ij, for α=0 in the two phases of the model. Although Eq. (<ref>) has been obtained in the thermodynamic limit (N →∞), evaluating it requires that we fix a finite value of N. We set N=8 and mask out the diagonal elements to better display the structure of the correlation matrix. For α = 0, in the antiferromagnetic phase the correlations show an alternating pattern with intrasublattice correlations being positive and intersublattice correlations being negative. In addition, the intrasublattice correlations of the sublattice that is aligned with the longitudinal field (even sites) are weaker than the intrasublattice correlations of the sublattice that is aligned against the field (odd sites). This effect accentuates with increasing longitudinal field, to the point that the correlations almost vanish for the sublattice aligned with the field for large longitudinal field. The spins of the sublattice that is aligned with the longitudinal field behave increasingly as paramagnetic free spins despite the model still being in an antiferromagnetic state. In the paramagnetic phase the two sublattices become equally magnetized along the combined external field and the correlation matrix recovers a perfectly alternating pattern that matches the staggered interactions, with intra- and intersublattice correlations being of equal magnitude and opposite sign.
Setting α≠ 0 introduces a spatial dependence in the correlations in both phases. The structure of the correlation matrix remains the same but the elements are modulated by the distance between the corresponding spins.
The correlations exhibit a power-law decay with distance within each of the possible families: intersublattice, even intrasublattice and odd intrasublattice. This is the case regardless of the proximity to the critical point. This behaviour is typical of strong long-range systems and contrasts with weak long-range and short range systems where power-law decay is only present at the critical point <cit.>. Since the model is translation invariant, we can define χ_r, 01≡χ_0 2r+1, χ_r, 00≡χ_0 2r and χ_r, 11≡χ_1 2r+1 for the intersublattice, even intrasublattice and odd intrasublattice correlations; they all follow a power-law decay: χ_r ∝ r^-α_χ with the same exponent, α_χ.
In the paramagnetic phase χ_r, 00 = χ_r, 11. The rate of decay of correlations depends linearly on the rate of decay of interactions, i.e. α_χ = a α + b. Fig. <ref> (d) shows the slope, a, across the phase diagram. The numerical fit of the relation also shows that b ≈ 0 in all cases, which is consistent with the fact that the model must become distance independent for α = 0. Close to the second-order critical line, α_χ becomes independent of α, with a = 0. This is consistent with the behavior described in Ref. <cit.> for the strong long-range ferromagnetic Ising model. In contrast, the first-order phase transition is marked by a discontinuity in the slope of the linear dependence between two non-zero values.
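The exponent α_χ is extracted from a linear fit in log-log scale; a self-contained sketch of the estimator on synthetic power-law data (in practice χ_r comes from the matrix elements χ_{0,2r+1}, etc.) is:

import numpy as np

rng = np.random.default_rng(0)
r = np.arange(1, 50)
chi_r = 2.0 * r**-1.3 * (1 + 0.02 * rng.standard_normal(r.size))   # alpha_chi = 1.3
slope, _ = np.polyfit(np.log(r), np.log(np.abs(chi_r)), 1)
print(f"fitted alpha_chi = {-slope:.3f}")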
The effect of the first- and second-order phase transitions is also apparent in the behavior of single correlation matrix elements across the phase diagram. Figure <ref>(c) shows the value of a correlation matrix element as a function of the longitudinal and transverse fields, for α = 0. They exhibit a divergence at the second-order phase transition and a finite discontinuity at the first-order phase transition. The behavior is analogous for any matrix element and any value of α.
§ NUMERICAL SOLUTION FOR ARBITRARY INTERACTION RANGE
Having established a significant difference between the phase diagram of the strong long-range and nearest-neighbors models, it is natural to wonder how this difference is interpolated for arbitrary values of the range of interactions, α. Since only the strong long-range regime is analytically tractable, in this section we resort to zero-temperature variational quantum Monte Carlo (qMC) <cit.> simulations with a visual transformer (ViT) <cit.> ansatz for finite system sizes to study the model in the full range of interactions. We show the results obtained for spins with s = 1/2 using this architecture, given its recent success in describing spin-1/2 chains with long-range interactions <cit.>. Details on the particular implementation used here can be found in App. <ref>.
Using this technique we obtain the ground state of finite-size chains with periodic boundary conditions along the whole parameter space (ω_x, ω_z, α). In order to study how the phase transition evolves with α, we choose as order parameter the squared staggered magnetization m_s^2 [cf. Eq. (<ref>)]. This choice is justified by the fact that, in finite-size chains, the symmetry of the ground state is not broken and the staggered magnetization is zero throughout the whole parameter space. The squared magnetization, on the contrary, exhibits all the features related to first- and second-order phase transitions and allows us to carry out this numerical characterization.
In Fig. <ref> we observe how ⟨ m_s^2⟩ behaves for different slices of the phase diagram (cf. Fig. <ref>) as a function of the different values of α for a fixed chain size of N = 50. In panels (a), (b) and (c), which correspond to vertical slices (fixed transverse field), we see that for certain values of α the nature of the phase transition evolves from first to second order as the transverse field, ω_z, increases. Panel (a), corresponding to ω_z/(sΓ) = 0.2, merits special attention. It shows that the discontinuity characteristic of first-order phase transitions is clearly present up to α = 2.5 (See App. <ref> for further confirmation). In panels (b) and (c) we see how this transition becomes second order, with a continuous change of the order parameter at the critical point. If instead of comparing a given value of α across panels we focus on any given panel, (a), (b) or (c), we observe that the transition develops a discontinuity as α is lowered. In all cases the transition is clearly discontinuous for α≤ 1, in agreement with the analytical predictions.
It is interesting to note that analytical results of Sec. <ref> predict that for any α≤ 1, the critical behavior should be universal. In contrast, the numerical results do exhibit a dependence of the critical point and the value of the order parameter on α. We attribute this to finite-size effects, which are expected to be particularly important in long-range systems. The relatively small size considered here allows us to witness dependencies on α that are washed away in the thermodynamic limit. Additionally, we observe that the ansatz encounters difficulties in obtaining the correct ground state near the critical point for α≤ 1. One would expect the critical point to recede toward smaller values of ω_x as α increases. However, for α≤ 1 the states corresponding to the ordered and paramagnetic phases are practically degenerate in energy and the ViT fails to systematically converge to the correct ground state, so this trend can no longer be numerically confirmed.
In Fig. <ref>(d) we show a horizontal slice of the phase diagram (fixed longitudinal field). The phase transition is second order for all values of α, as expected from the analytical results of Sec. <ref>. Following Figs. <ref>(a) and (b) we expected a decrease of the second-order critical point as α increases. This is confirmed by the numerical results.
The critical point tends to the predicted values of ω_z/(sΓ) = 2 for α≤ 1 (strong long-range regime) and ω_z/(sΓ) = 0.5 in the limit of α→∞ (nearest-neighbors regime). We observe that from α≳ 7 the curves collapse and thus it can be considered as the numerical limit from which the model operates in the nearest-neighbors regime. On the other hand, the numerical instabilities now appear at a point where the model is classical (no transverse field).
To better certify the order of the different phase transitions, beyond a visual analysis of the discontinuities (or lack thereof) of the order parameter, we perform a finite-size scaling analysis. In Fig. <ref> we show this analysis for α = 2 (See App. <ref> for other values of α). As it can be seen from panel (a), all the curves corresponding to N = 50, 70, 100 coincide, showing an independence with size characteristic of first-order transitions, in addition to the discontinuity in the order parameter. In contrast, in panel (c), the curves exhibit the typical size dependence of second-order phase transitions. The continuous change of the order parameter becomes increasingly non-analytical as the size increases. The crossover point marks the second-order critical point expected in the thermodynamic limit.
Figure <ref> summarizes the numerical results discussed in this section, along with additional results reserved for App. <ref>. We sketch the critical line for different values of α, highlighting the regions where it is first and second order. We have established that the first-order phase transition at finite transverse field is present for α≤ 2.5. The portion of the critical line that is first order decreases progressively with increasing α until for α≳ 3 the full critical line is second order. This is remarkable because it indicates that the first-order phase transition is present not only in the strong long-range regime but also beyond the mean-field threshold and presumably (see the next paragraph) in the full weak long-range regime. Although the precise boundaries, α_ MF and α^*, between the regime in which the model exhibits mean-field critical exponents and the weak long-range regime and between the weak long-range and short-range regimes, have not been established for the antiferromagnetic model under consideration, it is known that for the analogous ferromagnetic model they lie at α_ MF = 5/3 ≈ 1.66 and α^* = 3 <cit.>.
Additionally, we have witnessed the shrinking of the antiferromagnetic phase as the model approaches the short range regime, driven by a decrease of the critical transverse field at zero longitudinal field from ω_z/(sΓ) = 2 for α > 1 to ω_z/(sΓ) = 0.5 for α≳ 7.
It is worth noting that the first-order phase transition is very sensitive to finite-size effects. This is evident from the curve corresponding to α = 0.5. We know from our analytical results that the tricritical point where the transition changes from first to second order occurs at ω_z / (sΓ) = 16/(5 √(5)) ≈ 1.43. However, from our numerical results with N=100 we are only able to certify a first-order phase transition up to ω_z / (sΓ) = 0.8. This leads us to believe that our numerical results are also underestimating the region where the critical line is first order for all the other values of α, and therefore also for how low a value of α the first-order phase transition survives.
§ DISCUSSION
In this paper we have generalized the Ising chain in transverse field to feature tunable-range antiferromagnetic interactions. We have solved the model analytically in the strong long-range regime, showing that it presents a tricritical point in the phase diagram where the critical line changes from first to second order. This is in contrast with the nearest-neighbors limit, where the critical line was known to be second order everywhere except for the point of vanishing transverse field, where the model is classical. To understand the transition from one limit to the other, we have studied the model numerically in the full range of interactions. We have found that the first-order phase transition is present beyond the strong long-range regime, apparently in the full weak long-range regime, only disappearing when the model enters the short-range regime beyond α≈ 3. This confirms that the range of interactions can influence the nature of phase transitions in a model.
The fact that similar behaviour has been observed in a model with only antiferromagnetic nearest-neighbor and ferromagnetic next-nearest-neighbor interactions <cit.> suggests that the key ingredient that stabilizes a first-order phase transition is the presence of intrasublattice ferromagnetic interactions, in contrast with models with only intersublattice antiferromagnetic interactions. To verify this hypothesis we have also considered Hamiltonian (<ref>) but without intrasublattice ferromagnetic interactions, i.e. with J_ij = 0 for i+j even. This model is not tractable with the analytical method used in Sec. <ref>, so an exact solution in the strong long-range regime is not possible. Nevertheless, a classical analysis of the α = 0 limit, analogous to the one described in App. <ref>, shows that the phase transition is always second order, except for the point of vanishing transverse field, ω_z = 0, where the model is classical. Thus, we can conclude that intrasublattice ferromagnetic interactions (on top of intersublattice antiferromagnetic interactions) are the key ingredient to generate a first-order phase transition. In that sense, in the tunable-range unfrustrated antiferromagnetic Ising model of Hamiltonian (<ref>), α acts as a tuning knob to change the ratio between inter- and intrasublattice interactions. For low enough α, intrasublattice ferromagnetic interactions become large enough to stabilize a first-order phase transition. The same mechanism might explain the first-order phase transitions reported in other quantum long-range models with staggered interactions <cit.>.
Finally, the appearance of a first-order phase transition at zero temperature suggests that the strong long-range unfrustrated antiferromagnetic Ising chain may exhibit ensemble inequivalence at finite temperature. Ensemble inequivalence has been reported to appear in the strong long-range regime in both classical <cit.> and quantum models featuring a first-order phase transition <cit.>.
§ ACKNOWLEDGEMENTS
The authors acknowledge funding from the grant
TED2021-131447B-C21 funded by MCIN/AEI/10.13039/501100011033 and the EU
‘NextGenerationEU’/PRTR, grant CEX2023-001286-S financed by MICIU/AEI /10.13039/501100011033, the Gobierno de Aragón (Grant E09-17R Q-MAD), Quantum Spain and the CSIC Quantum
Technologies Platform PTI-001. This research project was made possible through the access granted by the Galician Supercomputing Center (CESGA) to its supercomputing infrastructure. The supercomputer FinisTerrae III and its permanent data storage system have been funded by the Spanish Ministry of Science and Innovation, the Galician Government and the European Regional Development Fund (ERDF). J. R-R acknowledges support from the Ministry of Universities of the Spanish Government through the grant FPU2020-07231. S. R-J. acknowledges financial support from Gobierno de Aragón through a doctoral fellowship.
§ SOLVING THE STAGGERED ANTIFERROMAGNETIC ISING CHAIN
After diagonalizing the interaction matrix, J, as in Eq. (<ref>), Hamiltonian (<ref>) can be mapped onto a generalized Dicke model <cit.> given by
H_D = H_0 + ∑_k=0^M-1 (1/D_k) a_k^† a_k - ∑_k=0^M-1∑_i=1^N (a_k + a_k^†) (λ_ik/√(N)) S_i^x ,
where H_0 = -ω_z∑_i^N S_i^z - ω_x ∑_i^N S_i^x and a_k and a_k^'^†, with [a_k,a_k^'^†] = δ_k,k^', are the creation and annihilation operators of M bosonic modes.
The mapping is valid if lim_N→∞ M/N=0, where M is the number of non-zero eigenvalues of the interaction matrix, J, from the Hamiltonian (<ref>) (the smallest eigenvalue is always set to zero by tuning the parameter b from Eq. (<ref>)).
We have thus replaced the interaction between the spins by an interaction between each spin and a set of M bosonic modes.
Following Wang's procedure <cit.>, the canonical partition function of that generalized Dicke model can be obtained by first computing the partial trace over the matter degrees of freedom.
Then, replacing the photonic degrees of freedom by a collection of M real Gaussian integrals, we obtain
Z = ∫∏_k=0^M-1√(ND_k/π)du_k exp{-Nβ f[u_k]} ,
where the variational free energy per particle, f[u_k], is given by Eq. (<ref>).
The variational free energy has two contributions: one accounting only for the real fields associated with the bosonic modes, {u_k}, and the function F_ m[u_k] [Eq. (<ref>)] which accounts for the spins and their interaction with the bosonic modes. It is computed as
F_ m[u_k] = -1/βln(Z_ m) ,
with
Z_ m[u_k] = Tr_m[ exp{ -β( H_0 - ∑_i=1^N ∑_k=0^M-1 2λ_ik u_k S_i^x ) }] .
Z_ m[u_k] represents the partial trace of the generalized Dicke model over its matter degrees of freedom. Using the fact that it factorizes over the N different spins we obtain
Z_ m[u_k] = ∏_i=1^N ( ∑_m=-s^+sexp{-2mβε_i[u_k]})
= ∏_i=1^N sinh((2s+1)βε_i)/sinh(βε_i) ,
where ε_i[u_k] is given by Eq.(<ref>).
Hence, using Wang's procedure the partition function of the original model can be expressed as the integral over a set of M real fields, Eq. (<ref>).
The explicit linear dependence on N of the exponent allows using the saddle-point method, which is exact in the limit N→∞.
Hence, the integral can be replaced by the value of its integrand at its maximum, as done in Eq. (<ref>), which corresponds to finding the global minimum of the variational free energy per particle.
§ FINDING THE GLOBAL MINIMUM OF THE VARIATIONAL FREE ENERGY PER PARTICLE IN ABSENCE OF LONGITUDINAL FIELD
We want to find the global minimum of the variational free energy per particle, f[ u_k ] from Eq. (<ref>), for all α < 1 and ω_x = 0.
Defining for each site on the lattice the variable
μ_i = ∑_k=0^M-1λ_ik u_k ,
allows writing Eq. (<ref>) as
2ε_i[u_k] = √(ω_z^2 + 4 μ_i^2) .
The relation from Eq. (<ref>) can be inverted to obtain u_k = ∑_i=1^N μ_i λ_ik/N.
The vanishing gradient condition of f[u_k] in its local minima, {u̅_k }, can be written as
u̅_k/D_k = s/N∑_i=1^N λ_ik∑_l=0^M-1λ_ilu̅_l/ε̅_i B_s(2sβε̅_i ) ,
with B_s(x) the Brillouin function (B_1/2(x) = tanh(x)) <cit.>[Chap. 11] and ε̅_i = ε_i[u̅_k ].
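For reference, a small sketch of the Brillouin function (standard definition, any spin s > 0), with a numerical check of the quoted s = 1/2 special case:

import numpy as np

def brillouin(s, x):
    # B_s(x) = (2s+1)/(2s) coth((2s+1)x/(2s)) - 1/(2s) coth(x/(2s))
    a, b = (2 * s + 1) / (2 * s), 1 / (2 * s)
    return a / np.tanh(a * x) - b / np.tanh(b * x)

x = np.linspace(0.1, 5.0, 50)
assert np.allclose(brillouin(0.5, x), np.tanh(x))   # quoted special case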
A linear combination of this set of M equations allows us to find a relation satisfied in the stationary points of the variational free energy,
μ̅_j = s ∑_i=1^N J_ijμ̅_i/ε̅_i B_s (2sβε̅_i) .
This relation allows us to particularize the expression of f[u̅_k], i.e. the expression of the variational free energy per particle in its stationary points, as
f[u̅_k]
= 1/N∑_j=1^N [ sβμ̅_j^2/ε̅_j B_s(2sβε̅_j) - ln( sinh((2s+1)βε̅_j)/sinh(βε̅_j)) ] .
Hence, in the stationary points the variational free energy can be written as a sum of a certain function in each site of the lattice
f[u̅_k] = 1/N∑_i=1^N ξ(μ̅_i^2) ,
as ε̅_i does not depend on μ̅_i itself but on μ̅_i^2 (see Eq. (<ref>)).
Equation (<ref>) means that among all the possible local minima, the global one must satisfy that |μ̅_i | = μ̅ for each lattice site, in order to minimize each of the N terms ξ(μ̅_i^2).
Due to the structure of the matrix λ_ik, the only possible configuration that meets |μ̅_i | = μ̅ is
μ̅_i = (-1)^i μ̅ .
It can be translated into a condition for the fields {u_k} inverting the relation from Eq. (<ref>). This condition must be satisfied in the global minimum of the variational free energy per particle,
u̅_k = μ̅ δ_k0 .
Hence, the multivariate minimization problem reduces analytically to a single-variable one when ω_x = 0, for any value of α.
Then, when α=0 the problem is naturally single-variable as M=1 for any value of ω_x.
For any other α<1 it has been numerically checked that the problem remains single-variable, as was analytically proved in the strong long-range ferromagnetic model <cit.>.
§ ANALYTICAL CALCULATION OF THE MAGNETIZATION AND THE SUSCEPTIBILITY
As explained in the main text, introducing a perturbative longitudinal field in Hamiltonian (<ref>) allows us to obtain the magnetization of the model and the susceptibilities between spins. In this appendix we present the exact analytical expressions for both quantities in the strong long-range regime.
By introducing a perturbative magnetic field, Hamiltonian (<ref>) is replaced by H → H - ∑_i=1^N h_i S_i^x, which leads to a partition function that depends on the perturbative fields: Z[h_n] = Z(h_1,…,h_N), defined by Eq. (<ref>) with the replacement ω_x →ω_x + h_i in Eq. (<ref>).
The variational free energy then depends on {h_n} through Eq. (<ref>), which modifies the minimization problem we need to solve.
First, we can define field-dependent magnetization and susceptibilities as
⟨ S_i^x ⟩ [h_n] = 1/β∂ln Z[h_n]/∂ h_i ,
χ_ij[h_n] = 1/β∂^2 ln Z[h_n]/∂ h_j ∂ h_i ,
which allow us to compute the magnetization and the susceptibilities from Hamiltonian (<ref>) as ⟨ S_i^x ⟩ = lim_{h_n}→ 0⟨ S_i^x ⟩ [h_n] and χ_ij = lim_{h_n}→ 0χ_ij [h_n].
For a translation invariant model, as considered, we will be able to compute both magnitudes analytically <cit.>[Chap. 6].
A direct computation of the derivative (<ref>) allows us to write
⟨ S_i^x ⟩[ h_n ] = s B_s(2sβε̅_i ) (ω_x + h_i + 2 ∑_k=0^M-1u̅_k λ_ik)/(2 ε̅_i) .
Note that here the solution to the minimization problem will depend on the perturbative fields, i.e. u̅_k ≡u̅_k[h_n].
Using Eq. (<ref>) the vanishing gradient condition of the variational free energy per particle in its global minimum can be written as
1/D_ku̅_k = 1/N∑_i=1^N λ_ik⟨ S_i^x ⟩ [h_n] .
In the antiferromagnetic model λ_i0=(-1)^i and D_0=Γ, which means that u̅_0/Γ = m̅_ s, proving Eq. (<ref>) in the limit {h_n}→ 0 as a consequence of the null gradient condition, valid for α<1.
Computing the derivative of Eq. (<ref>) leads to
1/D_k∂u_k/∂ h_j = 1/N∑_i=1^N λ_ikχ_ij[h_n] ,
where susceptibilities are defined by Eq. (<ref>). Merging both expressions, we obtain a relation between the susceptibilities, which in the limit {h_n}→0 can be written as
χ_ij = Y_i ( δ_ij + 2 ∑_r=1^N J_irχ_rj) ,
with
Y_i = s/2ε̅_i^2[ ε̅_i B_s(2sβε̅_i) + (ω_x + 2∑_l=0^M-1λ_ilu̅_l )^2 (sβ/2 B_s^' (2sβε̅_i) - 1/4ε̅_iB_s(2sβε̅_i)) ] .
We can manipulate Eq. (<ref>) to write a matrix equation,
∑_r=1^N ( δ_ir - 2Y_iJ_ir) χ_rj = Y_i δ_ij ,
which allows us to obtain the susceptibilities by inverting the matrix A_ij = δ_ij - 2Y_iJ_ij, writing
χ_ij = A^-1_ij Y_j ,
as particularized in Eq. (<ref>) for β→∞.
§ CLASSICAL ANALYSIS OF THE PHASE TRANSITION FOR Α = 0
For α=0 the interaction is homogeneous and alternating in sign and we can express Hamiltonian (<ref>) in terms of total spin operators for each of the sublattices, J_Λ^γ = ∑_i ∈ℒ(Λ) S_i^γ, with Λ = A, B the sublattice index and ℒ(Λ) the corresponding sublattice,
H = -ω_z (J_A^z + J_B^z) -ω_x (J_A^x + J_B^x ) - Γ/N(J_A^x - J_B^x)^2 .
This Hamiltonian commutes with the total spin operators, J_Λ^2 = (J_Λ^x)^2 + (J_Λ^y)^2 + (J_Λ^z)^2. Consequently, it connects only states with the same total spins J_A^2 and J_B^2. Using exact diagonalization for finite sizes, we show in Appendix <ref> that the ground state always lies on the subspace of maximum total spins. This implies that the ground state properties of the model are perfectly captured by a model of two spins of sizes, j_A = j_B = sN/2, interacting according to Eq. (<ref>). In the thermodynamic limit, N →∞, the description is further simplified by the fact that these spins can be described exactly in their classical limit <cit.>. We can therefore study the energy resulting from replacing the spins with classical magnetizations J_Λ^γ→ s N/2 m_Λ^γ. The magnetizations are unit vectors that we can assume to lie on the x, z plane m_Λ = (cosθ_Λ, 0, sinθ_Λ). With this, the energy per site reads
e(θ_A, θ_B)= -sω_z/2(sinθ_A + sinθ_B)
-sω_x/2(cosθ_A + cosθ_B)
- s^2Γ/4(cosθ_A - cosθ_B)^2 .
The ground state staggered magnetization is given by m_s = s (cosθ̅_A - cosθ̅_B) / 2, with θ̅_A and θ̅_B the minimizers of e(θ_A, θ_B).
In Fig. <ref> we show the energy landscape as a function of θ_A and θ_B for different values of ω_x and ω_z. For ω_x / (s Γ) ≪ 1 and ω_z / (s Γ) ≪ 2 there are two global minima at (θ̅_A, θ̅_B) = (0, π) and (θ̅_A, θ̅_B) = (π, 0) corresponding to the two symmetric antiferromagnetic configurations with m_s = ± s. If ω_x is kept fixed and ω_z is increased until ω_z / (s Γ) > 2, the two global minima merge into a single one at (θ̅_A, θ̅_B) = (π/2, π/2) corresponding to a paramagnetic configuration with m_s = 0. The coalescence of minima is indicative of a second-order phase transition. Contrarily, if ω_z is kept fixed and ω_x is increased a new local minimum progressively develops at (θ_A, θ_B) = (0, 0) until at ω_x / (s Γ) ≥ 1 it becomes the global minimum, corresponding to a paramagnetic configuration with m_s = 0. The former global minima become local minima. The formation of a new local minimum that grows until becoming the global minimum is indicative of a first-order phase transition. In summary, Fig. <ref> shows that the energy landscape encoded in e(θ_A, θ_B) displays the typical behavior associated with first- and second-order phase transitions.
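A minimal grid-based sketch of this two-angle minimization, tracking m_s across the first-order line at fixed ω_z/(sΓ) = 0.2 (all parameter values illustrative), is:

import numpy as np

s, Gamma = 0.5, 1.0
wz = 0.2 * s * Gamma                       # omega_z/(s Gamma) = 0.2

def e(tA, tB, wx):
    return (-s * wz / 2 * (np.sin(tA) + np.sin(tB))
            - s * wx / 2 * (np.cos(tA) + np.cos(tB))
            - s**2 * Gamma / 4 * (np.cos(tA) - np.cos(tB))**2)

th = np.linspace(0.0, np.pi, 401)
TA, TB = np.meshgrid(th, th, indexing="ij")
for wx_frac in (0.8, 1.2):                 # below and above the first-order line
    E = e(TA, TB, wx_frac * s * Gamma)
    iA, iB = np.unravel_index(np.argmin(E), E.shape)
    m_s = s * (np.cos(th[iA]) - np.cos(th[iB])) / 2
    print(f"omega_x/(s Gamma) = {wx_frac}: |m_s| = {abs(m_s):.3f}")

The first value of ω_x should return |m_s| close to s (antiferromagnetic minimum) and the second should return m_s = 0 (paramagnetic minimum), reproducing the discontinuous jump described above.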
We find that the phase diagram predicted by this simple classical model coincides with the phase diagram computed exactly and shown in Fig. <ref>. It is therefore an equivalent formulation of the minimization problem presented in Eq. (<ref>), in terms of two variables θ_A, θ_B instead of the single minimization variable u. In this sense, we can understand that Eq. (<ref>) provides the most concise formulation of the minimization problem, as a univariate function of the order parameter, that lends itself particularly well to the analysis in terms of the Landau theory of phase transitions that we performed in Sec. <ref>.
§ VERIFICATION USING EXACT DIAGONALIZATION
For α = 0 there are two equivalent descriptions of the model in terms of either individual spins at each site, as in Hamiltonian (<ref>), or in terms of total spin operators of each sublattice, as in Hamiltonian (<ref>). As explained in App. <ref>, the total spin of each sublattice, J_Λ^2, is a conserved quantity and the Hamiltonian is block diagonal with blocks of fixed total spins. Here we diagonalize the subspace of maximum total spins, j_A = j_B = sN/2 and compare its low-energy spectrum to the low-energy spectrum of the full Hamiltonian (<ref>). In Fig. <ref> we show that the ground-state energies of the two Hamiltonians coincide, with differences appearing in the multiplicity of excited states. We show this for N = 10 and s = 1/2 fixing ω_x / (sΓ) = 0.2 and varying ω_z / (s Γ) and vice versa, but the same behavior is observed for any accessible system size, N, and spin size, s, and for any values of ω_x/ (s Γ) and ω_z/ (s Γ) of the phase diagram. This implies that the low energy physics are well captured by the subspace of maximum total spins, j_A = j_B = sN/2. This fact is exploited in App. <ref> to provide a classical description of the ground state phase diagram.
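A small self-contained sketch of this check for s = 1/2 and α = 0 is given below: it builds the site-basis Hamiltonian in the total-spin form of Eq. (<ref>) and the two-large-spin block with j_A = j_B = sN/2, and compares their ground-state energies. N and the field values are illustrative; both representations share the same constant from the squared total-spin term, so the ground-state energies should coincide:

import numpy as np

N, Gamma = 8, 1.0
wx, wz = 0.2 * 0.5 * Gamma, 0.5            # omega_x/(s Gamma) = 0.2, s = 1/2

# site basis (dimension 2^N)
sx = np.array([[0.0, 1.0], [1.0, 0.0]]) / 2
sz = np.array([[1.0, 0.0], [0.0, -1.0]]) / 2

def site_op(op, i):
    out = np.array([[1.0]])
    for n in range(N):
        out = np.kron(out, op if n == i else np.eye(2))
    return out

JAx = sum(site_op(sx, i) for i in range(0, N, 2))   # even sublattice
JBx = sum(site_op(sx, i) for i in range(1, N, 2))   # odd sublattice
Jz = sum(site_op(sz, i) for i in range(N))
H_site = -wz * Jz - wx * (JAx + JBx) - Gamma / N * (JAx - JBx) @ (JAx - JBx)

# two-large-spin block, j_A = j_B = sN/2 = 2 (dimension (2j+1)^2 = 25)
def spin_xz(j):
    m = np.arange(j, -j - 1, -1)
    jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), 1)
    return (jp + jp.T) / 2, np.diag(m)     # Jx, Jz

jx, jz = spin_xz(N / 4)
I = np.eye(jx.shape[0])
JA, JB = np.kron(jx, I), np.kron(I, jx)
Jz2 = np.kron(jz, I) + np.kron(I, jz)
H_big = -wz * Jz2 - wx * (JA + JB) - Gamma / N * (JA - JB) @ (JA - JB)

print(np.linalg.eigvalsh(H_site)[0], np.linalg.eigvalsh(H_big)[0])   # equal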
§ NUMERICAL PARAMETERS
The visual transformer (ViT), the variational ansatz used to obtain the numerical results discussed in the main text, is a type of neural network composed of several blocks. A detailed block diagram with all the operations involved, as well as the hyperparameters that define the architecture, can be found in Ref. <cit.>. For this work the only modification that has been made is to add more attention blocks, the so-called core blocks in said reference, which have been fundamental to capture all the long-range correlations presented by the model. Specifically, we have used three of those blocks. The new training procedure is described in the following:
The optimization process is carried out by Stochastic Gradient Descent (SGD) with custom schedules for the learning rate, λ, combined with the Stochastic Reconfiguration (SR) method <cit.>, characterized by a stabilizing parameter called diagonal shift that we denote as Δ_sr. The training protocol for λ consists of a linear warm-up followed by an exponential decay. This protocol is defined by an initial value of the learning rate, λ_0, a maximum value, λ_max, that it attains after n_warm iterations corresponding to the linear warm-up, and a ratio γ for the exponential decay. To obtain the results presented throughout this manuscript, a maximum of 500 iterations were used for training, along with the following parameters: λ_0 = 0.1, λ_max = 6.0, n_warm = 150, γ = 0.999. The stabilizer parameter present in the SR method was kept constant with the value of Δ_sr = 10^-4.
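For concreteness, one plausible reading of this learning-rate protocol in code form (the exact functional shape of the warm-up and decay is our assumption; only the parameter values are quoted above):

def learning_rate(t, lam0=0.1, lam_max=6.0, n_warm=150, gamma=0.999):
    # linear warm-up to lam_max, then exponential decay with ratio gamma
    if t < n_warm:
        return lam0 + (lam_max - lam0) * t / n_warm
    return lam_max * gamma ** (t - n_warm)

schedule = [learning_rate(t) for t in range(500)]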
§ ADDITIONAL NUMERICAL RESULTS
Figure <ref> includes further numerical results that reinforce the points discussed in section <ref> of the main text. There we show different vertical slices of the phase diagram for different values of α. These slices allow us to monitor how the tricritical point evolves along the parameter space. These results show that the first-order transition extends to non-negligible transverse fields such as ω_z/(sΓ) = 0.4 beyond the strong long-range regime, for α = 1.5. The first-order phase transition is present up to α = 2.5 and a non-zero transverse field of ω_z/(sΓ) = 0.2. In the cases of α = 0.5 and α = 1.0 we see how the numerical instabilities are alleviated by increasing the transverse field strength.
Nonlinear anomalous Hall effect in three-dimensional chiral fermions

Azaz Ahmad, Gautham Varma K., and Gargee Sharma
School of Physical Sciences, Indian Institute of Technology Mandi, Mandi 175005, India.
http://arxiv.org/abs/2409.02985v1
§ ABSTRACT
Chiral fermionic quasiparticles emerge in certain quantum condensed matter systems such as Weyl semimetals, topological insulators, and spin-orbit coupled noncentrosymmetric metals. Here, a comprehensive theory of the chiral anomaly-induced nonlinear anomalous Hall effect (CNLAHE) is developed for three-dimensional chiral quasiparticles, advancing previous models by rigorously including momentum-dependent chirality-preserving and chirality-breaking scattering processes and global charge conservation. Focusing on two specific systems–Weyl semimetals (WSMs) and spin-orbit coupled non-centrosymmetric metals (SOC-NCMs), we uncover that the nonlinear anomalous Hall conductivity in WSMs shows nonmonotonic behavior with the Weyl cone tilt and experiences a `strong-sign-reversal' with increasing internode scattering, diverging from earlier predictions. For SOC-NCMs, where nonlinear anomalous Hall conductivity has been less explored, we reveal that unlike WSM, the orbital magnetic moment alone can drive a large CNLAHE with distinctive features: the CNLAH conductivity remains consistently negative regardless of interband scattering intensity and exhibits a quadratic dependence on the magnetic field, contrasting the linear dependence in WSMs. Furthermore, we discover that in SOC-NCMs the Zeeman coupling of the magnetic field acts like an effective tilt term which can further enhance the CNLAH current.
These findings offer fresh insights into the nonlinear transport dynamics of chiral quasiparticles and can be verified in upcoming experiments on such materials.
§ INTRODUCTION
The concept of chiral particles originates from high-energy physics <cit.>.
While electrons, protons, and neutrons have chiral aspects in their interactions and internal structure, they are not fundamentally chiral particles because of their finite mass. On the other hand, the existence of massless chiral fermions is now well-established in condensed matter systems <cit.>. They emerge in certain materials as quasiparticles exhibiting behavior analogous to the theorized chiral fermions in particle physics. Two prominent examples of these materials are topological insulators (TIs) <cit.> and Weyl semimetals (WSMs) <cit.>. TIs have gapped bulk states, while their boundary states are massless and chiral.
In contrast, WSMs have gapless bulk chiral states (Weyl fermions) that are topologically protected by a non-vanishing Chern number, which is equivalent to the chirality quantum number.
Nielsen & Ninomiya, who first studied the regularization of Weyl fermions on a lattice, showed that they must occur in pairs of opposite chiralities <cit.>, thus leading to the conservation of both chiral charge and global charge in absence of any gauge fields. Probing the chirality of the emergent Weyl fermions in WSMs has been of utmost theoretical and experimental interest since the past decade <cit.>.
In the presence of external electromagnetic fields, the chiral charge is not conserved, which is the celebrated chiral anomaly (CA) or the Adler-Bell-Jackiw (ABJ) anomaly <cit.> of Weyl fermions. The non-conservation of chiral charges leads to an anomaly-induced current that may be verified in WSMs by measuring its transport and optical properties <cit.>. Interestingly, CA has been proposed to occur in systems that are not WSMs <cit.>. The quasiparticles, in this case, are not necessarily massless but have a notion of chirality due to their underlying spinor structure.
This has led to the generalization that CA may manifest in any system with nonzero Berry flux through the Fermi surface, irrespective of the energy dispersion, number of Weyl nodes, or the underlying symmetries of the Hamiltonian <cit.>. A specific example is that of spin-orbit-coupled (SOC) non-centrosymmetric metals (NCMs) that host nonrelativistic fermions but have multiple Fermi surfaces with fluxes of opposite Berry curvature <cit.>. A few recent studies have investigated CA-induced electronic and thermal transport in SOC-NCMs <cit.>.
While some band properties in SOC-NCMs may be similar to those of WSMs, their transport responses are strikingly different <cit.>.
The transport of chiral quasiparticles in condensed matter systems is affected by two key scattering processes: (i) chirality-breaking, and (ii) chirality-preserving scattering. In the context of WSMs, these are also known as (i) internode scattering (chirality-breaking), and (ii) intranode scattering (chirality-preserving), respectively, as Weyl fermions of opposite chiralities live at different Weyl nodes (or valley points) in the momentum space. Since SOC-NCMs have just one relevant nodal point, but with multiple Fermi surfaces with opposing fluxes of the Berry curvature, the two types of scattering mechanisms refer to (i) interband (chirality-breaking) and (ii) intraband (chirality-preserving) scattering, respectively.
Notably, the chiral anomaly manifests via the first process–the chirality-breaking scattering, which is governed by the corresponding scattering timescale τ_inter. The second process, which preserves chirality and is not directly related to the anomaly, is governed by a timescale τ_intra. Nevertheless, a series of earlier works <cit.> have primarily focused on the role of τ_intra while investigating CA-induced transport, while neglecting τ_inter. This is sometimes justified by stating that the chirality preserving scattering often dominates, i.e., 1/τ_inter≪ 1/τ_intra.
But even in the approximation that τ_inter≫τ_intra, the analysis in most of the previous studies is flawed for two main reasons: (i) they neglect global charge conservation, and (ii) they assume a momentum-independent scattering time, which was recently shown to be inaccurate for chiral quasiparticles <cit.>.
Recent studies have refined the understanding and analysis of transport in chiral Weyl fermions by moving beyond previous assumptions, leading to some striking and significant predictions in linear magnetotransport <cit.>.
Apart from inducing currents proportional to the applied field (linear response), chirality-violating processes can induce nonlinear effects as well, such as the nonlinear Hall effect <cit.>. In an inversion symmetry-broken Weyl semimetal with tilted Weyl cones, a nonlinear Hall effect can be induced by the chiral anomaly, known as the chiral anomaly-induced nonlinear anomalous Hall effect (CNLAHE), which is the combined effect of the Berry curvature-induced anomalous velocity 𝐯_anom=(e/ħ)𝐄×Ω_𝐤 <cit.> and the chiral anomaly. The effect is nonzero when the Fermi surface is asymmetric and the Hamiltonian exhibits broken inversion symmetry. In WSMs, the tilt of the Weyl cone creates an asymmetric Fermi surface around the projection of the Weyl node on the Fermi surface. It is important to note that the chiral anomaly-induced nonlinear Hall effect (CNLAHE) is distinct from the CNLHE caused by the Berry curvature dipole (BCD) <cit.>, as the latter can occur even without an external magnetic field.
Previous works on CNLAHE <cit.> assume that the internode scattering rate is much lower than the intranode scattering rate, or in other words τ_inter≫τ_intra, thereby neglecting the role of internode scattering. Furthermore, the analysis suffers from the aforementioned shortcomings: (i) neglecting global charge conservation, and (ii) assumption of a momentum-independent scattering time, both of which breakdown for chiral quasiparticles of multiple flavors.
In this work, we present a complete theory of the nonlinear anomalous Hall effect, correctly including the effects of chirality-breaking and chirality-preserving scattering, retaining their full momentum dependence, and incorporating global charge conservation. Our theory is generic and works for any system with chiral quasiparticles of multiple flavors, however, we focus on two particular systems of experimental interest: (i) Weyl semimetal, and (ii) spin-orbit coupled noncentrosymmetric metal. We find that in Weyl semimetals the nonlinear anomalous Hall conductivity is a nonmonotonic function of the Weyl cone tilt, which is in contrast to earlier studies <cit.>. Furthermore, we also find that sufficiently strong internode scattering (not considered in earlier works) flips the sign of conductivity leading to `strong-sign-reversal'. Additionally, we also examine the effect of strain (also not considered in prior works), and find that strain-induced chiral gauge field also gives rise to nonlinear anomalous Hall effect but without any `strong-sign-reversal'.
The nonlinear anomalous Hall conductivity has not been earlier analyzed in spin-orbit coupled non-centrosymmetric metals, which forms another important focus of this work. While, the chiralities of quasiparticles in WSMs and SOC-NCMs is exactly the same, their nonlinear current response is remarkably distinct from each other.
Interestingly, we discover that unlike WSMs, the anomalous orbital magnetic moment in SOC-NCMs can drive a large CNLAH current when the electric and magnetic fields are noncollinear. We further find that including the effect of Zeeman coupling of the magnetic field acts like an effective tilt term which tilts the Fermi surfaces of both the chiral flavors in the same direction, thereby further enhancing the CNLAH current. We highlight significant differences between the nonlinear conductivity obtained for WSMs and SOC-NCMs. First, CNLAHE can be driven in SOC-NCMs by anomalous orbital magnetic moment, unlike WSMs where the cones must be necessarily tilted. Second, CNLAH conductivity in WSMs flips its sign with sufficiently strong internode scattering, unlike SOC-NCMs where CNLAH conductivity remains always negative even for sufficiently high interband scattering (although both these processes break the quasiparticle chirality). Third, CNLAH conductivity is linear in B for WSMs but is quadratic in B for SOC-NCMs. Lastly, the angular dependence of CNLAH conductivity is strikingly different from that of WSMs.
In Section II, we present the Boltzmann theory, where an analytical ansatz for the electron distribution function is derived. Sections III and IV discuss the CNLAH conductivity in WSMs and SOC-NCMs, respectively. We conclude in Section V.
§ MAXWELL-BOLTZMANN TRANSPORT THEORY
We use the semiclassical Maxwell-Boltzmann formalism to describe the dynamics of three-dimensional chiral fermions in the presence of external electric and magnetic fields. The non-equilibrium distribution function f^χ_𝐤 describing fermions with chirality χ, evolves as:
∂ f^χ_𝐤/∂ t+ 𝐫̇^χ_𝐤·∇_𝐫f^χ_𝐤+𝐤̇^χ·∇_𝐤f^χ_𝐤=I_coll[f^χ_𝐤],
with f^χ_𝐤 = f_0 + g^χ_𝐤 + h^χ_𝐤, where f_0 is the standard Fermi-Dirac distribution, and g^χ_𝐤 and h^χ_𝐤 are deviations up to the first and second order in the electric field (E), respectively. Without loss of generality, we fix the electric field along the z-direction and express the deviations as:
g^χ_𝐤 = -e(∂ f_0∂ϵ)Λ^χ_𝐤 E
h^χ_𝐤 = -e(∂ g^χ_𝐤∂ϵ)Γ^χ_𝐤 E
=e^2((∂^2 f_0/∂ϵ^2)Λ^χ_𝐤+(∂Λ^χ_𝐤/∂ϵ)(∂ f_0/∂ϵ)) Γ^χ_𝐤 E^2
,
where Λ^χ_𝐤 and Γ^χ_𝐤 are the unknown functions to be evaluated, and all their derivatives with respect to energy are taken at the Fermi surface in the limit T→ 0.
The right-hand side of Eq. <ref>, i.e., the collision integral, incorporates both chirality-breaking and chirality-preserving scattering and is expressed as:
I_coll[f^χ_𝐤]=∑_χ' 𝐤'𝐖^χχ'_𝐤 𝐤'(f^χ'_𝐤'-f^χ_𝐤),
where the scattering rate 𝐖^χχ'_𝐤 𝐤' is calculated using Fermi's golden rule:
𝐖^χχ'_𝐤 𝐤' = 2π n/ħ𝒱|⟨u^χ'(𝐤')|U^χχ'_𝐤 𝐤'|u^χ(𝐤)⟩|^2×δ(ϵ^χ'(𝐤')-ϵ_F).
In the above expression n is the impurity concentration, 𝒱 is the system volume, |u^χ(𝐤)⟩ is the chiral spinor, U^χχ'_𝐤 𝐤' is the scattering potential profile, and ϵ_F is the Fermi energy. We choose U^χχ'_𝐤 𝐤'= I_2×2U^χχ' for elastic impurities, where U^χχ' distinguishes chirality-breaking and chirality-preserving scattering, which can be controlled in our formalism. We denote the relative magnitude of chirality-breaking to chirality-preserving scattering by the ratio α = U^χχ'≠χ/U^χχ. In the context of WSMs, α denotes the ratio of internode to intranode scattering strength, while for SOC-NCMs it denotes the ratio of interband to intraband scattering strength.
In the presence of electric (𝐄) and magnetic (𝐁) fields, semiclassical dynamics of the quasiparticles are modified and governed by the following equation <cit.>:
𝐫̇^χ = 𝒟^χ_𝐤( e/ħ(𝐄×Ω^χ_𝐤) + e/ħ(𝐯^χ_𝐤·Ω^χ_𝐤) 𝐁 + 𝐯_𝐤^χ)
𝐩̇^χ = -e 𝒟^χ_𝐤( 𝐄 + 𝐯_𝐤^χ×𝐁 + e/ħ (𝐄·𝐁) Ω^χ),
where 𝐯_𝐤^χ = (ħ^-1)∂ϵ^χ(𝐤)/∂𝐤 is the band velocity, Ω^χ_𝐤 = -χ𝐤 /2k^3 is the Berry curvature, and 𝒟^χ_𝐤 = (1+e𝐁·Ω^χ_𝐤/ħ)^-1 is the factor by which the density of states is modified due to the presence of the Berry curvature. Self-rotation of the Bloch wave packet also gives rise to an orbital magnetic moment (OMM) 𝐦^χ_𝐤 <cit.>. We rotate the magnetic field in the xz-plane: 𝐁 = B (cosγ,0,sinγ), i.e., for γ=π/2 both the fields are parallel to each other.
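A minimal sketch of these geometric ingredients for a single Weyl node (parameter values illustrative; SI units) is:

import numpy as np

hbar, e_ch, vF = 1.054e-34, 1.602e-19, 1.0e5   # vF is an illustrative value

def berry_curvature(k, chi):
    kn = np.linalg.norm(k)
    return -chi * k / (2 * kn**3)              # Omega^chi_k = -chi k / 2k^3

def orbital_moment(k, chi):
    kn = np.linalg.norm(k)
    return -chi * e_ch * vF * k / (2 * kn**2)  # m^chi_k = -chi e vF k / 2k^2

def D_factor(k, chi, B):
    # density-of-states factor D = (1 + e B.Omega/hbar)^(-1)
    return 1.0 / (1.0 + e_ch * np.dot(B, berry_curvature(k, chi)) / hbar)

k = np.array([1e8, 0.0, 1e8])                  # ~ Fermi wavevector scale (1/m)
B = np.array([0.0, 0.0, 1.0])                  # 1 T along z
print(D_factor(k, +1, B), D_factor(k, -1, B))
print(orbital_moment(k, +1))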
The focus of this work is to investigate the effect of the chiral-anomaly term, and we therefore neglect the Lorentz force term. This also allows us to make analytical progress. We point out that this approximation becomes exact in the limit γ→π/2. Even for γ<π/2, the magnitude of the Lorentz force contribution is comparatively small <cit.>.
Keeping terms up to the second order in the electric field, the Boltzmann transport equation reduces to the following set of equations:
𝒟^χ_𝐤[v^χ,z_𝐤+eBsinγ/ħ(𝐯^χ_𝐤·Ω^χ_k)]
= ∑_χ' 𝐤'𝐖^χχ'_𝐤 𝐤'(Λ^χ'_𝐤'-Λ^χ_𝐤).
𝒟^χ_𝐤∂/∂ϵ^χ_𝐤(∂ f_0/∂ϵ^χ_𝐤 Λ^χ_𝐤)
[v^χ,z_𝐤+eBsinγ/ħ(𝐯^χ_𝐤·Ω^χ_k)]=
∑_χ' 𝐤'𝐖^χχ'_𝐤 𝐤'(Γ^χ'_𝐤'∂/∂ϵ^χ'_𝐤'(∂ f_0/∂ϵ^χ'_𝐤'Λ^χ'_𝐤')-Γ^χ_𝐤∂/∂ϵ^χ_𝐤(∂ f_0/∂ϵ^χ_𝐤Λ^χ_𝐤))
Eq. <ref> can be solved for Λ^χ, which can then be used to solve for Γ^χ in Eq. <ref>; the distribution function then follows from Eq. <ref>. Once the distribution function is known, the current density is evaluated as:
𝐉=-e∑_χ,𝐤 f^χ_𝐤𝐫̇^χ.
We primarily focus on the second-order anomalous Hall response induced by the chiral anomaly, which is given by
𝐉^CNLAH=-e^2/ħ∑_χ,𝐤𝒟^χ_𝐤 g^χ_𝐤 (𝐄×𝐯^χ_𝐤)
To evaluate all the different responses, 𝐉^CNLAH is written as:
J_α = ∑_β, χσ^χ_αβ E_βE_β,
with α, β = x,y,z. Comparison of Eq. <ref> and Eq. <ref> gives the different components of the nonlinear conductivity tensor (σ^χ_αβ). For 𝐄 =E ẑ, the anomalous velocity (𝐯^χ_anom∼𝐄×Ω^χ_𝐤) has components in the xy-plane. Since we rotate the magnetic field in the xz-plane, we measure the Hall response along the y-direction, i.e., we evaluate σ_zy. A component of the nonlinear current is also generated along the x-direction (σ_zx), which contributes to the planar nonlinear Hall effect, and is seen to vanish.
Moving on, we define the chiral scattering rate as follows:
1/τ^χ(θ,ϕ)=∑_χ'𝒱∫d^3𝐤'/(2π)^3(𝒟^χ'_𝐤')^-1𝐖^χχ'_𝐤 𝐤'.
𝐖^χχ'_𝐤 𝐤' is defined in Eq. <ref> and the corresponding overlap of the Bloch wave function is given by the following expression:
𝒢^χχ'(θ,ϕ) = [1+χχ'(cosθcosθ' + sinθsinθ'cosϕcosϕ' + sinθsinθ'sinϕsinϕ')]. Note that this expression for 𝒢^χχ'(θ,ϕ) holds for both our systems of interest: WSM and SOC-NCM. For chiral particles with a different spinor structure, 𝒢^χχ'(θ,ϕ) should be appropriately modified.
Taking the Berry phase into account and the corresponding change in the density of states, ∑_k⟶𝒱∫d^3𝐤/(2π)^3𝒟^χ_𝐤, Eq. <ref> becomes:
l^χ(θ,ϕ) + Λ^χ(θ,ϕ)/τ^χ(θ,ϕ)
=∑_χ'𝒱∫d^3𝐤'/(2π)^3𝒟^χ'_𝐤'𝐖^χχ'_𝐤 𝐤'Λ^χ'(θ',ϕ').
Here, l^χ(θ,ϕ)=𝒟^χ_𝐤[v^χ_z,𝐤+eBsinγ(Ω^χ_𝐤·𝐯^χ_𝐤)/ħ], evaluated at the Fermi surface. Eq. <ref> and the right-hand side of Eq. <ref> reduce to integrations over θ' and ϕ':
1/τ^χ(θ,ϕ) = 𝒱∑_χ'Π^χχ'∬(k')^3sinθ'/|𝐯^χ'_𝐤'·𝐤'^χ'|dθ'dϕ' 𝒢^χχ'(𝒟^χ'_𝐤')^-1,
and
𝒱∑_χ'Π^χχ'∬ f^χ'(θ',ϕ') 𝒢^χχ' dθ' dϕ'×[d^χ' - l^χ'(θ',ϕ')
+ a^χ'cosθ' + b^χ'sinθ' cosϕ' + c^χ'sinθ'sinϕ'],
where, Π^χχ' = N|U^χχ'|^2 / 4π^2 ħ^2 and f^χ (θ,ϕ)=(k)^3/|𝐯^χ_𝐤·𝐤^χ|sinθ (𝒟^χ_𝐤)^-1τ^χ(θ,ϕ). Using the ansatz Λ^χ_𝐤=[d^χ-l^χ(θ,ϕ) + a^χcosθ +b^χsinθcosϕ+c^χsinθsinϕ]τ^χ(θ,ϕ), the above equation can be written in the following form:
d^χ+a^χcosθ+b^χsinθcosϕ+c^χsinθsinϕ
=𝒱∑_χ'Π^χχ'∬ f^χ'(θ',ϕ')dθ'dϕ'
×[d^χ'-l^χ'(θ',ϕ')+a^χ'cosθ'+b^χ'sinθ'cosϕ'+c^χ'sinθ'sinϕ'].
When written out explicitly, this yields seven independent simultaneous equations for eight unknowns. Particle number conservation provides the additional constraint.
∑_χ∑_𝐤 g^χ_𝐤 = 0
For the eight unknowns (d^± 1, a^± 1, b^± 1, c^± 1), Eq. <ref> and Eq. <ref> are simultaneously solved with Eq. <ref>. The nonlinear Hall conductivity induced by chiral anomaly is then evaluated using Eq. <ref> and Eq. <ref>.
§ CHIRAL NONLINEAR ANOMALOUS HALL EFFECT IN WSMS
§.§ Low-energy Hamiltonian
We begin with the following low-energy Hamiltonian of a Weyl semimetal:
H_WSM(𝐤)= ∑_χ=±1χħ v_F𝐤·σ + ħ v_F(t_z^χ k_z + t_x^χ k_x )I_2×2,
where χ is the chirality of the Weyl node, ħ is the reduced Planck constant, v_F is the Fermi velocity, 𝐤 is the wave vector measured from the Weyl node, σ is the vector of Pauli matrices, and t_x,z are the tilt parameters. The energy dispersion is given by:
ϵ^χ_k=±ħ v_F|k| + ħ v_F(t^χ_z k_z+t^χ_x k_x).
The constant energy Fermi contour, which is the locus of all points with constant energy ϵ, can then be evaluated to be
k^χ = ϵ+√(ϵ^2 - n^χχ e v_F B β_θϕ)/n^χ.
Here, n^χ = 2 ħ v_F + 2 t_xħ v_Fsinθcosϕ + 2 t_zħ v_Fcosθ, and β_θϕ = sinθcosϕcosγ + cosθsinγ.
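The sketch below implements this contour and performs a round-trip check that the OMM-corrected dispersion returns ϵ_F on the contour, in units ħ = v_F = e = 1 with illustrative parameter values:

import numpy as np

tx, tz, B, gam, epsF, chi = 0.2, 0.0, 0.05, np.pi / 2, 1.0, +1   # hbar = vF = e = 1

def beta_fn(th, ph):
    return np.sin(th) * np.cos(ph) * np.cos(gam) + np.cos(th) * np.sin(gam)

def k_F(th, ph):
    n = 2 * (1 + tx * np.sin(th) * np.cos(ph) + tz * np.cos(th))
    return (epsF + np.sqrt(epsF**2 - n * chi * B * beta_fn(th, ph))) / n

def energy(k, th, ph):
    tilt = tx * np.sin(th) * np.cos(ph) + tz * np.cos(th)
    return k * (1 + tilt) + chi * B * beta_fn(th, ph) / (2 * k)   # band term - m.B

for th, ph in [(0.3, 0.1), (1.2, 2.0), (2.5, 4.0)]:
    assert np.isclose(energy(k_F(th, ph), th, ph), epsF)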
The topological nature of the following Bloch states of the Hamiltonian in Eq. <ref>: |u^+⟩^T=[e^-iϕcos(θ /2),sin(θ/2)], |u^-⟩^T=[-e^-iϕsin(θ /2),cos(θ/2)], gives rise to a nonzero flux of the Berry curvature Ω^χ_𝐤=-χ𝐤/2k^3, and the self-rotation of the wave packet gives the anomalous orbital magnetic moment (OMM) 𝐦^χ_𝐤=-χ e v_F𝐤/2k^2. Due to the anomalous orbital magnetic moment, the energy dispersion is modified in the presence of the external magnetic field: ϵ^χ_k →ϵ_k - 𝐦^χ_𝐤·𝐁. This changes the spherical Fermi surface to an egg-shaped Fermi surface, as schematically displayed in Fig. <ref>.
Due to the change in the dispersion, the band velocity components are also altered, which we evaluate to be:
v^χ_x =v_Fk_x/k+v_Ft^χ_x
+u^χ_2/k^2(cosγ(1-2k^2_x/k^2)+sinγ(-2k_x k_z/k^2)),
v^χ_y =v_Fk_y/k
+u^χ_2/k^2(cosγ(-2k_x k_y/k^2)+sinγ(-2k_y k_z/k^2)),
v^χ_z =v_Fk_z/k+v_Ft^χ_z
+u^χ_2/k^2(cosγ(-2k_x k_z/k^2)+sinγ(1-2k^2_z/k^2)),
with u^χ_2=χ e v_F B/(2ħ).
§.§ Weak and strong sign-reversal
The sign of the longitudinal magnetoconductivity (LMC) in Weyl materials has been intensely investigated in prior literature. While the chiral anomaly in untilted Weyl semimetals is predicted to yield positive LMC for weak internode scattering, the LMC reverses sign for sufficiently strong internode scattering <cit.>. On the other hand, even a small amount of tilting of the Weyl cone can result in negative LMC along a particular direction of the magnetic field even for weak internode scattering. However, the reversal in sign in these two cases is fundamentally quite different, which leads to the classification of `strong-sign-reversal' and `weak-sign-reversal' as defined in Ref. <cit.>. We briefly review it here. A general expression for the magnetoconductivity tensor can be written as <cit.>
σ_ij(B)= σ_ij^(0)+ σ_ij^(2) (B-B_0)^2,
which incorporates (i) normal quadratic B-dependence, (ii) linear-in-B dependence and sign change along a particular direction of the magnetic field, and (iii) quadratic-in-B dependence with negative sign, in a single framework.
The features characterizing `weak-sign-reversal' include (i) B_0 ≠ 0, (ii) σ_ij^(0)≠σ_ij(B=0), and (iii) sign σ_ij^(2)>0. In this case, the vertex of the magnetoconductivity parabola is shifted from the origin, and the conductivity is of different signs for small positive and negative magnetic fields. However, the orientation of the parabola is still positive, i.e., sign (σ_ij^(2))>0.
`Strong-sign-reversal' is characterized by sign (σ_ij^(2))<0, which implies a complete reversal of the orientation of the parabola. Tilting of Weyl cones can result in `weak-sign-reversal' while intervalley scattering or strain is generally expected to result in `strong-sign-reversal' <cit.>. Fig. <ref> schematically explains the distinction between the two cases.
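This classification lends itself to a simple fitting procedure; a sketch on synthetic data (the quadratic-fit extraction of B_0, σ^(0), and σ^(2) is our illustrative construction) is:

import numpy as np

B = np.linspace(-1.0, 1.0, 101)
sigma = 0.5 + 0.8 * B + 1.2 * B**2            # synthetic weak-sign-reversal data

c2, c1, c0 = np.polyfit(B, sigma, 2)
B0 = -c1 / (2 * c2)
sigma0 = c0 - c1**2 / (4 * c2)                # sigma = sigma0 + c2 (B - B0)^2
if c2 < 0:
    label = "strong sign reversal"
elif not np.isclose(B0, 0.0):
    label = "weak sign reversal"
else:
    label = "no sign reversal"
print(f"B0 = {B0:+.3f}, sigma^(2) = {c2:+.3f}, sigma^(0) = {sigma0:+.3f} -> {label}")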
§.§ Nonlinear anomalous Hall conductivity
We are now in a position to discuss the chiral anomaly-induced nonlinear anomalous Hall conductivity in WSMs.
When the Weyl cones are untilted and the effect of the orbital magnetic moment is excluded, symmetry considerations dictate that the net anomalous velocity vector, and hence the CNLAH current, vanishes at each node.
When effects of the orbital magnetic moment are included, the Fermi surface becomes egg-shaped (see Fig. <ref>) and the net anomalous velocity vector does not vanish, resulting in a nonzero CNLAH current at each node. However, the net current vanishes when the contribution from both nodes is added up because the current at both nodes is of equal magnitudes and opposite signs. Now, when the Weyl cones are further tilted, the nonlinear Hall current at both nodes is unequal in magnitude, resulting in a net non-zero CNLAH current. Fig. <ref> schematically presents cross-sectional views of the Fermi surface of a Weyl semimetal, highlighting the mechanism resulting in a nonvanishing CNLAH current.
In Fig. <ref>, we plot the CNLAH conductivity σ_zy as function of t_x and α for γ=π/2 (parallel electric and magnetic fields). We first note that the CNLAH conductivity is an odd function of tilt t_x, in accordance with the findings of Ref. <cit.>. However, we find that CNLAH conductivity is non-monotonic as a function of tilt t_x, which is in striking contrast to Ref. <cit.> that reports monotonic behavior with respect to t_x.
We find that even small intervalley scattering results in non-monotonicity. The CNLAH conductivity first increases as a function of t_x and then decreases after reaching a maximum. Furthermore, for a fixed value of t_x, increasing the internode scattering strength α, the magnitude of the CNLAH conductivity decreases and eventually flips sign after a critical value α_c, i.e. displays `strong-sign-reversal'. These twin effects cause a prominent `half-lung' like pattern as shown in Fig. <ref>(a). In Fig. <ref>(b), we plot CNLAH conductivity as a function of B at different values of the internode scattering strength α for fixed tilt. It is clear that increasing α results in sign-reversal of σ_zy. Now, fixing α and increasing the amount of tilt, σ_zy also changes sign as shown in Fig. <ref>(c). The non-monotonicity of the CNLAH effect and the existence of `strong-sign-reversal' are the prominent features we discover, which have been unreported so far. We attribute these to the effects of chirality-violating scattering and global charge conservation that have not been correctly accounted for in earlier studies.
To gain further insight, we plot σ_zy as a function of t_x in Fig. <ref> (a) for different values of the internode scattering strength α. The conductivity is highly non-monotonic–it first increases as a function of t_x and decreases and eventually becomes close to zero when t_x≈ 1. Interestingly, we discover that as α is increased, (i) the conductivity increases, (ii) then quickly falls to zero for some value of t_x<1, (iii) then becomes negative, and (iv) finally approaches zero again when t_x≈ 1. When α is increased, σ_zy falls to zero and becomes negative at smaller and smaller values of t_x. When α is large enough, the conductivity σ_zy eventually reverses sign at t_x≈ 0.
In Fig. <ref> (b), we plot σ_zy as a function of α for different values of the t_x. Remarkably, we find a sweet-spot at α≈ 1/3, where σ_zy≈ 0 for all values of t_x≲ 0.25 v_F.
Having discussed the CNLAH conductivity for collinear electric and magnetic fields and the effect of α, we now discuss the case when 𝐄 and 𝐁 are noncollinear, since in many experimental setups, the effect of rotating the magnetic field is investigated. In Fig. <ref>(a) we plot σ_zy as a function of the tilt t_x and γ for a finite value of α. The conductivity is an odd function of t_x, and is a non-monotonic function of the tilt for all values of γ. In Fig. <ref>(b), we plot σ_zy as a function of α and γ for two different fixed values of t_x. For all angles of the magnetic field γ, the conductivity shows strong-sign-reversal as a function of the intervalley scattering strength α; however, the dependence of the zero-conductivity contour (separating the regions of positive and negative conductivity) on γ is seen to be weak, unlike the linear longitudinal magnetoconductivity, which shows a stronger dependence on γ <cit.>. The dependence of the zero-conductivity contour is stronger on t_x, as we also note from Fig. <ref>.
§.§ Effects of strain
We next discuss the effect of strain on the nonlinear anomalous Hall conductivity. In a topologically protected Weyl semimetal, the Weyl nodes are separated in momentum space by a finite vector 𝐛. The vector 𝐛 is also interpreted as an axial gauge field because of its opposite coupling to Weyl nodes of opposing chiralities <cit.>. A position-dependent 𝐛 vector generates an axial magnetic field (denoted as 𝐁_5=∇×𝐛), which also couples oppositely to Weyl nodes of opposite chirality. Such a scenario can arise if Weyl semimetals are subjected to an inhomogeneous strain profile. The effective magnetic field experienced by a fermion at node χ is therefore 𝐁⟶𝐁+χ𝐁_5. Recent works have studied the role of strain in the longitudinal and planar Hall conductivity <cit.>; however, its role in the nonlinear anomalous Hall conductivity remains unexplored.
In Fig. <ref> we plot the nonlinear Hall conductivity σ_zy as a function of the strain-induced 𝐁_5 field. As the intervalley scattering strength is increased, the conductivity is suppressed. However, unlike Fig. <ref>, where we examined the effect of an external magnetic field, there is no strong-sign-reversal for large values of the intervalley scattering strength.
Furthermore, we find that strain-induced nonlinear σ_zy also changes sign as a function of the tilt parameter as seen in Fig. <ref> (b).
The measurement of the nonlinear anomalous Hall conductivity in Weyl semimetals (i) in the presence of strain but the absence of a magnetic field, and (ii) in the absence of strain but the presence of an external magnetic field, can provide crucial insights into the role and strength of internode scattering. For instance, if the measured nonlinear conductivity is negative in both scenarios, it is strongly suggestive of large internode scattering. Conversely, if the conductivity is positive in one case and negative in the other, it is indicative of weak internode scattering.
§ CNLAH IN SPIN-ORBIT COUPLED NONCENTROSYMMETRIC METALS
§.§ Low-energy Hamiltonian
We begin with the following low-energy Hamiltonian of a spin-orbit coupled noncentrosymmetric metal expanded near the high-symmetry point <cit.>
H(𝐤) = ħ^2 k^2/2m + ħϑ𝐤·σ
where m is the effective mass and ϑ is the spin-orbit coupling parameter. We couple the system to a Zeeman field given by <cit.>
H_z = -𝐌·σ,
where 𝐌 is related to the external magnetic field by 𝐌 = -gμ_B 𝐁/2, where μ_B is the Bohr magneton and g is Landé g-factor (g ∼ 50 <cit.>). The resultant Hamiltonian, including the Zeeman field, becomes
H(𝐤) = ħ^2 k^2/2m + ħϑ(𝐤+𝐌/ħϑ)·σ,
A change of variables (𝐤→𝐤-𝐌/ħϑ) yields
H(𝐤) = ħ^2k^2/2m + ħϑ𝐤·σ + ħϑ(k_x t_x + k_zt_z) + E_0,
where E_0 = M^2/(2mϑ^2) is an irrelevant constant energy shift, t_x = -M_x/(mϑ^2), and t_z = -M_z/(mϑ^2). Remarkably, in the new reference frame, the Hamiltonian resembles that of a tilted Weyl semimetal! The effect of the Zeeman coupling is therefore to tilt the Fermi surfaces, just like the tilted Weyl cones of a Weyl semimetal. Note that the effective tilt is proportional to the strength of the Zeeman coupling and inversely proportional to the effective mass m. Therefore, for a purely relativistic 𝐤·σ Hamiltonian, where the effective mass term ∼ k^2/m→ 0, the effective tilt vanishes. This property distinguishes the noncentrosymmetric metal from an inversion-asymmetric Weyl semimetal, in which the effective mass term ∼ k^2/m is absent.
The energy spectrum is evaluated to be:
ϵ_𝐤^λ = ħ^2 k^2/2m + λħϑ k + ħϑ (k_x t_x + k_z t_z) + E_0,
with λ=±1 representing the two spin-orbit-split bands. We note that both Fermi surfaces are tilted along the same direction as a result of the Zeeman field. To obtain the constant-energy Fermi contour, we need to add the orbital magnetic moment coupling to the energy spectrum and invert Eq. <ref>. This yields a cubic equation in k that needs to be solved for k=k(θ,ϕ). Since the analytical expression is lengthy and uninteresting, we do not provide it here. The change of variables is implemented straightforwardly in the Boltzmann equation. The Jacobian remains invariant, but the constant-energy Fermi contour is appropriately modified while integrating over a constant energy surface in the Boltzmann equation.
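The change of variables can also be verified numerically. The following minimal sketch (with ħ=1 and arbitrary illustrative values of m, ϑ and the Zeeman vector 𝐌 taken in the x–z plane; none of these numbers are material parameters) diagonalizes the 2×2 Hamiltonian of Eq. <ref> at the shifted momentum and compares it with the analytic spectrum above.

```python
import numpy as np

# Illustrative check of the tilted spectrum (hbar = 1; all numbers below are
# arbitrary assumptions, not material parameters).
m, vt = 0.7, 1.3                       # effective mass and SOC strength vartheta
Mz = np.array([0.2, 0.0, -0.5])        # Zeeman vector M, taken in the x-z plane
t = -Mz / (m * vt**2)                  # effective tilt (t_x, t_y = 0, t_z)
E0 = Mz @ Mz / (2 * m * vt**2)         # constant shift E_0 = M^2 / (2 m vartheta^2)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(k):
    """H(k) = k^2/2m + vartheta (k + M/vartheta).sigma, before the shift."""
    kp = k + Mz / vt
    return (k @ k) / (2 * m) * np.eye(2) + vt * (kp[0]*sx + kp[1]*sy + kp[2]*sz)

def spectrum(k):
    """eps_lambda(k) = k^2/2m + lambda vt |k| + vt (k_x t_x + k_z t_z) + E0."""
    base = k @ k / (2 * m) + vt * (k[0]*t[0] + k[2]*t[2]) + E0
    return np.array([base - vt*np.linalg.norm(k), base + vt*np.linalg.norm(k)])

k = np.array([0.3, -0.8, 0.45])
# After the change of variables k -> k - M/vartheta, the two spectra must agree:
print(np.allclose(np.sort(np.linalg.eigvalsh(H(k - Mz/vt))), spectrum(k)))  # True
```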
Without loss of generality, we assume that the chemical potential lies above the nodal point 𝐤=0, and hence the Fermi surface is composed of two disjointed surfaces as shown in Fig. <ref>. Both the surfaces enclose a nontrivial flux of Berry curvature, which is of the same magnitude but opposite sign. This is similar to the case of a Weyl semimetal where the Berry curvature is of the same magnitude but opposite signs at the two valleys. Interestingly, in SOC-NCM, the anomalous orbital magnetic moment
(𝐦^λ_𝐤) has the same sign and magnitude, which is different from a Weyl semimetal, where the signs are reversed at the two nodes. With the application of an external magnetic field, the orbital magnetic moment couples to it as -𝐦^λ_𝐤·𝐁, leading to the oval-shaped Fermi surfaces shown in Fig. <ref>. In a Weyl semimetal, the coupling is opposite in the two valleys, and thus the shapes of the Fermi surfaces are reversed (see Fig. <ref>).
§.§ Nonlinear anomalous Hall conductivity
In Fig. <ref> (a) and (b), we plot a cross-sectional view of the Fermi surface of the SOC-NCM including the effect of the orbital magnetic moment but without Zeeman coupling (which provides the effective tilt term). The net anomalous velocity vector (∼𝐄×Ω_𝐤) at each node does not vanish, since the Fermi surface is no longer symmetric about the k_x-k_y plane. Furthermore, the CNLAH currents at the two nodes are not of equal magnitude (unlike the case of a WSM). This results in a net nonzero CNLAH current, unlike the WSM, where the orbital magnetic moment alone does not produce a nonzero current. Fig. <ref> (c) and (d) depict the effect of including Zeeman coupling, which further introduces an asymmetry in the Fermi surface and enhances the total CNLAH current.
In Fig. <ref>, we plot the nonlinear anomalous Hall conductivity for spin-orbit noncentrosymmetric metal described in Eq. <ref>. We find that the nonlinear conductivity σ_zy is quadratic in the magnetic field, in contrast to a Weyl semimetal where σ_zy is seen to be linear in B. An additional B-dependence enters in SOC-NCMs because (i) the current here is driven by anomalous orbital magnetic moment unlike in WSM where it is driven by a finite tilt, and (ii) the generated effective tilt due to the Zeeman coupling is field-dependent, unlike in WSMs, where a constant tilt that is inherent to the bandstructure is assumed.
Furthermore, in SOC-NCMs, we find the conductivity σ_zy to be negative; it is suppressed with increasing intervalley scattering strength but, importantly, does not flip its sign. This contrasts with the case of a Weyl semimetal, where a strong-sign-reversal is observed. This crucial difference is attributed to the different nature of the orbital magnetic moment in the two cases.
The behavior of σ_zy with the angle of the magnetic field γ in the present case is also of special interest. Unlike the case of a Weyl semimetal, where σ_zy is maximum when γ=π/2 (when the electric and magnetic fields are parallel to each other), in SOC-NCMs the conductivity is maximum when γ=π/4, i.e., when the electric and magnetic fields are at an angle of π/4 with respect to each other.
This important distinction is understood as follows. First, we observe that in WSMs the current is driven by a finite tilt of the Weyl cones along the k_x direction. Therefore, the current is maximum when the magnetic field points along the ẑ-direction, parallel to the electric field. In SOC-NCMs, the current is driven by the anomalous orbital magnetic moment.
Now, when 𝐄∥𝐁∥ẑ, the integral of the anomalous velocity vector (v_anom∝Ω_x∝cosϕ) vanishes due to azimuthal symmetry, and therefore the net anomalous current vanishes as well. In WSMs, the azimuthal symmetry is broken by a finite t_x, even when 𝐄∥𝐁∥ẑ.
§ CONCLUSIONS
In this work, we advance the theoretical understanding of the chiral anomaly-induced nonlinear anomalous Hall effect (CNLAHE) in three-dimensional chiral fermionic systems, with a particular focus on Weyl semimetals (WSMs) and spin-orbit coupled noncentrosymmetric metals (SOC-NCMs). By rigorously incorporating momentum-dependent chirality-preserving and chirality-breaking scattering processes, as well as global charge conservation, we address critical gaps in the existing models, thereby providing a more robust and comprehensive framework for analyzing CNLAHE.
In the context of Weyl semimetals, we uncover a complex, nonmonotonic relationship between the nonlinear anomalous Hall conductivity and the Weyl cone tilt. This behavior is notably sensitive to the strength of internode scattering, leading to a `strong-sign-reversal' of the conductivity. Moreover, we also investigate the effects of strain-induced chiral gauge fields on CNLAHE, demonstrating that while such strain can indeed generate nonlinear Hall effects, it does so without inducing a sign reversal in the conductivity. Experiments performed with and without external strain in WSMs can shed light on the role of internode scattering by comparing the nonlinear anomalous Hall conductivity (NLAHC) in both scenarios.
For spin-orbit coupled noncentrosymmetric metals, we reveal that the anomalous orbital magnetic moment is sufficient to drive a large nonlinear conductivity, which is distinguished by its negative sign, regardless of the strength of interband scattering, and its quadratic dependence on the magnetic field. This behavior starkly contrasts with the linear magnetic field dependence observed in WSMs and highlights the fundamental differences between these two classes of materials. We also identify the Zeeman coupling of the magnetic field as a crucial factor that acts as an effective tilt term, further amplifying the CNLAHE in SOC-NCMs.
The theoretical insights presented in this work extend the current understanding of CNLAHE in chiral quasiparticles and provide a critical foundation for current and upcoming experimental investigations.
http://arxiv.org/abs/2409.02821v1 | 20240904154134 | Establishing CP violation in b-baryon decays | Ji-Xin Yu, Jia-Jie Han, Ya Li, Hsiang-nan Li, Zhen-Jun Xiao, Fu-Sheng Yu | hep-ph | hep-ph, hep-ex
^1MOE Frontiers Science Center for Rare Isotopes, and School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000, People’s Republic of China
^2Department of Physics, College of Sciences, Nanjing Agricultural University, Nanjing 210095, People’s Republic of China
^3Institute of Physics, Academia Sinica, Taipei, Taiwan 115, Republic of China
^4Department of Physics and Institute of Theoretical Physics, Nanjing Normal University, Nanjing 210023, People’s Republic of China
§ ABSTRACT
The CP violation (CPV) in the baryon system has not yet been definitively established. We demonstrate that the individual partial-wave CPVs in the Λ_b→ pπ^-,pK^- decays can exceed 10%, but the cancellation between different partial waves results in the small net direct CPV observed in current experiments. There is thus a high possibility of identifying CPV in b-baryon decays through measurements of partial-wave CPV. This observation is supported by the first full QCD calculation of two-body hadronic Λ_b baryon decays with controllable uncertainties in the perturbative QCD formalism.
Establishing CP violation in b-baryon decays
Ji-Xin Yu^1,
Jia-Jie Han^1 [Corresponding author, Email: [email protected]],
Ya Li^2 [Corresponding author, Email: [email protected]],
Hsiang-nan Li^3,
Zhen-Jun Xiao^4,
Fu-Sheng Yu^1 [Corresponding author, Email: [email protected]]
============================================================================================================================================================================================================================================
Introduction.—
The CP violation (CPV) plays a crucial role in explaining the matter-antimatter asymmetry in the Universe and in searching for New Physics.
The CPVs in K<cit.>, B<cit.> and D<cit.> meson decays, which are attributed to an irreducible phase in the Cabibbo-Kobayashi-Maskawa (CKM) quark-mixing matrix, have been well established and found to be consistent with Standard Model (SM) predictions. By contrast, the CPV in the baryon system has not yet been identified, and
numerous experiments have been conducted to search for baryon CPV. Recent efforts by BESIII yielded the most precise hyperon decay asymmetry A_CP^α (Λ→ p π^-) = -0.002±0.004<cit.>. LHCb achieved the most precise measurement of CPV in charm baryon decays, A_CP (Λ_c→ pK^+K^-) - A_CP (Λ_c→ pπ^+π^-) = 0.003± 0.011<cit.>. Nevertheless, the SM predictions for CPVs in hyperons and charm baryons are one or two orders of magnitude lower than current experimental sensitivities.
Bottom hadron decays involve a relatively large weak phase and hence allow CPV at the order of 10%, which has been confirmed in B meson decays. On the contrary, measurements of CPV in two-body Λ_b baryon decays gave<cit.>
A_CP(Λ_b→ pπ^-) =-0.025±0.029,
A_CP(Λ_b→ pK^-) =-0.025±0.022,
compatible with null asymmetries within a precision of 1%. That is, the CPV in Λ_b baryon decays is much lower than that in similar B meson decays, although both are induced by the b→ uu̅q transition, q=d,s.
The discrepancy remains a puzzle in heavy flavor physics.
It seems that the dynamics in baryon and meson processes differs significantly, but there is a lack of convincing explanations for this distinction. As a consequence, CPV in other baryon decay modes cannot be predicted accurately either.
A Λ_b baryon decay is a multi-scale process, and involves more diagrams owing to an additional spectator quark compared to a B meson decay.
This results in numerous W-exchange topological diagrams and abundant sources of the strong phases required for direct CPV.
A precise evaluation of the strong phases in these topological diagrams poses a challenge in theory.
Three popular theoretical approaches to studies of two-body hadronic B meson decays have been developed, known as the QCD
factorization (QCDF)<cit.>, the soft-collinear-effective theory (SCET)<cit.> and the perturbative QCD (PQCD) factorization<cit.>.
The QCDF and SCET are based on the collinear factorization theorem, in which B meson transition form factors develop an endpoint singularity if they were computed perturbatively.
The PQCD is based on the k_T factorization theorem, in which the endpoint contribution is absorbed into a transverse-momentum-dependent distribution amplitude (DA) or resummed into a Sudakov factor.
The factorizable and nonfactorizable emission, W-exchange and annihilation diagrams are calculable in this framework free of the endpoint singularities.
The CPV of two-body hadronic B meson decays has been successfully predicted in PQCD<cit.>.
Recently, the Λ_b→ p transition form factors with reasonable high-twist hadron DAs are reproduced in PQCD, and the results agree with those from lattice QCD and other nonperturbative methods<cit.>.
Various exclusive heavy baryon decays can thus be analyzed systematically.
We will extend the above well-established PQCD formalism to hadronic Λ_b decays.
Our full QCD calculation, including all the factorizable and nonfactorizable topological diagrams, demonstrates the presence of large partial-wave CPV, greater than 10%, in the Λ_b→ pπ^- decay. This amount is close to that in the corresponding B meson decay, but the cancellation between different partial waves results in a small net direct CPV.
The P-wave CPV in the penguin-dominant Λ_b→ pK^- decay can also exceed 10%. However, its CPV is governed by the S-wave, which is only at the percent level.
We further predict the CPVs in the Λ_b→ pρ^-, pK^∗ -, pa_1^-(1260), pK_1^-(1270) and pK_1^-(1400) decays, examining their partial-wave CPVs. Overall, the partial-wave CPV can reach 10%. Our investigation sheds light on the dynamical distinction between CPVs in bottom baryon and meson decays, and suggests a high possibility of detecting baryon CPV through partial-wave CPV measurements.
Λ_b decay in the PQCD.—
Unlike meson decays, the decay amplitude of a baryon with non-zero spin is decomposed into two different structures.
For the Λ_b→ p h decays, h=π^-, K^-, the amplitudes can be expressed as,
ℳ(Λ_b→ ph)=iu̅_p (f_1+f_2γ_5)u_Λ_b.
where u_p and u_Λ_b represent the proton and Λ_b baryon spinors, respectively.
The partial-wave amplitudes f_1 and f_2 correspond to the parity-violating S-wave and parity-conserving P-wave, associated with the terms 1 and γ_5, respectively.
The partial-wave amplitudes f_1,2 receive contributions from tree operators and penguin operators,
f_1 =|f_1^T|e^iϕ^Te^iδ_1^T + |f_1^P|e^iϕ^Pe^iδ_1^P,
f_2 =|f_2^T|e^iϕ^Te^iδ_2^T + |f_2^P|e^iϕ^Pe^iδ_2^P,
where the superscripts T,P denote the tree and penguin contributions,
the weak phase ϕ from the CKM matrix takes the same value for the S- and P-waves, and
the strong phase δ varies with different partial-wave amplitudes. The direct CPV in the Λ_b→ pπ^-,pK^- decays is then defined as
A_CP(Λ_b→ ph)≡ [Br(Λ_b→ ph) - Br(Λ̅_b→p̅h̅)] / [Br(Λ_b→ ph) + Br(Λ̅_b→p̅h̅)]
= -2[ A|f_1^T|^2 r_1 sinΔϕsinΔδ_1 + B|f_2^T|^2 r_2 sinΔϕsinΔδ_2 ] / [ A|f_1^T|^2 (1+r_1^2+2r_1 cosΔϕcosΔδ_1) + B|f_2^T|^2 (1+r_2^2+2r_2 cosΔϕcosΔδ_2) ].
Here r_1,2≡ |f_1,2^P|/|f_1,2^T| denote the ratios of penguin over tree contributions, A=((M_Λ_b+M_p)^2-M_h^2)/M_Λ_b^2, B=((M_Λ_b-M_p)^2-M_h^2)/M_Λ_b^2, Δϕ≡ϕ^P-ϕ^T, Δδ_1,2≡δ^P_1,2-δ^T_1,2.
A strong phase arises from the on-shellness of internal particles in Feynman diagrams, which differs between the parity-conserving and parity-violating contributions. This allows us to define the partial-wave CPV,
A_CP^S =-2r_1 sinΔϕsinΔδ_1/1+r_1^2+2r_1cosΔϕcosΔδ_1,
A_CP^P =-2r_2 sinΔϕsinΔδ_2/1+r_2^2+2r_2cosΔϕcosΔδ_2.
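The cancellation mechanism can be made concrete with a short numerical sketch of the formulas above. The masses below are PDG values, while Δϕ, r_1,2, the strong phases and the tree amplitudes are assumed numbers, chosen only to mimic an S–P strong-phase difference of roughly 180°; they are not the fitted results of our PQCD calculation.

```python
import numpy as np

# PDG masses (GeV); all other inputs are illustrative assumptions.
M_Lb, M_p, M_h = 5.6196, 0.9383, 0.1396        # Lambda_b, proton, pi-
A = ((M_Lb + M_p)**2 - M_h**2) / M_Lb**2
B = ((M_Lb - M_p)**2 - M_h**2) / M_Lb**2

dphi = np.deg2rad(70.0)                        # assumed weak-phase difference
r1, r2 = 0.35, 0.40                            # assumed penguin/tree ratios
dd1 = np.deg2rad(30.0)                         # assumed S-wave strong-phase diff.
dd2 = dd1 + np.pi                              # P-wave ~180 deg away
f1T = 1.0
f2T = np.sqrt(A * r1 / (B * r2)) * f1T         # weights tuned so the waves cancel

def acp_partial(r, dd):
    """Partial-wave CPV as in the equations above."""
    return -2*r*np.sin(dphi)*np.sin(dd) / (1 + r**2 + 2*r*np.cos(dphi)*np.cos(dd))

num = -2*(A*f1T**2*r1*np.sin(dphi)*np.sin(dd1)
          + B*f2T**2*r2*np.sin(dphi)*np.sin(dd2))
den = (A*f1T**2*(1 + r1**2 + 2*r1*np.cos(dphi)*np.cos(dd1))
       + B*f2T**2*(1 + r2**2 + 2*r2*np.cos(dphi)*np.cos(dd2)))

print(acp_partial(r1, dd1), acp_partial(r2, dd2))  # both >10% in size, opposite signs
print(num / den)                                    # net A_CP ~ 0
```

With these assumed inputs, the S- and P-wave asymmetries are roughly -25% and +41% while the total asymmetry vanishes, illustrating how large partial-wave CPVs can coexist with a tiny net direct CPV.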
In the PQCD framework, a decay amplitude is expressed as a convolution of hadron DAs, hard scattering amplitudes H, Sudakov factors and jet functions as described in Fig. <ref>, and
formulated as
ℳ(Λ_b→ ph)=∫_0^1[dx][dx'] dy ∫ [db][db'] db_q H([x],[x'],y,[b],[b'],b_q,μ) S_t([x],[x'],y) ϕ_Λ_b([x],[b],μ) ϕ_p([x'],[b'],μ) ϕ_h(y,b_q,μ) e^-S_Λ_b([x],[b]) e^-S_p([x'],[b']) e^-S_h(y,b_q).
The hadron DAs are taken from Refs. <cit.> for the Λ_b baryon, Refs. <cit.> for the proton and Refs. <cit.> for the pseudoscalar mesons.
Compared to meson decays, more types of topological diagrams contribute to the Λ_b→ pπ^-,pK^- decays. The exchange of two hard gluons is necessary for H at leading order in α_s to ensure that the two light spectator quarks in the Λ_b baryon end up in the energetic final state. A typical diagram responsible for the Λ_b→ pπ^- decay is displayed in Fig. <ref>. We evaluate the contributions from all diagrams to the Λ_b→ pπ^-,pK^- decays, and summarize the outcomes in Tables <ref> and <ref>, respectively. For clarity, we list only central values.
Discussion.—
Tables <ref> and <ref> manifest the hierarchy r_1 ≫ r_2 in the Λ_b → ph decays, in which the contributions from the factorizable penguin diagrams P_f^C_1 dominate.
The S- and P-wave amplitudes P_f^C_1 are expressed as
f_1(P_f^C_1)= -G_F/√(2) f_h V_tbV_td^∗ [ C_3/3+C_4+C_9/3+C_10 + R_1^h(C_5/3+C_6+C_7/3+C_8) ] [ F_1(m_h^2)(M_Λ_b-M_p)+F_3(m_h^2)m_h^2 ],
f_2(P_f^C_1)= -G_F/√(2) f_h V_tbV_td^∗ [ C_3/3+C_4+C_9/3+C_10 - R_2^h(C_5/3+C_6+C_7/3+C_8) ] [ G_1(m_h^2)(M_Λ_b+M_p)-G_3(m_h^2)m_h^2 ],
where the form factors F_1,2,3 and G_1,2,3 are defined in terms of ⟨ p|u̅γ_μ b |Λ_b⟩ = p̅(F_1γ_μ +F_2iσ_μνq^ν + F_3q_μ)Λ_b and ⟨ p|u̅γ_μγ_5 b |Λ_b⟩ = p̅(G_1γ_μ +G_2iσ_μνq^ν + G_3q_μ)γ_5Λ_b, and the chiral factors are given by R_1=2m_h^2/[(m_b-m_u)(m_u+m_q)] and R_2=2m_h^2/[(m_b+m_u)(m_u+m_q)], with R_1^π≈ R_2^π≈ 1.01 and R_1^K≈ R_2^K≈ 0.89.
Since the negative sign of R_2 in Eq. (<ref>) induces cancellations among different Wilson coefficients, the term f_2(P_f^C_1) and the ratio r_2 are suppressed.
The calculated branching fractions and CPVs of the Λ_b→ pπ^-,pK^- decays are presented in Table <ref>.
It is worth mentioning that the magnitudes of CPV are small, consistent with the experimental measurements.
Note that the partial-wave CPV of the Λ_b→ pπ^- decay can exceed 10%, similar to those in B meson decays.
However, the opposite signs of the partial-wave contributions lead to the small direct CPV in this mode.
The topology P^C^', which contains 40 Feynman diagrams, gives the most significant penguin contributions. Among these Feynman diagrams, that of Fig. <ref> is the largest; its strong phases exhibit an almost 180^∘ difference between the S- and P-waves, as indicated in Table <ref>.
For the Λ_b→ pK^- mode, the ratios
r_1=4.94 and r_2=0.33 imply that
the direct CPV is determined by the S-wave.
Unlike the Λ_b→ pπ^- decay, the Λ_b→ pK^- decay lacks the P^C^' topology, such that the total penguin contributions are dominated by the factorizable penguin diagrams. These diagrams generate a small strong phase difference for the S-wave, i.e., a small S-wave CPV A_CP^S(Λ_b→ pK^-)=-0.05, and consequently a small direct CPV.
As indicated in Table <ref>, the partial-wave CPV can be large in magnitude, with A_CP^S(Λ_b→ pπ^-)=0.17 and A_CP^P(Λ_b→ pK^-)=-0.23. These large partial-wave CPVs closely resemble those in the corresponding B meson decays.
The partial-wave CPVs of baryon decays are directly related to the asymmetry parameters α, β and γ <cit.>, which can be probed experimentally to search for baryon CPVs.
Table <ref> also provides our predictions for the decay asymmetry parameters and their associated CPVs for further measurements at LHCb.
The cancellation between partial-wave CPVs, which differentiates b-baryon from b-meson decays, is the main highlight of this Letter.
In order to explore the potential enhancements of partial-wave CPVs, we have also analyzed the decays Λ_b→ pρ^-,pK^∗ - with vector final states, and Λ_b→ pa_1^-(1260),pK_1^-(1270),pK_1^-(1400) with axial-vector final states in the PQCD approach.
These modes involve four independent partial-wave amplitudes or helicity amplitudes.
They share the same topological diagrams as the Λ_b→ pπ^-,pK^- decays, but with different meson DAs.
The predictions for the CPVs in the above decays are shown in Table <ref>.
It is found that the CPVs of Λ_b→ pρ^-,pa_1^-(1260) are small, while the others are relatively large.
These modes are actually three-body or four-body decays Λ_b→ p π^- π^0, p K_S^0 π^- or p K^- π^0, p π^+ π^- π^-, and p K^- π^+ π^-, all of which have large data samples at LHCb; the three-body decays have about 4000 events, and the four-body decays have about 20000 and 90000 events, respectively.
Furthermore, multi-body decays through two or more intermediate resonances may produce substantial interference effects, resulting in notable regional CPVs. Hence, there is a strong chance of observing CPVs higher than 20% in these modes at LHCb. The rich data samples and complicated dynamics of multi-body decays offer promising opportunities to establish CPVs in bottom baryon decays.
Conclusions.—
This Letter presented the first full QCD dynamical analysis of two-body hadronic Λ_b baryon decays in the PQCD approach. Our study elucidates the reason for the observed small CPVs in the Λ_b→ pπ^-,pK^- decays, in contrast to the sizable CPVs in the similar B meson decays. The partial-wave CPVs in the Λ_b→ pπ^- decay can potentially reach 10%, but the cancellation between them leads to the small net CPV. The direct CPV of the Λ_b→ pK^- mode is primarily attributed to the modest S-wave CPV.
We have also extended our analysis by investigating the CPVs in the channels with vector and axial-vector final states.
Our predictions suggest that certain partial-wave CPVs in bottom baryon decays can be large enough to be probed experimentally in searches for baryon CPVs. This work opens up avenues for a deeper understanding of the dynamics involved in baryon decays and for unveiling CPV in these processes.
Acknowledgement.—The authors would like to express their gratitude to Pei-Rong Li for generously providing access to computing resources. Special thanks are extended to Ding-Yu Shao, Yan-Qing Ma, Jian Wang and Jun Hua for their valuable comments. This work was supported in part by the Natural Science Foundation of China under
grant No. 12335003, and by the Fundamental Research Funds for the Central Universities under No. lzujbky-2024-oy02.
Christenson:1964fg
J. H. Christenson, J. W. Cronin, V. L. Fitch and R. Turlay,
Phys. Rev. Lett. 13 (1964), 138-140
BaBar:2001ags
B. Aubert et al. [BaBar],
Phys. Rev. Lett. 86 (2001), 2515-2522
[arXiv:hep-ex/0102030 [hep-ex]].
Belle:2001zzw
K. Abe et al. [Belle],
Phys. Rev. Lett. 87 (2001), 091802
[arXiv:hep-ex/0107061 [hep-ex]].
LHCb:2019hro
R. Aaij et al. [LHCb],
Phys. Rev. Lett. 122 (2019) no.21, 211803
[arXiv:1903.08726 [hep-ex]].
BESIII:2021ypr
M. Ablikim et al. [BESIII],
Nature 606, no.7912, 64-69 (2022)
[arXiv:2105.11155 [hep-ex]].
BESIII:2018cnd
M. Ablikim et al. [BESIII],
Nature Phys. 15, 631-634 (2019)
[arXiv:1808.08917 [hep-ex]].
LHCb:2017hwf
R. Aaij et al. [LHCb],
JHEP 03, 182 (2018)
[arXiv:1712.07051 [hep-ex]].
ParticleDataGroup:2022pth
R. L. Workman et al. [Particle Data Group],
PTEP 2022, 083C01 (2022)
Beneke:1999br
M. Beneke, G. Buchalla, M. Neubert and C. T. Sachrajda,
Phys. Rev. Lett. 83 (1999), 1914-1917
[arXiv:hep-ph/9905312 [hep-ph]].
Beneke:2000ry
M. Beneke, G. Buchalla, M. Neubert and C. T. Sachrajda,
Nucl. Phys. B 591 (2000), 313-418
[arXiv:hep-ph/0006124 [hep-ph]].
Bauer:2000yr
C. W. Bauer, S. Fleming, D. Pirjol and I. W. Stewart,
Phys. Rev. D 63 (2001), 114020
[arXiv:hep-ph/0011336 [hep-ph]].
Bauer:2001yt
C. W. Bauer, D. Pirjol and I. W. Stewart,
Phys. Rev. D 65 (2002), 054022
[arXiv:hep-ph/0109045 [hep-ph]].
Bauer:2002nz
C. W. Bauer, S. Fleming, D. Pirjol, I. Z. Rothstein and I. W. Stewart,
Phys. Rev. D 66 (2002), 014017
[arXiv:hep-ph/0202088 [hep-ph]].
Keum:2000wi
Y. Y. Keum, H. N. Li and A. I. Sanda,
Phys. Rev. D 63 (2001), 054008
[arXiv:hep-ph/0004173 [hep-ph]].
Lu:2000em
C. D. Lu, K. Ukai and M. Z. Yang,
Phys. Rev. D 63 (2001), 074009
[arXiv:hep-ph/0004213 [hep-ph]].
Keum:2000ph
Y. Y. Keum, H. n. Li and A. I. Sanda,
Phys. Lett. B 504 (2001), 6-14
[arXiv:hep-ph/0004004 [hep-ph]].
Han:2022srw
J. J. Han, Y. Li, H. n. Li, Y. L. Shen, Z. J. Xiao and F. S. Yu,
Eur. Phys. J. C 82, no.8, 686 (2022)
[arXiv:2202.04804 [hep-ph]].
Ball:2008fw
P. Ball, V. M. Braun and E. Gardi,
Phys. Lett. B 665, 197-204 (2008)
[arXiv:0804.2424 [hep-ph]].
Bell:2013tfa
G. Bell, T. Feldmann, Y. M. Wang and M. W. Y. Yip,
JHEP 11, 191 (2013)
[arXiv:1308.6114 [hep-ph]].
Braun:2000kw
V. Braun, R. J. Fries, N. Mahnke and E. Stein,
Nucl. Phys. B 589, 381-409 (2000)
[erratum: Nucl. Phys. B 607, 433-433 (2001)]
[arXiv:hep-ph/0007279 [hep-ph]].
Braun:2006hz
V. M. Braun, A. Lenz and M. Wittmann,
Phys. Rev. D 73, 094019 (2006)
[arXiv:hep-ph/0604050 [hep-ph]].
Ball:2004ye
P. Ball and R. Zwicky,
Phys. Rev. D 71, 014015 (2005)
[arXiv:hep-ph/0406232 [hep-ph]].
Ball:2006wn
P. Ball, V. M. Braun and A. Lenz,
JHEP 05, 004 (2006)
[arXiv:hep-ph/0603063 [hep-ph]].
Lee:1957qs
T. D. Lee and C. N. Yang,
Phys. Rev. 108, 1645-1647 (1957)
doi:10.1103/PhysRev.108.1645
http://arxiv.org/abs/2409.03423v1 | 20240905110950 | Admissibility Conditions for Multi-window Gabor Frames on Discrete Periodic Sets | Najib Khachiaa, Mohamed Rossafi | math.FA | math.FA, 42C15, 42C40
Admissibility Conditions for Multi-window Gabor Frames on Discrete Periodic Sets
Najib Khachia^1 ([email protected]) and Mohamed Rossafi^2 ([email protected])
These authors contributed equally to this work.
^1 Laboratory Partial Differential Equations, Spectral Algebra and Geometry, Department of Mathematics, Faculty of Sciences, University Ibn Tofail, Kenitra, Morocco
^2 Laboratory Partial Differential Equations, Spectral Algebra and Geometry, Higher School of Education and Training, University Ibn Tofail, Kenitra, Morocco
In this paper, 𝒢(g,L,M,N) denotes an L-window Gabor system on a periodic set 𝕊, where L,M,N∈ℕ and g={g_l}_l∈ℕ_L⊂ℓ^2(𝕊). We characterize which g generates a complete multi-window Gabor system and a multi-window Gabor frame 𝒢(g,L,M,N) on 𝕊 using the Zak transform. Admissibility conditions for a periodic set to admit a complete multi-window Gabor system, multi-window Gabor (Parseval) frame, and multi-window Gabor (orthonormal) basis 𝒢(g,L,M,N) are given with respect to the parameters L, M and N.
MSC Classification: 42C15; 42C40.
September 9, 2024
=====================
§ INTRODUCTION AND PRELIMINARIES
When a signal appears periodically but intermittently, it can be considered within the entire space ℓ^2(ℤ) and analyzed in the standard manner. However, if the signal is only emitted for short periods, this method might not be the best approach. To perform Gabor analysis of the signal most efficiently while preserving all its features, Li and Lian studied single-window Gabor systems on discrete periodic sets. They derived density results and frame characterizations. Compared to single-window Gabor systems, multi-window Gabor systems can be both interesting and beneficial, as they allow for more flexibility by using windows of different types and widths. For certain parameters N and M, there does not exist an associated Gabor frame with a single window; however, allowing the use of multiple windows guarantees the existence of Gabor frames. For example, for 𝕊=ℤ, a Gabor frame with one window exists only if N≤ M. N. Khachiaa, M. Rossafi, and S. Kabbaj showed in <cit.> that when this is not the case, allowing the use of multiple windows ensures the existence of Gabor frames associated with L windows, where L is an integer satisfying N≤ LM (such an L always exists).
A sequence {f_i}_i∈ℐ, where ℐ is a countable set, in a separable Hilbert space H is said to be a frame if there exist 0< A≤ B<∞ (called frame bounds) such that for all f∈ H,
Af^2≤∑_i∈ℐ|⟨ f,f_i ⟩|^2≤ Bf^2.
If only the upper inequality holds, {f_i}_i∈ℐ is called a Bessel sequence with Bessel bound B. If A=B, the sequence is called a tight frame and if A=B=1, it is called a Parseval frame for H. For more details on frame theory, the reader can refer to <cit.>.
Denote by ℕ the set of positive integers, i.e. ℕ:={1,2,3,...}, and for a given K∈ℕ, write ℕ_K:={0,1,...,K-1}. Let N,M,L∈ℕ and p,q∈ℕ such that gcd(p,q)=1 and N/M=p/q. A nonempty subset 𝕊 of ℤ is said to be an Nℤ-periodic set if for all j∈𝕊 and for all n∈ℤ, j+nN ∈𝕊. For K∈ℕ, write 𝕊_K:=𝕊∩ℕ_K. We denote by ℓ^2(𝕊) the closed subspace of ℓ^2(ℤ) defined by,
ℓ^2(𝕊):={f∈ℓ^2(ℤ): f(j)=0 if j∉𝕊}.
Define the modulation operator E_m/M with m∈ℤ and the translation operator T_nN with n∈ℤ for f∈ℓ^2(𝕊) by:
E_m/Mf(.):=e^2π i m/M. f(.), T_nNf(.):=f(.-nN).
The modulation and translation operators are unitary operators of ℓ^2(𝕊).
For g:={g_l}_l∈ℕ_L⊂ℓ^2(𝕊), the associated multiwindow discrete Gabor system (M-D-G) is given by,
𝒢(g,L,M,N):={E_m/MT_nNg_l}_m∈ℕ_M,n∈ℤ, l∈ℕ_L.
For j∈ℤ, we denote 𝒦_j={k∈ℕ_p: j+kM∈𝕊} and 𝒦(j):=diag(χ_𝒦_j(0),χ_𝒦_j(1),...,χ_𝒦_j(p-1)).
Let K∈ℕ. The discrete Zak transform z_K of f∈ℓ^2(ℤ) for j∈ℤ and a.e. θ∈ℝ is defined by,
z_Kf(j,θ):=∑_k∈ℤf(j+kK)e^2π i k θ.
z_Kf is quasi-periodic, i.e., for all j, k, l∈ℤ and θ∈ℝ, we have:
z_Kf(j+kK,θ+l)=e^-2π ikθz_Kf(j,θ).
Then z_Kf is completely determined by its values for j∈ℕ_K and θ∈ [0,1[. The reader can refer to <cit.> for more details on the discrete Zak transform.
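The quasi-periodicity relation is easy to check numerically for a finitely supported sequence. The sketch below (with an arbitrary example sequence of our own choosing) is only a sanity check of the relation above; for finitely supported f, the truncated sum computes z_K f exactly.

```python
import numpy as np

def zak(f, K, j, theta, kmax=50):
    """Discrete Zak transform z_K f(j, theta) = sum_k f(j + kK) e^{2 pi i k theta}.

    f is a finitely supported sequence given as a dict {n: f(n)}; the truncated
    sum over |k| <= kmax is then exact."""
    return sum(f.get(j + k*K, 0.0) * np.exp(2j*np.pi*k*theta)
               for k in range(-kmax, kmax + 1))

f = {0: 1.0, 1: -0.5, 3: 0.25, 7: 0.1}   # arbitrary finitely supported sequence
K, j, th, l = 3, 1, 0.37, 2

# quasi-periodicity: z_K f(j + kK, theta + l) = e^{-2 pi i k theta} z_K f(j, theta)
lhs = zak(f, K, j + 2*K, th + l)
rhs = np.exp(-2j*np.pi*2*th) * zak(f, K, j, th)
print(np.allclose(lhs, rhs))             # True
```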
This paper is organized as follows. In section 2, we will present some auxiliary lemmas to be used in the following sections. In section 3, we characterize which g∈ℓ^2(𝕊) generates a complete multi-window discrete Gabor system and a multi-window discrete Gabor frame 𝒢(g,L,M,N) for ℓ^2(𝕊) using the Zak transform. In section 4, we provide an admissibility characterization for complete multi-window discrete Gabor systems and multi-window discrete Gabor frames 𝒢(g,L,M,N)
on a discrete periodic set 𝕊, and we finish with an example.
§ AUXILIARY LEMMAS
In this section, we present several lemmas and introduce the notations that will be utilized in the following sections. In addition to the notations introduced in the introduction, let ℳ_s,t denote the set of all s× t matrices with entries in ℂ. We write p∧ q=1 to indicate that p and q are coprime. For a given matrix A, A^* is the conjugate transpose of A, N(A) represents its kernel, and A_s,t refers to its (s,t)-component. When A is a column matrix, we denote its r-component simply by A_r. Following this, we provide several definitions and results that will be useful throughout the rest of the paper.
<cit.>
Let K∈ℕ, and 𝕊 be a Kℤ-periodic set in ℤ. Write 𝕊_K=𝕊∩ℕ_K. Then the restriction of z_K to ℓ^2(𝕊) is a unitary linear operator from ℓ^2(𝕊) to the Hilbert space ℓ^2(Q) where Q=𝕊_K× [0,1[ and
ℓ^2(Q):={ψ:Q →ℂ: ∑_j∈𝕊_K∫_0^1 |ψ(j,θ)|^2 dθ< ∞}.
Let A,B⊂ℤ and K∈ℕ. We say that A is Kℤ-congruent to B if there exists a partition {A_k}_k∈ℤ of A such that {A_k+kK}_k∈ℤ is a partition of B.
<cit.>
Let N,M∈ℕ and p,q∈ℕ such that p∧ q=1 and N/M=p/q. Then
the set
Δ:={j+kM-rN: j∈ℕ_M/q, k∈ℕ_p, r∈ℕ_q} is qN-congruent to ℕ_pM.
For each f∈ℓ^2(ℤ), we associate a matrix-valued function Z_f:ℤ×ℝ→ℳ_q,p whose entry at the r-th row and the k-th column is defined by
Z_f(j,θ)_r,k=z_pMf(j+kM-rN,θ).
<cit.>
Let N,M∈ℕ and p,q∈ℕ such that p∧ q=1 and N/M=p/q. Then
z_pMf is completely determined by the matrices Z_f(j,θ) for j∈ℕ_M/q and θ∈ [0,1[.
Conversely, a matrix-valued function Z:ℕ_M/q× [0,1[→ℳ_q,p such that for all j∈ℕ_M/q, Z(j,.)_r,k∈ L^2([0,1[) also determines a unique f∈ℓ^2(ℤ) such that for all j∈ℕ_M/q, θ∈ [0,1[, Z_f(j,θ)=Z(j,θ).
For g:={g_l}_l∈ℕ_L⊂ℓ^2(ℤ), we associate the matrix-valued function Z_g:ℤ×ℝ→ℳ_qL,p defined for all j∈ℤ, θ∈ℝ by the block matrix:
Z_g(j,θ)=[ Z_g_0(j,θ); Z_g_1(j,θ); ⋮; Z_g_L-1(j,θ) ].
<cit.>
Let p,q∈ℕ such that p∧ q=1. Then for all j∈ℤ, there exists a unique (k_0,l_0)∈ℕ_p×ℤ and a unique (k_0,m_0,r_0)∈ℕ_p×ℤ×ℕ_q such that j= k_0q+l_0p =k_0q+(m_0q+r_0)p.
<cit.>
Let M, N ∈ℕ and p, q ∈ℕ such that N/M=p/q and p∧ q = 1. Then, for
all m ∈ℤ, there exists a unique (j, r, k, ℓ) ∈ℕ_M/q×ℕ_q ×ℕ_p ×ℤ such that m =
j+kM -rN +ℓ qN.
The following proposition characterizes which multi-window Gabor frames are multi-window Gabor Riesz bases in terms of the parameters L, M and N.
<cit.>
Let g:={g_l}_l∈ℕ_L⊂ℓ^2(𝕊).
* 𝒢(g,L,M,N) is a frame for ℓ^2(𝕊) only when card(𝕊_N)/M≤ L.
* Assume that 𝒢(g,L,M,N) is a frame for ℓ^2(𝕊). Then following statements are equivalent:
* 𝒢(g,L,M,N) is a Riesz basis (exact frame) for ℓ^2(𝕊).
* card(𝕊_N)/M=L.
<cit.>
Let M ∈ℕ and E ⊂ℤ. Then the following conditions are equivalent:
* { e^2π im/M·χ_E(·) : m ∈ℕ_M } is a tight frame for ℓ^2(E) with frame bound M.
* { e^2π im/M·χ_E(·) : m ∈ℕ_M } is complete in ℓ^2(E).
* E is Mℤ-congruent to a subset of ℕ_M.
* ∑_k ∈ℤχ_E(· + kM) ≤ 1 on ℤ.
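Condition (4) of the lemma is straightforward to test in practice: it says that no residue class modulo M contains more than one element of E. A minimal sketch for a finite set E (an infinite set would be handled class-by-class in the same way; the example sets are our own):

```python
def congruent_to_subset_of_N_M(E, M):
    """Criterion (4) of the lemma: sum_k chi_E(. + kM) <= 1 on Z, i.e. the
    elements of E occupy pairwise distinct residue classes mod M."""
    residues = [e % M for e in E]
    return len(residues) == len(set(residues))

print(congruent_to_subset_of_N_M({0, 1, 6}, 4))   # True : residues 0, 1, 2
print(congruent_to_subset_of_N_M({0, 1, 5}, 4))   # False: 1 and 5 collide mod 4
```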
<cit.>
Let {f_i}_i∈ℐ, where ℐ is a countable sequence, be a Parseval frame for a separable Hilbert space H. Then the following statements are equivalent:
* {f_i}_i∈ℐ is a Riesz basis.
* {f_i}_i∈ℐ is an orthonormal basis.
* For all i∈ℐ, f_i has norm 1.
Let f∈ℓ^2(ℤ). If f∈ℓ^2(𝕊), then for all j∈ℤ, a.e θ∈ℝ,
Z_f(j,θ)𝒦(j)=Z_f(j,θ).
Let s∈ℕ_q and t∈ℕ_p. We have:
(Z_f(j,θ)𝒦(j))_s,t = ∑_k=0^p-1Z_f(j,θ)_s,k𝒦(j)_k,t = ∑_k=0^p-1Z_f(j,θ)_s,kδ_k,tχ_𝒦_j(t) = Z_f(j,θ)_s,tχ_𝒦_j(t), which equals Z_f(j,θ)_s,t if t∈𝒦_j and 0 otherwise.
On the other hand, we have Z_f(j,θ)_s,t=z_pMf(j+tM-sN,θ)=∑_k∈ℤf(j+tM-sN+kpM)e^2π i kθ)=∑_k∈ℤf(j+tM-sN+kqN)e^2π i kθ since pM=qN. Then, if t∉𝒦_j, j+tM∉𝕊, then, for all k∈ℤ, j+tM-sN+kqN∉𝕊 by the Nℤ-periodicity of 𝕊, thus f(j+tM-sN+kqN)=0 for all k∈ℤ. Hence Z_f(j,θ)_s,t=0 if t∉𝒦_j. The proof is completed.
For all j∈ℤ, 𝒦(j) is an orthogonal projection on ℂ^p. i.e.
* 𝒦(j)^2=𝒦(j).
* 𝒦(j)^*=𝒦(j).
Let j∈ℤ. We have:
[ 𝒦(j)^2 = diag(χ_𝒦_j(0)^2,χ_𝒦_j(1)^2,…,χ_𝒦_j(p-1)^2); = diag(χ_𝒦_j(0),χ_𝒦_j(1),…, χ_𝒦_j(p-1))=𝒦(j). ]
And
[ 𝒦(j)^* = diag(χ_𝒦_j(0),χ_𝒦_j(1),…,χ_𝒦_j(p-1)); = diag(χ_𝒦_j(0),χ_𝒦_j(1),…, χ_𝒦_j(p-1))=𝒦(j). ]
§ CHARACTERIZATIONS OF COMPLETE MULTIWINDOW DISCRETE GABOR SYSTEMS AND MULTIWINDOW DISCRETE GABOR FRAMES
In this section we use all the notations already introduced, without introducing them again. Let L,M,N∈ℕ and p,q∈ℕ such that p∧ q=1 and N/M=p/q, and denote 𝕊_N=𝕊∩ℕ_N. We characterize which g:={g_l}_l∈ℕ_L⊂ℓ^2(𝕊) generates a complete Gabor system and a Gabor frame 𝒢(g,L,M,N).
We first present the following proposition:
Let g:={g_l}_l∈ℕ_L⊂ℓ^2(𝕊). Let M,N∈ℕ and p,q∈ℕ such that p∧ q=1. Then the integer-valued function (j,θ)→ rank(Z_g(j,θ)) is M/q-periodic with respect to j and 1-periodic with respect to θ. Moreover, for all j∈ℤ and a.e θ∈ℝ, we have:
rank(Z_g(j,θ))≤ card(𝒦_j).
For the proof, we need the following lemma:
For all j∈ℤ, a.e θ∈ℝ, k'∈ℕ_p and r'∈ℕ_q, we have:
rank(Z_g(j,θ))=rank(Z_g(j+k'M+r'N,θ)).
Let j∈ℤ, θ∈ℝ. Denote by C_k(j,θ) the k-th column of Z_g(j,θ).
We have, for all l∈ℕ_L and for all r∈ℕ_q, z_pMg_l((j+k'M)+kM-rN,θ)=z_pMg_l(j+(k+k')M-rN,θ).
If 0≤ k≤ p-k'-1, then C_k(j+k'M,θ)=C_k+k'(j,θ). Otherwise (p-k'≤ k≤ p-1), we have by quasi-periodicity of the Zak transform z_pMg_l((j+k'M)+kM-rN,θ)=e^-2π i θ.z_pMg_l(j+(k+k'-p)M-rN,θ). Then C_k(j+k'M,θ)=e^-2π iθ.C_k+k'-p(j,θ).
Consider the map:
ϕ: ℕ_p→ℕ_p, ϕ(k)=k+k' if 0≤ k≤ p-k'-1, and ϕ(k)=k+k'-p if p-k'≤ k≤ p-1.
We show, easily, that ϕ is injective. It is, then, bijective. Hence:
[ rank(Z_g(j,θ)) = rank{C_k(j,θ): k∈ℕ_p}; = rank{C_k(j+k'M,θ): k∈ℕ_p}; = rank(Z_g(j+k'M,θ)). ]
Denote by R_r(j,θ) the r-th row of Z_g(j,θ). Then there exists a unique (l,r_0)∈ℕ_L×ℕ_q such that r=lq+r_0. Then (∀ j_0∈ℤ) R_r(j_0,θ) is the r_0-th row of Z_g_l(j_0,θ).
We have z_pMg_l((j+r'N)+kM-r_0N,θ)=z_pMg_l(j+kM-(r_0-r')N,θ).
If r'≤ r_0≤ q-1, then R_r(j+r'N,θ) is the (r_0-r')-th row of Z_g_l(j,θ), thus R_r(j+r'N,θ)=R_r-r'(j,θ). Otherwise (0≤ r_0≤ r'-1), since pM=qN and by quasi-periodicity of the Zak transform, we have: z_pMg_l((j+r'N)+kM-r_0N,θ)=z_pMg_l(j+kM-(r_0-r'+q)N,θ). Then R_r(j+r'N,θ) is the (r_0-r'+q)-th row of Z_g_l(j,θ), thus R_r(j+r'N,θ)=R_r-r'+q(j,θ).
Consider the map:
ψ: ℕ_qL→ℕ_qL, ψ(r)=r-r' if lq+r'≤ r≤ (l+1)q-1, and ψ(r)=r-r'+q if lq≤ r≤ lq+r'-1.
It is easy to show that ψ is injective. Then it is bijective. Hence:
[ rank(Z_g(j,θ)) = rank{R_r(j,θ): r∈ℕ_qL}; = rank{R_r(j+r'N,θ): r∈ℕ_qL}; = rank(Z_g(j+r'N,θ)). ]
Hence For all j∈ℤ, θ∈ℝ, k'∈ℕ_p and r'∈ℕ_q, we have:
rank(Z_g(j,θ))=rank(Z_g(j+k'M+r'N,θ)).
Let j∈ℤ, θ∈ℝ. Given an arbitrary s∈ℤ. By lemma <ref>, there exists a unique (k_0,m_0,r_0)∈ℕ_p×ℤ×ℕ_q such that s=k_0q+(m_0q+r_0)p. Then:
[ Z_g(j+M/qs,θ) = Z_g(j+k_0M+m_0pM+r_0N,θ); = e^2π i m_0θ.Z_g(j+k_0M+r_0N,θ); = e^2π i m_0θ.Z_g(j,θ) lemma <ref>. ]
Hence:
rank(Z_g(j+M/qs,θ))=rank(Z_g(j,θ)).
The 1-periodicity with respect to θ is simply due to the 1-periodicity of the Zak transform with respect to θ.
On the other hand, if k∉𝒦_j, i.e. k is such that j+kM∉𝕊, then, by Nℤ-periodicity of 𝕊, for all r∈ℤ, j+kM-rN∉𝕊, and thus for all l∈ℕ_L and for all r∈ℕ_q, z_pMg_l(j+kM-rN,θ)=0 since pM=qN. Then the k-th column of Z_g(j,θ) is identically zero. Hence rank(Z_g(j,θ))≤ card(𝒦_j).
Let j∈ℤ. Since {𝕊_N+nN}_n∈ℤ is a partition of 𝕊, we have:
card(𝒦_j) = ∑_k=0^p-1χ_𝕊(j+kM) = ∑_k=0^p-1∑_n∈ℤχ_𝕊_N(j+nN+kM) = ∑_k=0^p-1∑_n∈ℤχ_𝕊_N(j+(np+kq)M/q) (since N=pM/q) = ∑_n∈ℤχ_𝕊_N(j+(M/q)n) (by lemma <ref>).
Hence card(𝒦_j) is M/q-periodic. Then the inequality (1) holds for all j∈ℤ and a.e θ∈ℝ if and only if it holds for all j∈ℕ_M/q and a.e θ∈ [0,1[.
The following lemma is very useful for the rest.
Let g:={g_l}_l∈ℕ_L⊂ℓ^2(ℤ). Let f∈ℓ^2(ℤ). Then the following statements are equivalent:
* f is orthogonal to 𝒢(g,L,M,N).
* For all j∈ℕ_M, a.e. θ∈ [0,1[, Z_g(j,θ)F(j,θ)=0.
where F(j,θ):=(z_pMf(j+kM,θ))_k∈ℕ_p^t for j∈ℕ_M and a.e. θ∈ [0,1[.
We have:
f is orthogonal to 𝒢(g,L,M,N) ⟺ f is orthogonal to 𝒢(g_l,M,N) for all l∈ℕ_L.
And we have:
Z_g(j,θ)F(j,θ)=0 ⟺ Z_g_l(j,θ)F(j,θ)=0 for all l∈ℕ_L.
These equivalences together with lemma 3.1 in <cit.> complete the proof.
The following proposition characterizes complete multi-window Gabor systems on 𝕊.
Let g:={g_l}_l∈ℕ_L⊂ℓ^2(𝕊). Then the following statements are equivalent:
* 𝒢(g,L,M,N) is complete in ℓ^2(𝕊).
* For all j∈ℕ_M/q, a.e θ∈ [0,1[, rank(Z_g(j,θ))=card(𝒦_j).
* For all j∈ℤ, a.e θ∈ℝ, rank(Z_g(j,θ))=card(𝒦_j).
By proposition <ref>, rank(Z_g(j,θ)) is M/q-periodic with respect to j and 1-periodic with respect to θ. And by remark <ref>, card(𝒦_j) is M/q-periodic. Hence (2)⟺ (3).
We will use the notations in lemma <ref>. It is obvious that for f∈ℓ^2(ℤ), F(j,θ)=0 for all j∈ℕ_M and a.e θ∈ [0,1[ if and only if f=0. Then by lemma <ref>, (1) is equivalent to the fact that: for f∈ℓ^2(𝕊),
(∀ j∈ℕ_M, a.e θ∈ [0,1[, Z_g(j,θ)F(j,θ)=0) ⟹ (∀ j∈ℕ_M, a.e θ∈ [0,1[, F(j,θ)=0).
(1)⟹(3) Assume that 𝒢(g,L,M,N) is complete in ℓ^2(𝕊) and suppose, by contradiction, that (3) fails. Then by proposition <ref>, there exist j_0∈ℕ_M and E_0⊂ [0,1[ with positive measure such that for all θ∈ E_0:
rank(Z_g(j_0,θ))< card(𝒦_j_0).
For a.e θ∈ [0,1[, denote ℙ(j_0,θ):ℂ^p→ℂ^p the orthogonal projection onto the kernel of Z_g(j_0,θ). Let {e_k}_k∈ℕ_p be the standard orthonormal basis of ℂ^p.
Suppose that F=span{e_k: k∈𝒦_j}⊂ N(ℙ(j_0,θ)) for a.e θ∈ [0,1[. Then F⊕ N(Z_g(j_0,θ)) is an orthogonal sum. Thus:
[ p ⩾ dim(F⊕ N(Z_g(j_0,θ)) ); = dim F+dim (N(Z_g(j_0,θ)) ); = card(𝒦_j)+(p-rank(Z_g(j_0,θ)) ). ]
Hence rank(Z_g(j_0,θ) )⩾ card(𝒦_j). Contradiction. Then there exist k_0∈𝒦_j_0 and E_0'⊂ [0,1[ with positive measure such that e_k_0∉ N(ℙ(j_0,θ) ) for a.e θ∈ E_0'. i.e. ℙ(j_0,θ)e_k_0≠ 0 for a.e θ∈ E_0'. Define for all j∈ℕ_M, a.e θ∈ [0,1[, F(j,θ)=δ_j,j_0.ℙ(j_0,θ)e_k_0. Observe that if k∈ℕ_p-𝒦_j_0, then e_k∈ N(Z_g(j_0,θ) ) for a.e θ∈ [0,1[. Then for all k∈ℕ_p-𝒦_j_0, a.e θ∈ [0,1[, ℙ(j_0,θ)e_k=e_k. Thus for k∈ℕ_p-𝒦_j, F(j_0,θ)_k=⟨ F(j_0,θ),e_k⟩=⟨ℙ(j_0,θ)e_k_0,e_k⟩=⟨ e_k_0,ℙ(j_0,θ) e_k⟩=⟨ e_k_0,e_k⟩=0.
Define f∈ℓ^2(ℤ) by z_pMf(j+kM,θ)=F(j,θ)_k for all j∈ℕ_M, a.e θ∈ [0,1[. Then by lemma <ref>, f∈ℓ^2(𝕊). Since F(j_0,θ)=ℙ(j_0,θ)e_k_0≠ 0 for all θ∈ E_0' which is with positive measure, then f≠ 0.
On the other hand, we have Z_g(j,θ)F(j,θ)=0 for all j∈ℕ_M and a.e θ∈ [0,1[. In fact, if j≠ j_0, then, by definition of F, for a.e θ∈ [0,1[, F(j,θ)=0, hence Z_g(j,θ)F(j,θ)=0. Otherwise, F(j_0,θ)=ℙ(j_0,θ)e_k_0, then Z_g(j_0,θ)F(j_0,θ)=Z_g(j_0,θ)ℙ(j_0,θ)e_k_0=0 since ℙ(j_0,θ)e_k_0∈ N(Z_g(j_0,θ) ). Then, by lemma <ref>, f is orthogonal to 𝒢(g,L,M,N) but f≠ 0. Contradiction with (1).
(2)⟹ (1) Assume (3) and let f∈ℓ^2(𝕊) such that for all j∈ℕ_M and a.e θ∈ [0,1[:
Z_g(j,θ)F(j,θ)=0.
Let's prove that F(j,θ)=0 for all j∈ℕ_M, a.e θ∈ [0,1[. Let j∈ℕ_M. If 𝒦_j=∅, then by the definition of z_pMf and since pM=qN, F(j,θ)=0 for a.e θ∈ [0,1[. Otherwise, i.e. 𝒦_j≠∅. Let k∈ℕ_p-𝒦_j, then, by the definition of the Zak transform and since pM=qN, the k-th column of Z_g(j,θ) is identically zero and we also have z_pMf(j+kM,θ)=0 for a.e θ∈ [0,1[. From (3), the submatrix of Z_g(j,θ) of size qL× card(𝒦_j) obtained by removing all the columns with indices not in 𝒦_j has the same rank than Z_g(j,θ) which is card(𝒦_j). Then this submatrix is injective, thus, by equality (2), z_pMf(j+kM,θ)=0 for a.e θ∈ [0,1[. Then for all k∈ℕ_p, z_pMf(j+kM,θ)=0 for a.e θ∈ [0,1[. Hence F(j,θ)=0 for a.e θ∈ [0,1[. Hence 𝒢(g,L,M,N) is complete in ℓ^2(𝕊).
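The rank criterion of the proposition can be tested numerically. The sketch below builds Z_g(j,θ) from randomly generated windows on an assumed example periodic set (M=4, N=6, so p=3, q=2; the set 𝕊_N, the number of windows L and the random seed are our own illustrative choices) and compares rank(Z_g(j,θ)) with card(𝒦_j); for generic windows the two coincide, in line with the proposition.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, p, q, L = 4, 6, 3, 2, 2          # N/M = p/q = 3/2, gcd(p, q) = 1
S_N = {0, 1, 2, 4}                     # assumed example: S = S_N + NZ

def in_S(j):
    return j % N in S_N

def zak_pM(g, j, theta, kmax=8):
    # z_{pM} g(j, theta) for a finitely supported window g (dict {n: g(n)})
    return sum(g.get(j + k*p*M, 0.0) * np.exp(2j*np.pi*k*theta)
               for k in range(-kmax, kmax + 1))

def Z_block(g, j, theta):
    # the q x p block Z_{g_l}(j, theta)_{r,k} = z_{pM} g(j + kM - rN, theta)
    return np.array([[zak_pM(g, j + k*M - r*N, theta) for k in range(p)]
                     for r in range(q)])

# random windows in l^2(S), supported on one qN-period for simplicity
windows = [{n: rng.standard_normal() for n in range(q*N) if in_S(n)}
           for _ in range(L)]

theta = 0.23
for j in range(M // q):                # j in N_{M/q}
    Z = np.vstack([Z_block(g, j, theta) for g in windows])   # qL x p matrix
    card_Kj = sum(in_S(j + k*M) for k in range(p))
    print(j, np.linalg.matrix_rank(Z), card_Kj)              # ranks match card(K_j)
```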
Now we characterize multi-window Gabor frames for ℓ^2(𝕊) using the Zak transform.
Given g:={g_l}_l∈ℕ_L⊂ℓ^2(𝕊). Then the following statements are equivalent:
* 𝒢(g,L,M,N) is a frame for ℓ^2(𝕊) with frame bounds 0 < A ≤ B.
* A/M.𝒦(j) ≤∑_l∈ℕ_L Z_g_l^*(j, θ)Z_g_l(j, θ) ≤B/M.𝒦(j) for all j ∈ℕ_M/q and a.e. θ∈ [0, 1[.
* The inequality (3) holds for all j ∈ℕ_M and a.e. θ∈ [0, 1[.
For the proof, we will need the following lemma.
Denote by L^∞(𝕊_pM× [0,1[) the set of functions F on 𝕊_pM× [0,1[ such that for all j∈𝕊_pM, F(j,.)∈ L^∞([0,1[), and let Δ := z_pM |ℓ^2(𝕊)^-1(L^∞(𝕊_pM× [0,1[)). Let g:={g_l}_l∈ℕ_L⊂ℓ^2(𝕊). Then the following statements are equivalent:
* 𝒢(g,L,M,N) is a frame for ℓ^2(𝕊) with frame bounds A≤ B.
* For all f ∈Δ, we have:
A/M∑_j=0^M-1∫_0^1F(j, θ)^2 dθ≤∑_l=0^l-1∑_j=0^M-1∫_0^1 Z_g_l(j, θ) F(j, θ)^2 dθ≤B/M∑_j=0^M-1∫_0^1F(j, θ)^2 dθ.
where F(j,θ), for all j∈ℕ_M and a.e. θ∈ [0,1[, is as defined in lemma <ref>.
By density of L^∞(𝕊_pM× [0,1[) in L^2(𝕊_pM× [0,1[) and by the unitarity of z_pM from ℓ^2(𝕊) onto L^2(𝕊_pM× [0,1[) (Since pM=qN, then 𝕊 is qNℤ-periodic in ℤ), Δ is dense in ℓ^2(𝕊). Hence 𝒢(g,L,M,N) is a frame for ℓ^2(𝕊) with frame bounds A≤ B if and only if for all f∈Δ, Af^2≤∑_l=0^L-1∑_n∈ℤ∑_m=0^M-1|⟨ f, e^2π i m/M.g_l(.-nN)⟩|^2≤ Bf^2.
Let f∈Δ, it is clear that for all r∈ℕ_q, Z_g_l(j,.)F(j,.)_r∈ L^2([0,1[).
We have: [ ∑_l=0^L-1∑_n∈ℤ∑_m=0^M-1|⟨ f, e^2π i m/M.g_l(.-nN)⟩|^2; = ∑_l=0^L-1∑_r=0^q-1∑_n∈ℤ∑_m=0^M-1|⟨ f, e^2π i m/M.g_l(.-(r+nq)N)⟩|^2; = ∑_l=0^L-1∑_r=0^q-1∑_n∈ℤ∑_m=0^M-1|⟨ z_pMf, z_pM(e^2π i m/M.g_l(.-(r+nq)N))⟩|^2 by unitarity of z_pM; = ∑_l=0^L-1∑_r=0^q-1∑_n∈ℤ∑_m=0^M-1|⟨ z_pMf, z_pM(e^2π i m/M.g_l(.-rN-npM))⟩|^2 since qN=pM. ]
By a simple calculation, we obtain that for all j∈ℤ and a.e θ∈ℝ:
z_pM(e^2π i m/M.g_l(.-rN-npM))(j,θ)=z_pMg_l(j-rN,θ) e^2π i nθ e^2π i m/Mj.
Then: [ ⟨ z_pMf, z_pM(e^2π i m/M.g_l(.-rN-npM))⟩; = ∑_j=0^pM-1∫_0^1 z_pMf(j,θ) z_pMg_l(j-rN,θ)e^-2π i nθ dθ. e^-2π i m/Mj; = ∑_j=0^M-1∑_k=0^p-1∫_0^1 z_pMf(j+kM,θ) z_pMg_l(j+kM-rN,θ)e^-2π i nθ dθ. e^-2π i m/Mj; = ∑_j=0^M-1∫_0^1 Z_g_l(j,θ)F(j,θ)_r e^-2π i nθdθ. e^-2π im/Mj; = ∑_j=0^M-1T(j). e^-2π im/Mj where T(j)=∫_0^1 Z_g_l(j,θ)F(j,θ)_r e^-2π i nθdθ; ]
Observe that T is M-periodic. Since {1/√(M)e^2π i m/M.}_m∈ℕ_M is an orthonormal basis for ℓ^2(ℕ_M); the space of M-periodic sequences, then we have:[ ∑_m=0^M-1|⟨ z_pMf, z_pM(e^2π i m/M.g_l(.-rN-npM))⟩|^2; = ∑_m=0^M-1|⟨ T, e^2π i m/M.⟩|^2; = MT^2; = M∑_j=0^M-1|∫_0^1 Z_g_l(j,θ)F(j,θ)_r e^-2π i nθdθ|^2.; ]
Since {e^2π i nθ}_n∈ℤ is an orthonormal basis for L^2([0,1[), then we have:
[ ∑_n∈ℤ∑_m=0^M-1|⟨ z_pMf, z_pM(e^2π i m/M.g_l(.-rN-npM))⟩|^2; = M∑_j=0^M-1∑_n∈ℤ|∫_0^1 Z_g_l(j,θ)F(j,θ)_r e^-2π i nθdθ|^2.; = M∑_j=0^M-1∑_n∈ℤ|⟨Z_g_l(j,.)F(j,.)_r,e^2π i n.⟩|^2.; = M∑_j=0^M-1Z_g_l(j,.)F(j,.)_r^2; = M∑_j=0^M-1∫_0^1 | Z_g_l(j,θ)F(j,θ)_r|^2 dθ; ]
Hence: [ ∑_r=0^q-1∑_n∈ℤ∑_m=0^M-1|⟨ z_pMf, z_pM(e^2π i m/M.g_l(.-rN-npM))⟩|^2; = M∑_j=0^M-1∫_0^1 ∑_r=0^q-1| Z_g_l(j,θ)F(j,θ)_r|^2 dθ; = M∑_j=0^M-1∫_0^1 Z_g_l(j,θ)F(j,θ)^2 dθ.; ]
The norm in the last line is the 2-norm in ℂ^q. Thus: ∑_r=0^q-1∑_n∈ℤ∑_m=0^M-1|⟨ z_pMf, z_pM(e^2π i m/M.g_l(.-rN-npM))⟩|^2=M∑_l=0^L-1∑_j=0^M-1∫_0^1 Z_g_l(j,θ)F(j,θ)^2 dθ.
Hence:
∑_l=0^L-1∑_n∈ℤ∑_m=0^M-1|⟨ f, e^2π i m/M.g_l(.-nN)⟩|^2=M∑_l=0^L-1∑_j=0^M-1∫_0^1 Z_g_l(j,θ)F(j,θ)^2 dθ.
On the other hand, we have by unitarity of z_pM:[ f^2 = z_pMf^2; = ∑_j=0^pM-1∫_0^1| z_pMf(j,θ)|^2 dθ; = ∑_j=0^M-1∑_k=0^p-1∫_0^1 | z_pM(j+kM,θ)|^2 dθ; = ∑_j=0^M-1∫_0^1 ∑_k=0^p-1| F(j,θ)_k|^2 dθ; = ∑_j=0^M-1∫_0^1 F(j,θ)^2 dθ; ]
The norm in the last line is the 2-norm in ℂ^p. Thus:
f^2=∑_j=0^M-1∫_0^1 F(j,θ)^2 dθ.
Then, combining (5) and (6), the proof is completed.
(1)⟹ (3) Assume that 𝒢(g,L,M,N) is a frame for ℓ^2(𝕊). Then for all f∈Δ,
A/M∑_j=0^M-1∫_0^1F(j, θ)^2 dθ≤∑_l=0^l-1∑_j=0^M-1∫_0^1 Z_g_l(j, θ) F(j, θ)^2 dθ≤B/M∑_j=0^M-1∫_0^1F(j, θ)^2 dθ.
Fix x:={x_k}_k∈ℕ_p∈ℂ^p, j_0∈ℕ_M and h∈ L^∞([0,1[), and define for all j∈ℕ_M and a.e θ∈ [0,1[, F(j,θ):={δ_j,j_0 χ_𝒦_j(k) x_k h(θ) }_k∈ℕ_p.
Then A/M∑_j=0^M-1∫_0^1F(j, θ)^2 dθ=A/M𝒦(j_0)x^2∫_0^1 | f(θ)|^2 dθ=A/M⟨𝒦(j_0)x,x⟩∫_0^1 | h(θ)|^2 dθ (lemma <ref>). On the other hand, we have: [ ∑_l=0^l-1∑_j=0^M-1∫_0^1 Z_g_l(j, θ) F(j, θ)^2 dθ = ∑_l=0^l-1∫_0^1 Z_g_l(j_0, θ) F(j_0, θ)^2 dθ; = ∑_l=0^l-1∫_0^1∑_r=0^q-1|(Z_g_l(j_0, θ) F(j_0, θ))_r |^2 dθ; = ∑_l=0^l-1∫_0^1∑_r=0^q-1|∑_k=0^p-1 Z_g_l(j_0, θ)_r,k F(j_0, θ)_k |^2 dθ; = ∑_l=0^l-1∫_0^1∑_r=0^q-1|∑_k=0^p-1 Z_g_l(j_0, θ)_r,k χ_𝒦_j_0(k) x_k h(θ) |^2 dθ; = ∑_l=0^l-1∫_0^1∑_r=0^q-1|∑_k=0^p-1 Z_g_l(j_0, θ)_r,k (𝒦(j_0)x)_k |^2 | h(θ)|^2 dθ; = ∑_l=0^l-1∫_0^1∑_r=0^q-1|(Z_g_l(j_0, θ)𝒦(j_0)x)_r |^2 | h(θ)|^2 dθ; = ∑_l=0^l-1∫_0^1Z_g_l(j_0,θ)𝒦(j_0)x ^2 | h(θ)|^2 dθ; = ∑_l=0^l-1∫_0^1Z_g_l(j_0,θ)x ^2 | h(θ)|^2 dθ by lemma <ref>; = ∫_0^1⟨∑_l=0^l-1 Z_g_l(j_0,θ)^*Z_g_l(j_0,θ)x,x⟩ | h(θ)|^2 dθ.; ]
Then for all j∈ℕ_M, x∈ℂ^p and h∈ L^∞([0,1[), we have:
A/M.⟨𝒦(j)x,x⟩ ∫_0^1| h(θ)|^2 dθ≤∫_0^1⟨∑_l=0^l-1 Z_g_l(j,θ)^*Z_g_l(j,θ)x,x⟩ | h(θ)|^2 dθ≤B/M.⟨𝒦(j)x,x⟩∫_0^1| h(θ)|^2 dθ.
For j∈ℕ_M and x∈ℂ^p fixed, denote C=A/M.⟨𝒦(j)x,x⟩ and D=B/M.⟨𝒦(j)x,x⟩. Assume, by contradiction, that
C>⟨∑_l=0^L-1Z_g_l(j,θ)^*Z_g_l(j,θ)x,x⟩ ,
on a subset of [0,1[ with a positive measure. Denote
D={θ∈ [0,1[: (8) holds}.
For all k∈ℕ, denote D_k:={θ∈ [0,1[: C-C/k≤⟨∑_l=0^L-1Z_g_l(j,θ)^*Z_g_l(j,θ)x,x⟩≤ C-C/(k+1)}.
It is clear that {D_k}_k∈ℕ forms a partition of D. Since mes(D)>0, there exists k∈ℕ such that mes(D_k)>0. Let h:=χ_D_k; we have:
∫_0^1⟨∑_l=0^L-1 Z_g_l(j,θ)^*Z_g_l(j,θ)x,x⟩ | h(θ)|^2 dθ = ∫_D_k⟨∑_l=0^L-1 Z_g_l(j,θ)^*Z_g_l(j,θ)x,x⟩ dθ ≤ (C-C/(k+1))·mes(D_k) < C·mes(D_k) = C∫_0^1 | h(θ)|^2 dθ. Contradiction with (7).
Suppose, again by contradiction, that
D<⟨∑_l=0^L-1Z_g_l(j,θ)^*Z_g_l(j,θ)x,x⟩,
on a subset of [0,1[ with a positive measure. Denote
D'={θ∈ [0,1[: (9) holds}.
For all k∈ℕ, m∈ℕ, denote D_k,m':={θ∈ [0,1[: D(k+1/(m+1))≤⟨∑_l=0^L-1Z_g_l(j,θ)^*Z_g_l(j,θ)x,x⟩≤ D(k+1/m)}.
It is clear that {D_k,m'}_k,m∈ℕ forms a partition of D'. Since mes(D')>0, there exist k∈ℕ and m∈ℕ such that mes(D_k,m')>0. Let h:=χ_D_k,m'; we have:
∫_0^1⟨∑_l=0^L-1 Z_g_l(j,θ)^*Z_g_l(j,θ)x,x⟩ | h(θ)|^2 dθ = ∫_D_k,m'⟨∑_l=0^L-1 Z_g_l(j,θ)^*Z_g_l(j,θ)x,x⟩ dθ ⩾ D(k+1/(m+1))·mes(D_k,m') > D·mes(D_k,m') = D∫_0^1 | h(θ)|^2 dθ. Contradiction with (7).
Hence for all j∈ℕ_M and a.e. θ∈ [0,1[, we have:
A/M.𝒦(j) ≤∑_l∈ℕ_L Z_g_l^*(j, θ)Z_g_l(j, θ) ≤B/M.𝒦(j).
(3)⟹(1) Assume (3). For f∈Δ, j∈ℕ_M and a.e θ∈ [0,1[, the k-th component of F(j,θ) is zero if k∉𝒦_j. Then 𝒦(j)F(j,θ)=F(j,θ). Then:
A/MF(j,θ)^2≤⟨∑_l=0^L-1 Z_g_l(j,θ)^*Z_g_l(j,θ)F(j,θ),F(j,θ)⟩≤B/MF(j,θ)^2.
Hence:
A/M∑_j=0^M-1∫_0^1F(j,θ)^2≤∑_l=0^L-1∑_j=0^M-1∫_0^1 Z_g_l(j,θ)F(j,θ)^2≤B/M∑_j=0^M-1∫_0^1F(j,θ)^2.
Then lemma <ref> implies (1).
(3)⟹ (2) Since ℕ_M/q⊂ℕ_M.
(2)⟹ (3) Assume (2). Then the inequality (3) holds for all j ∈ℕ_M/q and a.e. θ∈ [0, 1[. Let's prove that it holds for all j ∈ℕ_M and a.e. θ∈ [0, 1[. Let j ∈ℕ_M. By Lemma <ref>, there exists a unique (j', r', k', ℓ') ∈ℕ_M/q×ℕ_q ×ℕ_p ×ℤ such that j = j' + k'M - r'N + ℓ'qN. Then, by the
quasiperiodicity of the discrete Zak transform, we have, for all l∈ℕ_L, after a simple calculation:
(Z_g_l(j, θ)^*Z_g_l(j, θ))_k_1, k_2 = ∑_r=0^q-1 z_qNg_l(j' + (k_1 + k')M - (r + r')N, θ)^* z_qNg_l(j' + (k_2 + k')M - (r + r')N, θ), which equals
(Z_g_l(j', θ)^*Z_g_l(j', θ))_k_1+k', k_2+k' if k_1 + k' < p and k_2 + k' < p;
e^-2π iθ (Z_g_l(j', θ)^*Z_g_l(j', θ))_k_1+k', k_2+k'-p if k_1 + k' < p and k_2 + k' ≥ p;
e^2π iθ (Z_g_l(j', θ)^*Z_g_l(j', θ))_k_1+k'-p, k_2+k' if k_1 + k' ≥ p and k_2 + k' < p;
(Z_g_l(j', θ)^*Z_g_l(j', θ))_k_1+k'-p, k_2+k'-p if k_1 + k' ≥ p and k_2 + k' ≥ p,
for k_1, k_2 ∈ℕ_p and a.e. θ∈ [0, 1[. Define V: ℂ^p →ℂ^p by Vx = y = (y_0, y_1, …, y_p-1)^t:
y_k = e^-2π i θx_k-k'+p if 0 ≤ k < k', and y_k = x_k-k' if k' ≤ k < p,
for x ∈ℂ^p. Then V is a unitary operator, and
⟨ Z_g_l(j, θ)^*Z_g_l(j, θ)x, x ⟩ = ⟨ Z_g_l(j', θ)^*Z_g_l(j', θ)Vx, Vx ⟩,
for a.e. θ∈ [0, 1[ and all x ∈ℂ^p. Then, by (2), we have:
A/M⟨ V^*𝒦(j') Vx, x ⟩≤⟨∑_l=0^L-1Z_g_l(j, θ)^*Z_g_l(j, θ)x, x ⟩≤B/M⟨ V^*𝒦(j') Vx, x ⟩,
for a.e. θ∈ [0, 1) and each x ∈ℂ^p. When k+k' < p, k+k' ∈𝒦_j' if and only if j'+(k+k')M ∈𝕊, equivalently, j' + k'M - r'N + ℓ'qN + kM ∈𝕊, i.e. j + kM ∈𝕊. Therefore, k + k' ∈𝒦_j if and only if k ∈𝒦_j when k + k' < p. Similarly, k + k' - p ∈𝒦_j' if and only if k ∈𝒦_j when
k + k' ≥ p. It follows that:
V^*𝒦(j')V = 𝒦(j).
This, together with (10), gives (3). The proof is completed.
In the case 𝕊=ℤ, 𝒦_j=ℕ_p for all j∈ℕ_M/q. Then condition (2) in proposition <ref> is equivalent to rank(Z_g(j,θ))=p for all j∈ℕ_M/q and a.e. θ∈ [0,1[. And condition (2) in proposition <ref> is equivalent to: for all j∈ℕ_M/q and a.e. θ∈ [0,1[,
A/M.I_p,p≤∑_l∈ℕ_L Z_g_l^*(j, θ)Z_g_l(j, θ) ≤B/M.I_p,p.
Where I_p,p is the identity matrix in ℳ_p,p.
§ ADMISSIBILITY CONDITIONS FOR A COMPLETE MULTIWINDOW GABOR SYSTEM AND A MULTIWINDOW GABOR FRAME
Note that, in what follows, we use all the notations already introduced, without introducing them again. In this section, we study conditions for a periodic set 𝕊 to admit a complete multi-window Gabor system and a multi-window Gabor frame. Let L,M,N∈ℕ and p,q∈ℕ such that N/M = p/q and p∧ q=1.
In what follows, we give some useful lemmas for the rest.
* card(𝒦_j)≤ qL for all j∈ℕ_M/q⟹card(𝕊_N)≤ LM.
* Assume that card(𝒦_j)≤ qL for all j∈ℕ_M/q. Then:
card(𝒦_j)=qL for all j∈ℕ_M/q ⟺ card(𝕊_N)= LM.
* Assume that card(𝒦_j)≤ qL for all j∈ℕ_M/q. We have:
[ card(𝕊_N) = ∑_j∈ℤχ_𝕊_N(j); = ∑_j∈ℕ_M/q∑_n∈ℤχ_𝕊_N(j+M/qn); = ∑_j∈ℕ_M/qcard(𝒦_j) by remark <ref>; ≤ M/q.qL=LM. ]
* Assume that card(𝒦_j)≤ qL for all j∈ℕ_M/q.
Assume that card(𝒦_j)=qL for all j∈ℕ_M/q. Then by the proof of (1), we have: [ card(𝕊_N) = ∑_j∈ℕ_M/qcard(𝒦_j); = M/q.qL=LM. ]
Conversely, assume that card(𝕊_N)= LM. Again by the proof of (1), we have ∑_j∈ℕ_M/qcard(𝒦_j)=card(𝕊_N)=LM=M/q.qL. Since
card(𝒦_j)≤ qL for all j∈ℕ_M/q, then card(𝒦_j)= qL for all j∈ℕ_M/q.
Let L,M∈ℕ and E_0, E_1, …, E_L-1⊂ℤ be mutually disjoint. Denote E=⋃_l∈ℕ_LE_l. Then the following statements are equivalent:
* {e^2π i m/M.χ_E_l}_m∈ℕ_M, l∈ℕ_L is complete in ℓ^2(E).
* For all l∈ℕ_L, {e^2π i m/M.χ_E_l}_m∈ℕ_M is complete in ℓ^2(E_l).
(1)⟹ (2) Assume (1). Fix l_0∈ℕ_L and let f∈ℓ^2(E_l_0) be orthogonal to {e^2π i m/M.χ_E_l_0}_m∈ℕ_M.
Define f̃∈ℓ^2(E) by f̃(j)=f(j) if j∈ E_l_0 and 0 otherwise. It is clear that if l≠ l_0, f̃ is orthogonal to {e^2π i m/M.χ_E_l}_m∈ℕ_M. And we have:
⟨f̃,e^2π i m/M.χ_E_l_0⟩ = ∑_j∈ Ef̃(j)e^-2π i m/Mjχ_E_l_0(j) = ∑_j∈ E_l_0f(j)e^-2π i m/Mj = 0, since f is orthogonal to {e^2π i m/M.χ_E_l_0}_m∈ℕ_M.
Then f̃ is orthogonal to {e^2π i m/M.χ_E_l}_m∈ℕ_M, l∈ℕ_L, which is complete in ℓ^2(E); thus f̃=0 on E, and then f=0 on E_l_0.
(2)⟹ (1) Assume (2) and let h∈ℓ^2(E) be orthogonal to {e^2π i m/M.χ_E_l}_m∈ℕ_M, l∈ℕ_L. For all l∈ℕ_L, define h_l∈ℓ^2(E_l) as the restriction of h to E_l, i.e. h_l:=h|_E_l. Fix l∈ℕ_L. Since h is orthogonal to {e^2π i m/M.χ_E_l}_m∈ℕ_M, then ∑_j∈ Eh(j) e^-2π i m/Mjχ_E_l(j)=0, then ∑_j∈ E_lh(j) e^-2π i m/Mjχ_E_l(j)=0, thus ∑_j∈ E_lh_l(j) e^-2π i m/Mjχ_E_l(j)=⟨ h_l,e^2π i m/M.χ_E_l⟩ =0. Hence h_l is orthogonal to {e^2π i m/M.χ_E_l}_m∈ℕ_M, which is complete in ℓ^2(E_l). Hence h_l=0 on E_l. This holds for all l∈ℕ_L; therefore h=0 on E. Hence {e^2π i m/M.χ_E_l}_m∈ℕ_M,l∈ℕ_L is complete in ℓ^2(E).
Let L,M∈ℕ and E_0, E_1, …, E_L-1⊂ℤ be mutually disjoint. Denote E=⋃_l∈ℕ_LE_l. Then the following statements are equivalent:
* {e^2π i m/M.χ_E_l}_m∈ℕ_M, l∈ℕ_L is a tight frame for ℓ^2(E) with frame bound M.
* For all l∈ℕ_L, {e^2π i m/M.χ_E_l}_m∈ℕ_M is a tight frame for ℓ^2(E_l) with frame bound M.
(1)⟹ (2) Assume (1). Fix l_0∈ℕ_L and let f∈ℓ^2(E_l_0). Define f̃∈ℓ^2(E) by f̃(j)=f(j) if j∈ E_l_0 and 0 otherwise. It is clear that ⟨f̃,e^2π i m/M.χ_E_l⟩=0 if l≠ l_0 and that f̃ and f have the same norm. Together with the fact that {e^2π i m/M.χ_E_l}_m∈ℕ_M, l∈ℕ_L is a tight frame for ℓ^2(E) with frame bound M, we have:
Hence {e^2π i m/M.χ_E_l_0}_m∈ℕ_M is a tight frame for ℓ^2(E_l_0) with frame bound M. And this for all l_0∈ℕ_L.
(2)⟹ (1) Assume (2). Let h∈ℓ^2(E). For all l∈ℕ_L, define h_l∈ℓ^2(E_l) as the restriction of h to E_l, i.e. h_l=h|_E_l. We have ∑_j∈ E| h(j)|^2=∑_l∈ℕ_L∑_j∈ E_l| h_l(j)|^2, then h^2=∑_l∈ℕ_Lh_l^2. It is also clear that ⟨ h,e^2π i m/M.χ_E_l⟩=⟨ h_l,e^2π i m/M.χ_E_l⟩. Since for all l∈ℕ_L, {e^2π i m/M.χ_E_l}_m∈ℕ_M is a tight frame for ℓ^2(E_l) with frame bound M, then for all l∈ℕ_L, we have Mh_l^2=∑_m∈ℕ_M|⟨ h_l,e^2π i m/M.χ_E_l⟩|^2. Hence:
Mh^2=∑_l∈ℕ_L∑_m∈ℕ_M|⟨ h,e^2π i m/M.χ_E_l⟩|^2.
This holds for all h∈ℓ^2(E). Hence {e^2π i m/M.χ_E_l}_m∈ℕ_M, l∈ℕ_L is a tight frame for ℓ^2(E) with frame bound M.
Let L,M∈ℕ and E_0, E_1, …, E_L-1⊂ℤ be mutually disjoint. Denote E=⋃_l∈ℕ_LE_l. Then the following statements are equivalent:
* {e^2π i m/M.χ_E_l}_m∈ℕ_M, l∈ℕ_L is a tight frame for ℓ^2(E) with frame bound M.
* {e^2π i m/M.χ_E_l}_m∈ℕ_M, l∈ℕ_L is complete in ℓ^2(E).
* For all l∈ℕ_L, E_l is Mℤ-congruent to a subset of ℕ_M.
* For all l∈ℕ_L, ∑_k∈ℤχ_E_l(.+kM)≤ 1 on ℤ.
It is a direct result of lemma <ref>, lemma <ref> and lemma <ref> together.
The following proposition presents a characterization for the admissibility of𝕊to admit a complete multi-window Gabor system𝒢(g, L, M, N).
The following statements are equivalent:
* There exists g:={g_l}_l∈ℕ_L⊂ℓ^2(𝕊) such that 𝒢(g,L,M,N) is complete in ℓ^2(𝕊).
* For all j∈ℕ_M/q, we have:
card(𝒦_j)≤ qL.
* (5) holds for all j∈ℤ.
By remark <ref>, card(𝒦_j) is M/q-periodic. Then (2)⟺ (3).
(1)⟹ (2) Assume (1). Let j∈ℕ_M/q. Then by proposition <ref>, card(𝒦_j)=rank(Z_g(j,θ) )≤ qL since Z_g(j,θ)∈ℳ_qL,p.
(2)⟹ (1) Assume (2). By proposition <ref>, it suffices to find a matrix-valued function Z:ℕ_M/q× [0,1[→ℳ_qL,p such that Z(j,.)_r,k∈ L^2([0,1[) for all (j,r,k)∈ℕ_M/q×ℕ_qL×ℕ_p, for all j∈ℕ_M/q, if k∉𝒦_j, the k-th column of Z(j,.) is identically zero, and such that rank(Z(j,θ))=card(𝒦_j). Indeed, in this case, for all l∈ℕ_L, define Z_l as the matrix-valued function Z_l:ℕ_M/q× [0,1[→ℳ_q,p defined for all j∈ℕ_M/q, θ∈[0,1[ by Z_l(j,θ):=Z(j,θ)_lq≤ r≤ (l+1)q-1, 0≤ k≤ p-1. Then by lemma <ref>, there exists a unique g_l∈ℓ^2(𝕊) such that Z_g_l=Z_l. Denote g={g_l}_l∈ℕ_L; then for all j∈ℕ_M/q, θ∈ [0,1[, Z(j,θ)=Z_g(j,θ), hence by proposition <ref>, 𝒢(g,L,M,N) is complete in ℓ^2(𝕊) since rank(Z_g(j,θ))=card(𝒦_j).
For the existence of such a matrix-valued function: let j∈ℕ_M/q. Define a constant qL× p matrix-valued function Z(j,.):=(Z^0(j,.),Z^1(j,.),…, Z^p-1(j,.) ) on [0,1[, where Z^k(j,.) is the k-th column of Z(j,.) for k∈ℕ_p, such that Z^k(j,.)=0 if k∉𝒦_j and {Z^k(j,.), k∈𝒦_j} is linearly independent in ℂ^qL. This is possible since card(𝒦_j)≤ qL. Then for a.e. θ∈ [0,1[, rank(Z(j,θ) )=card(𝒦_j). Hence we obtain the desired matrix-valued function Z.
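For concreteness, this construction is easy to realize numerically. The sketch below is our illustration (the function name and example values are not from the text); it takes distinct standard basis vectors of ℂ^qL as the columns indexed by 𝒦_j, which is one valid choice of a linearly independent family:

import numpy as np

def build_Z(q, L, p, K_j):
    # Constant qL x p matrix Z(j, .): columns vanish outside K_j and are
    # linearly independent on K_j (possible exactly when card(K_j) <= qL).
    assert len(K_j) <= q * L
    Z = np.zeros((q * L, p))
    for r, k in enumerate(sorted(K_j)):
        Z[r, k] = 1.0          # r-th standard basis vector as the k-th column
    return Z

# Example values: q = 3, L = 2, p = 5 and K_0 = {0, 2, 3, 4}
Z = build_Z(3, 2, 5, {0, 2, 3, 4})
assert np.linalg.matrix_rank(Z) == 4   # rank(Z(j, .)) = card(K_0)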
The following statements are equivalent:
* There exist E_0, E_1, …, E_L-1⊂ℤ mutually disjoint such that 𝒢( {χ_E_l}_l∈ℕ_L, L,M,N) is a tight frame for ℓ^2(𝕊) with frame bound M.
* For all j∈ℕ_M/q, we have:
card(𝒦_j)≤ qL.
* (5) holds for all j∈ℤ.
By remark <ref>, card(𝒦_j) is M/q-periodic. Then (2)⟺ (3).
(1)⟹ (2) By proposition <ref>.
(2)⟹ (1) It suffices to find E_0,E_1, …, E_L-1⊂ℤ mutually disjoint such that for all l∈ℕ_L, E_l is Mℤ-congruent to a subset of ℕ_M and E:=⋃_l∈ℕ_LE_l is Nℤ-congruent to 𝕊_N. In fact, in this case, we have ℓ^2(𝕊)=⊕_n∈ℤℓ^2(E+nN) and, by lemma <ref>, {e^2π im/M.χ_E_l}_m∈ℕ_M,l∈ℕ_L is a tight frame for ℓ^2(E) with frame bound M. Then for all n∈ℤ, {e^2π im/M.χ_E_l(.-nN)}_m∈ℕ_M,l∈ℕ_L is a tight frame for ℓ^2(E+nN) with frame bound M. Hence, by similar arguments used in the proof of lemma <ref>, 𝒢( {χ_E_l}_l∈ℕ_L ,L,M,N):={e^2π im/M.χ_E_l(.-nN)}_n∈ℤ, m∈ℕ_M,l∈ℕ_L is a tight frame for ⊕_n∈ℤℓ^2(E+nN)=ℓ^2(𝕊) with frame bound M.
For the construction of the desired E_l:
Let j∈ℕ_M/q. Let K be the maximal integer satisfying Kq≤ card(𝒦_j). For all l∈ℕ_K, define 𝒦_j^l as the (l+1)-th block of q elements of 𝒦_j, define 𝒦_j^K as the set of the remaining elements of 𝒦_j, and for l∈ℕ_L∖ℕ_K+1, take 𝒦_j^l=∅. For all l∈ℕ_L such that 𝒦_j^l≠∅, write 𝒦_j^l:={k_l,j,i: i∈ℕ_card(𝒦_j^l)} and choose {r_l,j,i: i∈ℕ_card(𝒦_j^l)}⊂ℕ_q such that r_l,j,i≠ r_l,j,i' if i≠ i'. This choice is guaranteed since card(𝒦_j^l)≤ q for all l∈ℕ_L. For all l∈ℕ_L, define:
E_j^l={[ ∅ if 𝒦_j^l=∅; {j+k_l,j,iM-r_l,j,iN: i∈ℕ_card(𝒦_j^l)} otherwise. ].
Take for all l∈ℕ_L, E_l:=⋃_j∈ℕ_M/qE_j^l.
→ Let's show that for all l∈ℕ_L, E_l is Mℤ-congruent to a subset of ℕ_M. Let l∈ℕ_L. For this, it suffices to show that for all j,j'∈ℕ_M/q, i∈ℕ_card(𝒦_j^l) and i'∈ℕ_card(𝒦_j'^l), we have: M| (j+k_l,j,iM-r_l,j,iN)-(j'+k_l,j',i'M-r_l,j',i'N) ⟹ j=j' and i=i'.
Let j,j'∈ℕ_M/q and i,i'∈ℕ_card(𝒦_j^l) and suppose that M| (j+k_l,j,iM-r_l,j,iN)-(j'+k_l,j',i'M-r_l,j',i'N). Then M| j-j'+(k_l,j,i-k_l,j',i')M-(r_l,j,i-r_l,j',i')N.
Put s=M/q, then M=sq and N=sp. Thus sq| j-j'+(k_l,j,i-k_l,j',i')sq-(r_l,j,i-r_l,j',i')sp, then s|j-j', hence j=j' since j,j'∈ℕ_s. On the other hand, we have sq|(r_l,j,i-r_l,j,i')sp, then q|(r_l,j,i-r_l,j,i')p, thus q|r_l,j,i-r_l,j,i' since p∧ q=1, hence r_l,j,i=r_l,j,i' since r_l,j,i,r_l,j,i'∈ℕ_q. And then i=i'.
Hence for all l∈ℕ_L, E_l is Mℤ-congruent to a subset of ℕ_M.
→ Let's prove now that E=⋃_l∈ℕ_LE_l is Nℤ-congruent to 𝕊_N. We show first that E is Nℤ-congruent to a subset of ℕ_N. For this, let (l,j,i), (l',j',i')∈ℕ_L×ℕ_M/q×ℕ_card(𝒦_j) and suppose that N| (j-j')+(k_l,j,i-k_l',j',i')M-(r_l,j,i-r_l',j',i')N. Put s=M/q, then M=sq and N=sp. Thus sp| j-j'+(k_l,j,i-k_l',j',i')sq-(r_l,j,i-r_l',j',i')sp, then s|j-j', hence j=j' since j,j'∈ℕ_s. On the other hand, we have sp|(k_l,j,i-k_l',j,i')sq, then p|k_l,j,i-k_l',j,i', hence k_l,j,i=k_l',j,i'
since k_l,j,i,k_l',j,i'∈ℕ_p. Then l=l' and i=i' by definition of the elements k_l,j,i. Thus E is Nℤ-congruent to a subset of ℕ_N. Observe that E⊂𝕊, then E is Nℤ-congruent to a subset of 𝕊_N. By what precedes, we have, in particular, that the E_j^l are mutually disjoint (and also the E_l are mutually disjoint). Then:

card(E) = ∑_l∈ℕ_L∑_j∈ℕ_M/q card(𝒦_j^l)
        = ∑_j∈ℕ_M/q card(𝒦_j)
        = ∑_j∈ℕ_M/q∑_n∈ℤχ_𝕊_N(j+(M/q)n)    (by remark <ref>)
        = ∑_j∈ℤχ_𝕊_N(j)
        = card(𝕊_N).
Hence E is Nℤ-congruent to 𝕊_N.
The following result presents an admissibility characterization for 𝕊 to admit a multi-window Gabor (Parseval) frame 𝒢(g,L,M,N).
The following statements are equivalent:
* There exist g:={g_l}_l∈ℕ_L⊂ℓ^2(𝕊) such that 𝒢(g,L,M,N) is a Parseval frame for ℓ^2(𝕊).
* There exist g:={g_l}_l∈ℕ_L⊂ℓ^2(𝕊) such that 𝒢(g,L,M,N) is a frame for ℓ^2(𝕊).
* For all j∈ℕ_M/q (for all j∈ℤ), we have:
card(𝒦_j)≤ qL.
Clearly (1) implies (2). And since a frame is in particular a complete sequence, (2) implies (3) by proposition <ref>. And by proposition <ref>, (3) implies the existence of nonempty E_0,E_1,…,E_L-1⊂ℤ such that 𝒢( {χ_E_l}_l∈ℕ_L, L,M,N) is a tight frame for ℓ^2(𝕊) with frame bound M. Hence 𝒢( {1/√(M).χ_E_l}_l∈ℕ_L, L,M,N) is a Parseval frame for ℓ^2(𝕊).
The following proposition presents a characterization for the admissibility of 𝕊 to admit an L-window Gabor basis and an L-window Gabor orthonormal basis 𝒢(g, L, M, N).
The following statements are equivalent:
* There exist g:={g_l}_l∈ℕ_L⊂ℓ^2(𝕊) such that 𝒢(g,L,M,N) is an orthonormal basis for ℓ^2(𝕊).
* There exist g:={g_l}_l∈ℕ_L⊂ℓ^2(𝕊) such that 𝒢(g,L,M,N) is a Riesz basis for ℓ^2(𝕊).
* For all j∈ℕ_M/q (for all j∈ℤ), we have:
card(𝒦_j)= qL.
It is well known that (1) implies (2). Assume that 𝒢(g,L,M,N) is a Riesz basis for ℓ^2(𝕊); then by corollary <ref> we have card(𝒦_j)≤ qL for all j∈ℕ_M/q (for all j∈ℤ). And by proposition <ref>, we have card(𝕊_N)=LM. Then by lemma <ref>, we have card(𝒦_j)= qL. Hence (2) implies (3). Assume that card(𝒦_j)= qL. Then by corollary <ref>, there exists g:={g_l}_l∈ℕ_L⊂ℓ^2(𝕊) such that 𝒢(g,L,M,N) is a Parseval frame for ℓ^2(𝕊). By lemma <ref>, we have card(𝕊_N)=LM, and then by proposition <ref>, 𝒢(g,L,M,N) is a Riesz basis for ℓ^2(𝕊), hence an orthonormal basis for ℓ^2(𝕊) (lemma <ref>). Hence (3) implies (1).
In the case of 𝕊=ℤ, we have for all j∈ℕ_M/q, 𝒦_j=ℕ_p. Then condition (3) in corollary <ref> is equivalent to p≤ Lq, which is equivalent to N≤ LM, and we recover proposition 3.5 in <cit.>. Likewise, condition (3) in proposition <ref> is equivalent to N=LM, and we recover proposition 3.11 in <cit.>.
We finish this work with the following example:
In this example, we use the notations already introduced in what above.
Let M=3 and N=5. Let 𝕊={0,1,2,4}+5ℤ. It is clear that p=5 and q=3. Then M/q=1, so ℕ_M/q={0}. We have clearly 𝒦_0={0,2,3,4}. Then card(𝒦_0)=4>q. By corollary <ref>, there does not exist a Gabor frame with a single window for ℓ^2(𝕊), but by the same corollary we can always find a multi-window Gabor frame for ℓ^2(𝕊) with L windows for all L⩾ 2, since card(𝒦_j)=4≤3×2=6. Here is an example of a 2-window Gabor frame for ℓ^2(𝕊).
Define g_0:=χ_{-1,0,1} and g_1:=χ_{-4,4,12}; since -1,0,1,-4,4,12∈𝕊, then g_0,g_1∈ℓ^2(𝕊). Observe also that 𝕊={0,1,2,4,5,6,7,9,10,11,12,14}+15ℤ. Then we have g_0 vanishes on {-10, -5, -4, 2, 3, 4, 6, 7, 8, 9, 12, 13}+15ℤ⋃{-1,0,1}+15(ℤ-{0}), and g_1 vanishes on {-10, -5, -1, 0, 1, 2, 3, 6, 7, 8, 9, 13}+15ℤ⋃{-4,4,12}+15(ℤ-{0}). Then, after a simple computation, we have for a.e. θ∈ [0,1[:
Z_g_0(0,θ) = [ 1 0 0 0 0
               0 0 1 0 0
               0 0 0 1 0 ],

Z_g_1(0,θ) = [ 0 0 0 0 1
               0 0 0 1 0
               0 0 1 0 0 ].
Then for all x:={x_k}_k∈ℕ_5, we have: ⟨ Z_g_0(0,θ)^*Z_g_0(0,θ)x,x⟩ =| x_0|^2+| x_2|^2+| x_3|^2 and ⟨ Z_g_1(0,θ)^*Z_g_1(0,θ)x,x⟩ =| x_2|^2+| x_3|^2+| x_4|^2. Then ⟨ Z_g_0(0,θ)^*Z_g_0(0,θ)x,x⟩ +⟨ Z_g_1(0,θ)^*Z_g_1(0,θ)x,x⟩
=| x_0|^2+2| x_2|^2+2| x_3|^2+| x_4|^2. Since ⟨𝒦(0)x,x⟩=| x_0|^2+| x_2|^2+| x_3|^2+| x_4|^2, then we obtain:
⟨𝒦(0)x,x⟩≤⟨ Z_g_0(0,θ)^*Z_g_0(0,θ)x,x⟩ +⟨ Z_g_1(0,θ)^*Z_g_1(0,θ)x,x⟩≤ 2⟨𝒦(0)x,x⟩.
Hence, by proposition <ref>, 𝒢({g_0,g_1}, 2,3,5) is a 2-window Gabor frame for ℓ^2(𝕊) with frame bounds 3 and 6.
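The numerical data of this example are easy to verify by machine. The short sketch below assumes the concrete reading 𝒦_j={k∈ℕ_p : (j+kM) mod N ∈𝕊}, which reproduces 𝒦_0={0,2,3,4} above and is used here only as an illustration:

M, N, L = 3, 5, 2
p, q = 5, 3                      # N/M = p/q in lowest terms, so M/q = 1
S_res = {0, 1, 2, 4}             # residues of S = {0,1,2,4} + 5Z modulo N

def K(j):
    return {k for k in range(p) if (j + k * M) % N in S_res}

for j in range(M // q):          # j in N_{M/q} = {0}
    print(sorted(K(j)))          # -> [0, 2, 3, 4], so card(K_0) = 4
    print(len(K(j)) <= q * 1)    # False: no single-window Gabor frame
    print(len(K(j)) <= q * L)    # True: a 2-window Gabor frame exists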
§ ACKNOWLEDGMENTS
It is our great pleasure to thank the referee for his careful reading of the paper and for several helpful suggestions.
§ ETHICS DECLARATIONS
§.§ Availability of data and materials
Not applicable.
§.§ Conflict of interest
The authors declare that they have no competing interests.
§.§ Funding
Not applicable.
1 O. Christensen, An Introduction to Frames and Riesz Bases, 2nd ed., Birkhäuser, 2016.
2 C. Heil, A Discrete Zak Transform, Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 1999, pp. 1465-1468.
3 N. Khachiaa, M. Rossafi and S. Kabbaj, Multi-window Gabor frames on discrete periodic sets, arXiv:2407.05495v1 [math.FA] 07 Jul 2024.
4 Y.-Z. Li and Q.-F. Lian, Gabor systems on discrete periodic sets, Sci. China Ser. A,52(2009), 1639-1660.
5 Y.-Z. Li and Q.-F. Lian, Tight Gabor sets on discrete periodic sets, Acta Appl. Math., 107 (2009), 105-119.
6 Y.-Z. Li and Q.-F. Lian, Multiwindow Gabor frames and oblique Gabor duals on discrete periodic sets, Sci. China Math., 54(2011), 987-1010.
7 Q.-F. Lian, J. Gong and M.-H. You, Time domain characterization of multiwindow Gabor systems on discrete periodic sets, Indian J. Pure Appl. Math., 44(1):47-76, February 2013.
8 Q.-F. Lian and Y.-Z. Li, The duals of Gabor frames on discrete periodic sets, J. Math. Phys., 50(2009), 013534, 22pp. |
http://arxiv.org/abs/2409.03012v1 | 20240904181035 | Design and Evaluation of Camera-Centric Mobile Crowdsourcing Applications | [
"Abby Stylianou",
"Michelle Brachman",
"Albatool Wazzan",
"Samuel Black",
"Richard Souvenir"
] | cs.HC | [
"cs.HC",
"cs.CV"
] |
Saint Louis University
Saint Louis
MO
USA
[email protected]
IBM Research
Cambridge
MA
USA
[email protected]
Temple University
Philadelphia
PA
USA
[email protected]
Temple University
Philadelphia
PA
USA
[email protected]
Temple University
Philadelphia
PA
USA
[email protected]
§ ABSTRACT
The data that underlies automated methods in computer vision and machine learning, such as image retrieval and fine-grained recognition,
often comes from crowdsourcing.
In contexts that rely on the intrinsic motivation
of users, we seek to understand how the application design affects a user's
willingness to contribute and the quantity and quality of the data they capture.
In this project, we designed three versions of a camera-based mobile crowdsourcing
application, which varied in the amount of labeling effort requested of the user
and conducted a user study to evaluate the trade-off between the level of
user-contributed information requested and the quantity and
quality of labeled images collected. The results suggest that higher
levels of user labeling do not lead to reduced contribution.
Users collected and annotated the most images
using the application version with the highest requested level of
labeling with no decrease in user satisfaction. In preliminary experiments, the additional
labeled data supported increased performance on an image retrieval task.
Design and Evaluation of Camera-Centric
Mobile Crowdsourcing Applications
Richard Souvenir
September 9, 2024
=========================================================================
§ INTRODUCTION
Modern machine learning applications rely on example data, which is often acquired and labeled by people.
The steady increase in the
performance of these methods has been fueled by a corresponding growth in the availability of
high-quality labeled data.
For the case of images,
mobile devices are the primary modality for capturing and submitting relevant photographs.
In addition to the images themselves, these camera-centric mobile applications often request user-provided labels or annotations.
Designing these applications effectively can be quite challenging, especially in the context of citizen science <cit.> applications,
which rely on the intrinsic motivation of users to contribute to projects via crowdsourcing. The design of mobile crowdsourcing applications
should ensure that the image capture and label process does not discourage users, yet still results in effectively labeled data.
There is a general consensus that the on-boarding process, or amount of effort
required for a user to start using a crowdsourcing application should be minimized
(e.g., by not requiring user signups <cit.> or keeping tasks small and easy to
understand <cit.>).
However, it is not clear
that this should be a universal guideline. In the case of crowdsourcing applications,
minimizing user effort may limit the type and/or amount of data that they might actually
be willing to contribute. Recent work suggests that requesting more effort from contributors does not actually
lead to lower engagement or user satisfaction, as in the context of audio labeling <cit.>. There has been little research investigating this phenomenon for the increasingly-ubiquitous
class of camera-centric mobile applications, such as popular citizen science applications used for bird watching <cit.> or environmental studies <cit.>.
In this work, we compare three designs of a camera-centric mobile application, as shown in Figure <ref>, which
differ based on the type of information requested:
Unlabeled For the baseline method, the user takes a picture, and no additional information beyond the scene captured in the image is required.
Weakly Labeled “What is in the image?” In addition to image capture, the user is asked to name (or classify) the objects contained in the scene by selecting from a pre-defined list.
Strongly Labeled “What is in the image and where is it?” The user is asked to identify the location of particular objects in the image by either changing the focus area in the application to outline a particular object or capturing the image in a way that the object of interest is within the focus area boundaries.
This categorization aligns with popular paradigms in machine learning: unsupervised learning processes unlabeled training data,
weakly supervised data involves training data whose annotations are limited in some manner,
and strongly supervised (or more commonly, simply supervised) learning makes use of fully annotated training data.
We conducted a user study to evaluate the trade-offs between the requested level of labeling, the quantity and quality of
labeled images collected by participants, and user satisfaction with the different application variants.
The results of this study suggest that, for the case of camera-centric crowdsourcing mobile applications, higher levels
of labeling effort do not lead to less engagement or user satisfaction. In fact, we observed the opposite; users collected
and annotated the most images using the application version with
the highest level of labeling with no decrease in user satisfaction. These findings could help to inform the design and
implementation of mobile crowdsourcing applications.
§ BACKGROUND
Crowdsourcing leverages the knowledge and understanding of the crowd to generate, annotate, and/or analyze data.
Commonly, such tasks are outsourced to online marketplaces such as
Amazon Mechanical Turk (AMT)[<https://mturk.com>], where
a distributed collection of workers are paid a small fee to complete
a well-defined task <cit.>. Other campaigns, which
fall under the umbrella of citizen science or participatory sensing,
rely on volunteers, who often participate out of their own personal
or scientific interest <cit.>. Recent work suggests
that these intrinsic motivations (e.g., altruism, moral obligation, sense of social good, curiosity) can be as compelling as financial compensation <cit.>.
In the fields of computer vision and machine learning, there is a long history of leveraging human expertise to provide training data for automated, learning-based algorithms <cit.>. This includes both the collection of imagery, as well as task-specific annotations providing information about the images (e.g., scene classification labels <cit.>, object classification labels <cit.>, object bounding boxes <cit.>, per-pixel image
labels <cit.>, and image and object attribute labels <cit.>).
There are a number of camera-centric mobile applications designed to collect images and annotations from users. WildMe's Flukebook application[<https://www.flukebook.org/>] allows users to submit images of whales and dolphins in order to identify particular animals and also estimate population sizes and motion patterns. In <cit.>, there is a similar census of zebras and giraffes using over 50,000 user-contributed images. Fieldguide[<https://fieldguide.ai>] applies deep convolutional neural networks to predict the species in user contributed imagery, and relies on experts to find errors and update the predictive models (Figure <ref> (left)). The Picture Post application[<https://picturepost.unh.edu>] allows users to identify locations where a 3D printed “picture post” has been set up to capture aligned imagery for time-series studies. In these examples, the user only contributes the captured images; the data is unlabeled.
While the majority of available camera-centric applications fall under the unlabeled paradigm, there are some camera-centric applications that request more effort beyond image capture. Weak labeling applications such as iNaturalist <cit.> (Figure <ref> (middle)), IveGotOne <cit.> and eBird <cit.> request not only the picture, but also an annotation of which plant or animal was captured. In the PlantNet application <cit.>, users first provide a plant photo, annotate the parts (e.g., leaves, flowers, stems, etc.), and are then asked to validate the prediction of the plant provided by a pre-trained machine learning model. BScanner <cit.> is a mobile application to crowdsource an image dataset of outdoor locations annotated with their accessibility to aid, for example, those using a wheelchair or the visually impaired. Users provide not only images, but also identify observed accessibility problems using a dropdown menu. These applications often provide a predefined list of choices to simplify user input. In this category, there are two main operations (capturing and labeling) which, depending on the task, could be prescribed in either order: label-then-capture or capture-then-label. For label-then-capture applications, each image typically contains a single object of interest. The capture-then-label model is more amenable to labeling more than one object per image. No matter the order of operations or number of objects per image, weakly labeled images are characterized by metadata which includes the name(s) or type(s) of object(s) of interest captured in the image.
Strong labeling can take on many forms. Obtaining the classification and location of an object in an image can be accomplished by requesting the user to capture the image in a particular manner or by providing annotation tools after the image has been captured. RePhoto <cit.> (Figure <ref> (right)) is one such application, which presents users with a semi-transparent overlay and asks them to align the camera to capture an image as similar as possible to a reference photograph. Other types of strong labeling tasks involve having the user provide further details about the object or scene (e.g., object details, weather, other explanations). The SeeClickFix application allows community members to report problems in their community, such as potholes or illegal dumping of trash, along with photos, responses to prompts, and text descriptions of the problem shown in the image. Other examples include citizen science projects hosted by Zooniverse[<https://www.zooniverse.org/projects>], such as the Galaxy Zoo, which asks users to describe, identify, and differentiate between different galaxies captured by telescopes, and the Wild Gabon project, which asks users to draw bounding boxes around a variety of different species in images from Gabon.
This organization (unlabeled, weakly labeled, strongly labeled) provides a categorization for a wide variety of camera-centric applications. Each category could be subdivided further based on finer-grained design decisions. To ground the evaluation of different camera-centric mobile application paradigms, we consider an application designed to collect data for indoor scene identification, specifically images from hotel rooms.
§ APPLICATION DESIGN
Our study is centered on a mobile application designed to provide data to aid in human trafficking investigations <cit.>,
where images are often important pieces of evidence, as they often contain clues about where victims have been trafficked. Much of this photographic evidence is captured in
hotel rooms. The mobile application allows travelers who want to help combat human trafficking to contribute photos of their hotel room. These images are added to a database that also includes images from publicly available travel websites (e.g., Expedia, TripAdvisor).
This database of images serves as training data for
a learning-based reverse image search engine where investigators can submit photographic evidence in order
to determine the hotel where a victim was photographed. As with most machine learning systems, additional training data both in terms of quantity and variability is generally beneficial for improving performance.
The original version of the application fell into the unlabeled paradigm; users are asked to provide images of hotel rooms without any constraints or
additional annotations. Given that hotel rooms generally contain a collection of
common objects (e.g., bed, lamp, chair), images with object labels would increase
the utility of the AI platform to investigations by supporting more complex object-centric queries by the users of the platform. For example, an investigator may notice a particularly unique lamp in a victim image and want to search for any images with visually similar lamps (regardless of the other objects in the image).
The application could be extended to incorporate object labeling from the engaged user base already providing images to support these types of investigations. However, attracting and maintaining contributors is an important consideration for any crowdsourcing application, so new designs should not decrease motivation or interest in contributing.
To better understand this issue, we designed three variants of the mobile application for capturing images of hotel rooms and identifying objects in the scene. We will refer to these as: Unlabeled (UL), Weakly Labeled (WL), and Strongly Labeled (SL). In this section, we describe our design decisions.
The application launches with an introduction to the application and its purpose. The next screen shown to the user provides instructions about how to use the specific version of the interface. After viewing the introduction and instructions, the user will capture and (depending on the version) label images using the interfaces seen in Figure <ref>. After the user is satisfied with the images captured, the application requests hotel information while uploading images and metadata to a server in the background.
Unlabeled (UL) The user is
instructed to take pictures in the hotel room without reference to
any specific objects in the scene or
options for additional labeling, as shown in Figure <ref>. The user can capture images
and/or delete captured images. Similar to other applications in this class, the
collected
data includes only the
image data and automatically collected metadata (e.g., date, time, GPS location).
Weakly Labeled (WL) The user captures a photo and labels the objects in the photo from a list. The user first captures an image in
the same manner
as in the UL version. Post-capture, a dialog appears with a checklist of common hotel items as shown in Figure <ref> (e.g., bed, lamp, sink). For each image captured, the user identifies the visible items from the list and also has the option to indicate other items. In this case, the collected metadata includes the list of visible objects, but no position information. The WL version results in a
collection of weakly labeled (i.e., only object names) images.
Strongly Labeled (SL) The user identifies the
object and location of the object in the photo they take. The camera button is
part of a swipe-able array of choices corresponding to the same predefined set of hotel objects
as in WL application. This design was partly inspired by popular camera-based mobile applications (e.g., how a user would choose a face lens or filter in Snapchat).
When the user selects an item,
a reticle (i.e., target, bounding box, focus area) is displayed over the view area on the screen outlining an area of interest, as shown in Figure <ref>.
The application provides a default target area for each object, based on the typical
size and location of objects. To align objects, the user can choose to (1) resize or
move the reticle or
(2) change locations or the camera angle to bring the object into the target area. The SL version results in images annotated with object names and their locations.
§ EXPERIMENT
We conducted a study to compare the differences in user contributions between the three interfaces. We ran a between-subjects design with the application version (i.e., labeling level) serving as the independent variable.
§.§ Study Protocol
The experiment was carried out in a hotel on campus in one
of two nearly identical hotel rooms. The experimenter provided the following instructions: “Today, you will serve as a traveler using our mobile application in this hotel. Follow the instructions in the application. When you're done, meet me at [location] and I'll collect your feedback.” Participants were provided with a smartphone with one of the (randomly-selected) variants of the application pre-loaded. The participants were not provided any limits, requirements, or suggestions on time nor quantity of photographs (or objects) to capture. Immediately after image capture, the investigator collected the smartphone and participants were directed to a laptop in the hotel lobby to complete a brief survey of the experience. The entire experiment session could be completed in less than 5 minutes.
§.§ Participants
We recruited participants primarily from a university setting via word-of-mouth by researchers outside of the on-campus hotel. Participants were required to be over 18 years old, have normal
or corrected-to-normal vision, and be able to read and understand English.
A total of 100 people were recruited to participate in the study (49 male, 51 female). The mean
age was 21.85 (SD = 4.09).
We asked participants to rate how often they use camera-based smartphone
applications (e.g., Snapchat, Instagram, etc.) on a scale of 1 = “Never/Rarely”
to 7 = “Often” and whether
they were familiar with the application. On average, this cohort of mainly
college-aged participants self-rated as highly familiar with camera-based
smartphone applications (M = 6.25, SD = 1.42). The vast majority
of participants
(83 out of 100) had not heard of the project. Of those who had, none
had previously used any version of the mobile application.
Though the actual application is voluntary, participants in the user study were compensated for their
time with a $5 gift card to a campus coffee shop.
§.§ Data
We measured user interactions with the application, user satisfaction ratings, and image composition and annotation quality across the three application variants.
§.§.§ User Interactions
For each participant, an event log was recorded, detailing each action performed during the study. Recorded actions included capturing or deleting images, using help screens, and annotating the images. We also recorded how long participants spent using the application. Due to a technical error, the event log from one participant was corrupted, so it is excluded from event-based analysis.
§.§.§ User Satisfaction Ratings
We measured user satisfaction through a post-experiment survey. In addition to gathering demographic information, the questions included rating the interface on a 7-point Likert-type scale on the following criteria: overall quality, instruction quality, and likelihood to recommend to others.
§.§.§ Image Composition & Annotation Quality
All of the images captured during the experiment were evaluated by three annotators. For each image, the annotator marked a bounding box around visible objects from the predefined list. Ground truth annotations were defined as those where at least two annotators agreed on the classification and the bounding boxes significantly overlapped (i.e., Intersection-over-Union (IoU) > .7). The annotators were only provided the captured image and were unaware of application variant used for capture.
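For reference, the overlap criterion can be made precise with a standard Intersection-over-Union computation for axis-aligned boxes; the sketch below is our own minimal version, with illustrative box coordinates:

def iou(a, b):
    # boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

# two annotators "significantly overlap" when IoU > 0.7
print(iou((10, 10, 60, 80), (12, 8, 62, 78)) > 0.7)   # -> True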
§ RESULTS
We compared our measures across the three conditions (UL, WL, and SL).
We tested the normality of our data using the Shapiro-Wilk test. Because our data was non-normal, we used the Kruskal-Wallis test to compare the three conditions on survey and log data and the multiple comparison test after the Kruskal-Wallis test for post-hoc comparisons on significant results. We used epsilon squared for the effect size <cit.>.
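This analysis pipeline is straightforward to reproduce; the sketch below uses scipy and the common epsilon-squared form eps^2 = H/(n-1), which is consistent with the values reported in this section (e.g., H = 18.63 with n = 99 gives eps^2 of roughly 0.19). The per-group data are placeholders:

from scipy.stats import kruskal

def kruskal_with_effect(*groups):
    h, p_value = kruskal(*groups)          # Kruskal-Wallis H test
    n = sum(len(g) for g in groups)
    return h, p_value, h / (n - 1)         # epsilon-squared effect size

# placeholder per-participant photo counts for the UL, WL, SL conditions
ul = [5, 6, 4, 7, 5, 8, 3]
wl = [8, 7, 9, 6, 8, 7, 10]
sl = [10, 9, 11, 12, 8, 13, 9]
print(kruskal_with_effect(ul, wl, sl))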
§.§ Number of Photos Taken
Figure <ref> (top) shows the number of pictures captured across the three application variants.
We found that participants took the most pictures with the
SL application variant.
We found an overall significant difference between the three application versions
for the number of pictures taken (H(2) = 18.63, p < 0.001) with a medium effect size (ϵ^2 = 0.19).
Because we found a significant difference across the three conditions, we did a post-hoc follow-up test.
Comparisons of the mean ranks between groups showed that there was a
significant difference (p < 0.05) in the
number of pictures taken between the UL (Mean(M) = 7.44, Standard Deviation(SD) = 4.5) and SL conditions (M = 5.8, SD = 2.67) (difference = 20.19, critical difference = 16.8). There was also a significant difference (p < 0.05) between the
WL and SL conditions (M = 9.67, SD = 4.38) (difference = 29.92, critical difference = 17.06).
§.§ Task Times
Figure <ref> (bottom) shows the total time the participants spent capturing and/or labeling the image. The duration includes the time spent capturing and labeling images after viewing the instructions. We found a significant difference between task times for the different application versions
(H(2) = 19.54, p < 0.001) with a medium effect size (ϵ^2 = 0.2). Participants spent on average
75.58 seconds in the UL condition (SD = 44.42s), 125.08 seconds in the WL condition (SD = 55.52s),
and 116.07 seconds in the SL condition (SD = 50.41s). In the WL condition, we are able to distinguish
the amount of time participants spent labeling the images because the capturing and labeling
activities are mutually exclusive. Participants in the WL condition spent on average 62.04 seconds labeling (SD = 23s), which is almost half of their total task time. Because we found a significant difference across the three conditions, we did a post-hoc follow-up test. Comparisons of the mean ranks
between groups showed that there were significant differences (p < 0.05) in task time
between the UL and WL conditions (difference = 27.98, critical difference = 16.9) and between the
UL and SL conditions (difference = 25.25, critical difference = 16.6). There was no significant
time difference between the WL and SL conditions (difference = 2.74).
§.§ Satisfaction Ratings
The users were asked to rate their overall satisfaction with the application, quality of the instructions, and the likelihood
of recommending the application to others on a 7-point Likert-type scale. We found no significant difference between
conditions on participants' overall application rating (H(2) = 5.71, p = 0.057), rating of the instructions (H(2) = 2.76, p = 0.25),
or recommendation for the application(H(2) = 3.39, p = 0.18). Overall, participants provided positive ratings of their overall
satisfaction (M = 5.5, SD = 1.1), instruction quality (M = 5.76, SD = 1.35), and likelihood of recommending the application (M = 5.78, SD = 1.45).
§.§ Image Composition
In addition to understanding whether users are motivated to take pictures,
crowdsourcing applications rely on their users providing useful and relevant data.
For this application, the goal is to understand how the different annotation paradigms
changed the types of pictures taken.
Figure <ref> shows sample images captured by the study participants
using each application variant. For the Unlabeled (UL) case, only the image is captured.
For the Weakly Labeled (WL) case, the inset shows the labels of objects selected by
the user. For the Strongly Labeled (SL) case, the label and bounding box correspond to the settings selected by the user.
Qualitatively, some visual differences can be observed in the size and positioning of objects
based on whether or not the instructions explicitly referenced objects. To quantify
this phenomenon, we computed both the number of visible objects captured and the
relative size of those objects in the image frame to serve as proxy measures
for image composition. For all the images, across the three conditions, we used
the ground-truth annotations provided by the external annotators (Section <ref>) to compute
the image composition measures.
The plot in Figure <ref> shows the average number of hotel objects (from the predefined list)
photographed per user by application type. In addition to having taken the most images,
the SL users also captured the most instances of different object types.
In particular, there is a large increase in the number of lamps and chairs photographed.
We also computed the average fraction of the image that each object takes up as a function of application type. It is unsurprising that users of the WL and SL application versions capture images with more pixels on the objects
in general. This supports the finding that the pictures in these conditions were more focused on specific objects, rather
than pictures in the UL condition that focused more on the overall scene. We found that the fraction of the
images taken up by objects differed significantly with medium to large effect sizes across conditions for all objects
except bed, as shown in Table <ref>. This result for beds is reasonable because the relative size of a bed in an image when standing in a small
hotel room is tightly bounded by the limited amount of space to obtain viewpoints of a large object from different distances.
§ DISCUSSION
The goal of this study was to better understand how variations in the design of a camera-centric application can affect the quantity and quality of images and annotations captured by users and user satisfaction with using the application. Towards this end, we evaluated (1) user engagement, as measured by the number of images captured and time spent using the application, (2) the properties of the collected data, and (3) user satisfaction based on the ratings of their experience using the application. In this section, we provide our observations from the results and discuss potential limitations of our experiment.
§.§ User Engagement
The aim of crowdsourcing applications is to collect as much (high-quality) data as
possible. For this scenario, that translates to users choosing to capture more images.
The biggest, and most surprising, take-away from this study is that users in SL condition,
which required the most effort and time for image annotation, captured the most images with
the highest variety of objects. Moreover, there was no significant difference
in the
user satisfaction ratings or willingness to recommend the application to others, even
though they spent more time, on average, at the task. While we do not take these results to
imply that the users were equally satisfied across the three conditions, it is noteworthy that
we did not observe a significant negative correlation between the requested effort and user satisfaction.
These results support the notion that for a crowdsourcing task where the users
are intrinsically motivated, following the mantra of simplifying the level of
effort required at the expense of obtaining a higher quantity or quality of data
may not be warranted.
§.§ Properties of Collected Data
The annotation quality results indicate that, for users in this study, high-quality
data can be obtained across annotation paradigms. Downstream algorithms for
computer vision and machine learning only benefit from additional classification
and/or localization annotations in user-provided data. However, it is
important to note that changing the annotation paradigm affected the type
of images that users captured. We observed differences in both the number of objects captured
and the relative sizes of those objects in the image. There were differences among
all three conditions, with UL and WL showing similarity for capturing larger objects (e.g., bed, toilet)
and SL showing differences across most of the object classes compared to both UL and WL.
One possible explanation is that the UL version does not reference objects at all and
the WL version only requests object identification after the image has been captured.
However, for the SL version, object selection and positioning is a part of the image
capture process. It is possible that the indirect prompting inherent to the SL variant
encouraged more photographs of less conspicuous objects (e.g., lamps and chairs) and
zooming in objects that would otherwise be in the background (e.g., art).
For camera-centric applications, these types of changes to the annotation paradigm may induce unintended
changes to type of data collected that should be considered by application
designers.
For example, converting a camera-centric mobile application from collecting
unlabeled to strongly labeled data may affect the visual appearance of “new” data compared to
previously collected data and require interventions for the downstream
algorithms.
§.§ User Feedback
In addition the numeric ratings in the post-experiment user survey, the participants could also provide free-form comments. Of the 100 participants, 44 left extra comments. Notable themes in the comments include (1) instruction clarity, (2) application intuitiveness, and (3) problem domain interest.
§.§.§ Instructions
Although the overall ratings of instructions were high, some users expressed dissatisfaction with the level of detail. In the free response portion of the post-experiment survey, 8 of the 44 users who provided free-form comments mentioned issues with the instructions, with the term `unclear' appearing most frequently in the comments. These comments were split relatively evenly across conditions: 2 for UL, 3 for WL, and 3 for SL. There were also comments that did not call out the instructions specifically, but mentioned not being sure exactly what to take pictures of, such as one participant who commented, “I wasn't sure if I was supposed to take specific pictures or what exactly would be most helpful.” Even if the lack of a detail is purposeful to avoid introducing bias, the style and content of the instructions are important design considerations that may confuse or frustrate users and impact continued participation in the crowdsourcing campaign.
§.§.§ Application Intuitiveness
While the application design was inspired by popular camera-centric mobile applications, there were implementation choices made to accommodate the annotation tools. Some of these differences were noted by users. Comments include a lack of access to the flash, zoom, and focus controls typically available with camera-based applications. One user additionally desired to ability to re-take their photo, noting “the picture took blurry a few times, and i couldn't retake a pic.” In the interest of reducing clutter and to capture click and swipe events related to image annotation, advanced camera controls (e.g., “pinch” to zoom) were not enabled, but their inclusion may improve image quality and align with users' expectations of a camera-centric mobile application.
§.§.§ Problem Domain Interest
Unlike commercial camera-centric mobile applications, crowdsourcing applications
can benefit
from user interest in the problem domain and desire to contribute. Multiple
participants in this study
expressed a desire for the application to share more information about the problem
domain
and how their contributions help the cause. One user stated, “I want to know more or be provided
about the current progress of human tracking through the app. I want to know I am contributing
to good cause.” These comments reinforce the notion that, depending on the problem domain,
contributors are often eager to volunteer their time and effort. Providing additional background
information on the problem domain and, where possible, feedback on the impact of the user's
contribution can encourage continued (and enthusiastic) use of the application.
§.§ Limitations
We identified several threats to validity based on the design of the experiment
and implementation choices.
§.§.§ Participant Population
While no participants had ever used any version of the application,
some participants were familiar with the project
through contact with some of the researchers (students, professors).
Participants were also compensated with a $5 gift card.
Either of these factors could lead to inflated ratings. While participants gave a
similar overall rating whether they were familiar (M = 5.5, SD = 0.97) or not familiar with the
project (M = 5.51, SD = 1.09), they did score their willingness to recommend the application slightly higher if they had familiarity (M = 5.94, SD = 1.7) than if they did not (M = 5.75, SD = 1.4).
Participants familiar with the application were relatively well spread across conditions:
5 of them used the UL version, 7 used the WL version and 4 of them used the SL condition.
Additionally, the participants consisted primarily of undergraduate students.
This population may have more time and
motivation to participate in this type of crowdsourcing, and the application, which
is intended to help combat human trafficking, may inspire a higher level of altruism due to the
subject matter. However, we expect that all of these factors would have affected the population
overall, rather than one condition or application type.
§.§.§ Variability of Labeling Implementations
The UL variant simply requires capturing images and adds little functionality beyond the built-in camera application common to
all smartphones. There are few design decisions to be made. However, the WL and SL variants fall into broader categories where choices such as the order of labeling and image capture and/or the number of objects to label per image can affect the design and implementation.
For the WL variant, we are reasonably confident that it aligns
with other applications in the weakly labeled paradigm and the different interaction modes (e.g., label-then-capture vs. capture-then-label) only constitute minor differences. However, as
previously mentioned, the strongly labeled paradigm is much broader. Applications
fitting this paradigm have employed opaque overlays, reticles or target areas (as with SL),
and other interaction widgets. Aligning an object in a scene with a marker in the viewfinder
can be accomplished by manipulating the marker or capturing the image from a different angle.
Additionally, some approaches rely on bounding boxes while others employ tools for pixel-segmentation
of objects. While we aimed to provide some amount of flexibility in our SL implementation (e.g., resizable reticle),
the results may not generalize to the wide variety of approaches for strongly labeling images.
§.§.§ Environment Constraints
Two additional limitations of this study were the fact that images were only captured in
one of two different (but nearly identical) rooms from the same hotel and the availability of objects did
not include all of the objects that might be expected. For example,
neither room contained a couch. Nonetheless, we expect that the general measurements of
user effort would not be significantly
impacted by either of these issues.
§.§ Labeling Alternatives
Rather than redesigning an existing (most likely unlabeled) camera-centric mobile application, one might consider alternatives for obtaining image labels. One option
would be to crowdsource the labeling task after the images have been captured
on a platform like Amazon Mechanical Turk. This option introduces additional
costs and
leaves the task to a different set of users who may not be as motivated as the
cohort that captured the images. Another approach to labeling involves the use of automated algorithms. While these methods are close to human-level performance, even state-of-the-art approaches still misidentify or oversegment objects. It is worth noting that our task involves objects (e.g., bed, chair) commonly included in the generic data sets used to train these methods. Even in this case, one of the automated methods did not include relatively common items (e.g., lamp, desk). This issue is only exacerbated for the specialized tasks for which camera-based crowdsourcing has been employed (e.g., litter, potholes, fine-grained animal or plant species recognition), limiting the utility of automated labeling approaches in these domains.
One last consideration for both of these alternatives is that our results show that
by mentioning the labeling task during image capture, users take more images and more images focused
on specific objects, which may or may not be desirable, depending on the task.
§.§ Real-World Deployment
Based on the results of the user study, the application was updated to incorporate the strongly labeled paradigm. Users could opt to submit images in the original manner (unlabeled, images only) or
in a new mode similar to the SL paradigm with the object carousel and object reticle. Prior to the update, users provided 3.68 (SD = 1.49) images on average. Since the update, users have submitted an average of 10.06 (SD = 7.49) images. These distributions are shown in Figure <ref>. As the update introduced many changes, including the UI, instructions, and ease of uploading, it would not be fair to attribute this increase entirely to introducing the object-centric SL paradigm option. It is worth noting that users have the option to choose between providing unlabeled or labeled images, and 88.4% provided more strongly labeled images than unlabeled images.
§ OBJECT-CENTRIC IMAGE RECOGNITION
The data collected by the crowdworkers can be used to train automated methods for fine-grained recognition. Hotels-50K <cit.> is a benchmark dataset of more than a million images
from 50,000 hotels around the world. This hotel recognition task in this benchmark closely resembles the investigative task in human trafficking investigations, and the benchmark includes all of the challenges common to general fine-grained categorization and
others unique to hotel rooms.
There can be a high within-class variation; images from rooms from the same hotel can appear to be quite different. Also, there can be low between-class variation, particularly for images from hotels belonging to the same chain.
The objects visible in the hotel room images can help discriminate between rooms. Previous approaches focused on image-level matching, treating the entire image as input to the model. Figure <ref> highlights
the drawbacks of this approach; in (a), the two images, although visually quite similar, were captured from different hotels. Closer inspection of some of the objects (e.g., lamps, chairs, dressers) shows clear differences. In (b), the images are from the same hotel chain and contain the same objects (lamps, chairs), but the overall scenes are visually quite dissimilar.
As an alternative to image-centric approaches, we take an object-centric approach by representing the image as
a set of objects (e.g., lamps, beds, curtains) and applying a simple voting scheme for matching to a database of object crops. Using a ViT/S with 16x16 patch size <cit.>, we trained an image-based classifier, achieving a baseline accuracy of .551.
Using the same model, we extracted the image patches corresponding to the specific objects
in the scene; a patch token is associated with the object that occupies the majority of pixels within that patch. These patches were mean-pooled to serve as the object feature, and we used a majority-rule scheme to aggregate the predictions of all the objects in the image. This simple object-based strategy achieved an accuracy of .583, outperforming the standard image-based strategy by over 11%. This experiment highlights the benefit of the localized, labeled data provided by the crowdsourced users.
§ CONCLUSION
This paper contributes one of the first investigations into the trade-offs between labeling effort,
quality and quantity of data collected, and user satisfaction for a camera-centric
crowdsourcing application. The results suggested that motivated users may be willing
to do more than universally accepted design guidelines (i.e., “keep it simple”)
may suggest. Even in this limited study of three variants of an application for a particular task,
we observed a complex, interconnected relationship between the design choices, user expectations,
and user behavior. Finally, we demonstrated that data from a real world camera-centric crowdsourcing application can be used to improve image retrieval performance.
§ ACKNOWLEDGMENTS
The work described in this paper was supported by National Institute of Justice Award #2018-75-CX-0038 and National Science Foundation Award #1757533. We greatly appreciate the work of Gabe Aguilar and Madilyn Simons for developing the version of the mobile application used in the study and Nick Allan for drawing his illustrations for both the instructions and figures in this paper. We would also like to thank Jake Lawrence, David Zheng, and Kat Osadchuk for helping recruit participants during the user study.
|
http://arxiv.org/abs/2409.02508v1 | 20240904080821 | TLD: A Vehicle Tail Light signal Dataset and Benchmark | [
"Jinhao Chai",
"Shiyi Mu",
"Shugong Xu"
] | cs.CV | [
"cs.CV"
] |
Shell et al.: A Sample Article Using IEEEtran.cls for IEEE Journals
TLD: A Vehicle Tail Light signal Dataset and Benchmark
Jinhao Chai, Shiyi Mu, Shugong Xu
======================================================
§ ABSTRACT
Understanding other drivers' intentions is crucial for safe driving. The role of taillights in conveying these intentions is underemphasized in current autonomous driving systems. Accurately identifying taillight signals is essential for predicting vehicle behavior and preventing collisions. Open-source taillight datasets are scarce, often small and inconsistently annotated. To address this gap, we introduce a new large-scale taillight dataset called TLD. Sourced globally, our dataset covers diverse traffic scenarios. To our knowledge, TLD is the first dataset to separately annotate brake lights and turn signals in real driving scenarios. We collected 17.78 hours of driving videos from the internet. This dataset consists of 152k labeled image frames sampled at a rate of 2 Hz, along with 1.5 million unlabeled frames interspersed throughout. Additionally, we have developed a two-stage vehicle light detection model consisting of two primary modules: a vehicle detector and a taillight classifier. First, YOLOv10 and DeepSORT capture consecutive vehicle images over time. Then, two classifiers work simultaneously to determine the states of the brake lights and turn signals. A post-processing procedure is then used to eliminate noise caused by misidentifications and provide the taillight states of the vehicle within a given time frame. Our method shows exceptional performance on our dataset, establishing a benchmark for vehicle taillight detection. The dataset is available at https://huggingface.co./datasets/ChaiJohn/TLD/tree/main
datasets, vehicle signal detection, taillight recognition, autonomous vehicle, Assistant and autonomous driving, object detection.
§ INTRODUCTION
In recent years, with the continuous development and maturation of autonomous driving technology, an increasing number of vehicles equipped with advanced driver-assistance systems (ADAS) have appeared on the roads. In densely populated urban traffic scenarios, quickly and accurately perceiving the behavioral intentions of surrounding vehicles, and making safer, more intelligent decisions based on this perception, has become a primary focus for many autonomous driving system researchers. In real-world driving, vehicle taillight signals, which play a crucial role in communicating driving intentions among human drivers, can be seen as a visual language used to indicate forthcoming actions, such as turning or lane-changing, to other road users. Therefore, enabling autonomous driving systems to interpret this vehicle light language is vital for a better understanding of driving intentions.
Traditional methods of inferring driving intentions typically estimate the next move of other vehicles based on their direction and speed, which can lead to erroneous judgments. More critically, such methods often introduce a delay in understanding driving intentions. For example, when a vehicle in the left lane intends to change lanes into the ego vehicle's lane, an aggressive lane change might cause the system to fail to predict the intention, potentially triggering emergency braking (AEB) or even causing a collision. However, by incorporating the recognition of vehicle taillights, the system gains a predictive element in judging other vehicles' driving intentions. This is because vehicle taillight signals are designed to pre-announce a driver's next action to surrounding vehicles, with turn signals usually activated before the actual maneuver.
Furthermore, recent studies have shown that clear reasoning about the long-term goals <cit.> and short-term intentions <cit.> of other vehicles in a traffic scene can significantly improve the accuracy of trajectory predictions, directly benefiting downstream tasks in autonomous driving systems. Additionally, this type of optical visual signal can be considered a simple form of vehicle-to-vehicle (V2V) communication. In Figure <ref>, we provide examples where taillight signals play an indispensable role in various corner cases. For instance, on mountain roads or highways, truck drivers generally have a broader field of view compared to smaller cars, allowing them to detect potential hazards earlier. In such scenarios, the rear vehicle, due to visual obstructions, may miss certain areas of perception. The lead vehicle can use its taillight signals to indirectly fill in these perceptual blind spots, effectively helping the rear vehicle avoid potential collision risks.
Based on this, we believe that detecting and recognizing vehicle taillights can enable autonomous driving systems to better understand the driving intentions and interaction signals of surrounding vehicles, thus achieving safer and more reliable driving.
However, in practice, current autonomous driving systems often lack mature solutions for vehicle taillight detection, and the information provided by other vehicles' lights is not fully utilized. This could be because perceiving taillight signals presents certain challenges. Based on our analysis, we have identified the following challenges in recognizing taillight states:
* Varying Lighting Conditions. During the day, the red taillight cover on vehicles may reflect sunlight, making it appear as though the lights are on. At night, various light sources, such as streetlights and oncoming headlights, can interfere with taillight detection. Additionally, certain lighting conditions can introduce imaging noise, including strong reflections, halos, and shadows.
* Occlusions from Random Observation Angles. In congested traffic scenarios, such as waiting at a red light, the random nature of observation angles can lead to partial occlusions between vehicles, making it difficult to accurately judge taillight states.
* Non-uniform Taillight Shapes and light forms. The lack of unified standards among car manufacturers results in significant differences in taillight shapes across vehicles, including cars, vans, trucks, and buses. Moreover, some modern taillight designs, like strip lights that illuminate sequentially, pose additional challenges for detection.
* Inconsistent Taillight States Between Day and Night. Vehicles generally keep their lights off during the day but turn on the side lights at night. Since side lights are close to the brake lights, detecting brake lights becomes more challenging at night, making the high-mounted brake light a better detection choice.
* Temporal Sequence Issues. Turn signals typically blink to indicate activation, so determining the state of turn signals requires considering both the current frame and previous states in the time sequence. The overall state change over the time sequence ultimately determines the turn signal status; a minimal smoothing sketch illustrating this is given after this list.
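To make the temporal issue concrete, the following minimal sketch (our illustration, not a published method) recovers a blinking state from noisy per-frame lamp classifications by counting on/off transitions in a sliding window:

def is_blinking(frame_flags, window=30, min_toggles=3):
    # frame_flags: per-frame booleans ("turn-signal lamp lit") from a classifier.
    # A turn signal blinks at roughly 1-2 Hz, so a window of a few seconds
    # contains several transitions; a steady brake light or an off lamp does not.
    recent = frame_flags[-window:]
    toggles = sum(a != b for a, b in zip(recent, recent[1:]))
    return toggles >= min_toggles

flags = [(i % 7) < 3 for i in range(60)]   # synthetic ~1.4 Hz blink at 10 fps
print(is_blinking(flags))                  # -> True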
In addressing these challenges, machine learning techniques have proven highly effective for pattern recognition <cit.>, especially given the need for large amounts of data. However, in the field of vehicle light recognition, this presents a limitation. There is an urgent need for a large-scale taillight dataset that encompasses various lighting conditions, weather scenarios, viewing angles, and vehicle types to train and evaluate deep learning models for taillight detection. To meet this need, we introduce TLD, a new large-scale vehicle light detection dataset. TLD not only covers diverse traffic scenarios under different global weather and lighting conditions but also includes a sufficient number of challenging samples. Additionally, we propose a two-stage vehicle light detection model as a baseline. Experimental results demonstrate that our model achieves commendable performance on the dataset.
In conclusion, our work makes two main contributions. First, we introduce TLD (TailLight Dataset), the first publicly available large-scale vehicle light detection dataset. This dataset includes 152,690 images with annotated taillight states, featuring decoupled annotations for brake lights, turn signals, and hazard lights. The comprehensive taillight state annotations and diverse real images in TLD provide a solid foundation for improving the detection and recognition of automotive taillight signals. This can assist ADAS systems in better understanding the driving intentions of surrounding vehicles, thereby benefiting downstream tasks for safer and more intelligent planning and decision-making. We believe that our dataset will support further research in vehicle light detection and driving intention prediction within the community. Second, we establish a new baseline on our dataset using a simple yet effective method for recognizing taillight states in various scenarios. Experimental results indicate that our method not only effectively detects and recognizes taillight states of surrounding vehicles but also demonstrates robustness across different times (day and night), weather conditions (sunny/rainy), and locations (urban areas, highways, tunnels, rural areas, etc.).
§ RELATED WORKS
In this section, we review some of the existing work in the field of vehicle light detection, which can generally be categorized into two approaches: image processing-based methods and deep learning-based methods. Since taillight colors are predominantly red, image processing methods often employ heuristic approaches using various color spaces such as HSV, YCrCb, Lab*, or Y'UV to detect red light regions in the rear of vehicles. These color space transformation methods extract candidate taillight regions by thresholding channels containing red components after converting the input images to different color spaces, and remove surrounding noise using morphological operations <cit.> or noise filtering <cit.>. Additionally, since taillights are symmetrically aligned along the vehicle's center axis, many methods use symmetry checks <cit.> and aspect ratio tests <cit.> to identify the most probable taillight regions and perform taillight status recognition. Liu <cit.> used RGB space and set color difference thresholds between adjacent frames to determine brake light operation. Some studies choose to process images in the Lab* color space, where L* represents brightness and a* and b* represent the color ranges for the red-green and yellow-blue axes, respectively. Chen et al. <cit.> utilized the a* component in the Lab* color space for binarization of brake light detection. Nava et al. <cit.> and Pirhonen <cit.> further explored color feature-based and morphological operation-based detection methods in the Lab* color space.
Apart from color space conversion, some studies have employed specific image processing techniques such as frequency domain analysis, brightness thresholds, and symmetry tests to enhance detection accuracy and robustness. For instance, Jen et al. <cit.> proposed a fast radial symmetry transform algorithm for daytime brake light detection. Cui et al. <cit.> developed a hierarchical framework using a deformable parts model to detect vehicles and applied clustering techniques to extract taillight candidate regions.
Recent advances in deep learning have significantly improved vehicle light detection. These methods train convolutional neural networks (CNNs) to automatically learn image features without manual feature design. Hsu et al. <cit.> introduced a CNN-LSTM structure capable of learning spatiotemporal features of vehicle taillights from video sequences, which not only improved detection accuracy but also enhanced the model's adaptability to dynamic changes. Lee et al. <cit.> further integrated attention mechanisms, focusing on key regions in images and critical time steps in sequences to substantially improve vehicle taillight recognition performance.
Some research has focused on taillight detection under specific conditions such as nighttime or adverse weather. For example, Duan-Yu Chen et al. <cit.> proposed a nighttime turn signal detection method based on the Nakagami-m distribution, using scattering modeling and reflectance analysis to identify turn signal directions. Almagambetov et al. <cit.> introduced an algorithm utilizing Kalman filters and codebooks for automatic tracking of vehicle taillights, as well as detection of brake lights and turn signals, demonstrating robust performance under varying lighting and weather conditions.
Moreover, many studies focus on optimizing taillight detection performance through various technical approaches. For instance, O'Malley et al. <cit.> used taillight detection to improve the accuracy of nighttime vehicle detection. Skodras et al. <cit.> and Thammakaroon et al. <cit.> enhanced taillight detection algorithms through morphological operations and brightness analysis. Guo et al. <cit.> and Jeon et al. <cit.> improved detection robustness using deep learning frameworks and multi-view information fusion, respectively.
The aforementioned methods primarily focus on single-frame image-based taillight detection. However, taillight recognition is closely related to the state and actions over time, as discussed in Section 1. Consequently, some methods incorporate time series analysis. Inspired by video action classification problems, researchers have attempted to apply common video action classification techniques for taillight state time series analysis, such as two-stream, CNN-LSTM, and 3D convolutional (C3D) networks. The two-stream method <cit.> uses RGB frames and multi-frame dense optical flow fields as inputs, processed through two CNNs to handle spatial and temporal information. CNN-LSTM <cit.> extracts spatial features from each frame using CNNs and learns temporal features with LSTM. C3D <cit.> extends 2D convolution with temporal domain convolution, processing both spatial and temporal information simultaneously.
In related research, probabilistic graphical models are often used to handle variable-length non-image data sequences. Probabilistic graphical models include Hidden Markov Models (HMM), Maximum Entropy Markov Models (MEMM), and Conditional Random Fields (CRF), and are widely applied in fields such as natural language processing, future prediction <cit.>, sequence classification <cit.>, and sequence labeling <cit.>. For example, Huang et al. <cit.> combined LSTM with CRF to address the long-term dependency issue in sequence labeling, with CRF establishing long-term correlations between sequences. These methods provide valuable insights for taillight recognition in time series analysis.
Despite the significant achievements of existing methods in vehicle light detection, challenges remain, particularly in generalization under varying lighting conditions and complex environments, as well as meeting real-time requirements. Future research needs to focus on enhancing model robustness, reducing computational resource consumption, and developing more precise and real-time detection algorithms. With ongoing technological advancements, we are confident that vehicle light detection technology will mature and provide strong support for the development of autonomous driving and intelligent transportation systems.
§ METHODOLOGY
§.§ Dataset
Unfortunately, there are currently few publicly available large-scale datasets for vehicle taillight recognition in real driving scenarios. Much of the work in academia relies on self-recorded driving videos for research, and these individually collected datasets are typically not open-source. Moreover, due to the varying annotation requirements of different methods, these datasets lack general applicability. We conducted a systematic review of existing datasets in the field of taillight detection, analyzing their types and characteristics. A detailed comparison can be found in Table <ref>.
Among these taillight detection datasets, the LISA dataset <cit.> focuses on localizing the taillight region using four corner coordinates to annotate the polygonal edges of the taillights. However, it does not include annotations for downstream taillight state recognition tasks. Some studies <cit.> choose to annotate the taillight states of cropped vehicle images. While these datasets often contain a large number of cropped images, they lack comprehensive scene information, making it difficult to fully validate a taillight detection pipeline. For taillight state recognition tasks, depending on the specific focus of the dataset, some datasets <cit.> only annotate brake light states, while others include both brake light and turn signal states. The latter annotations are more complex but provide richer and more comprehensive taillight state information.
Notably, for datasets that include both brake lights and turn signals, the academic community has largely classified taillight states into broad categories such as Brake-Off, Brake-On, Turn-Left, and Turn-Right. However, this classification method does not decouple turn signals from brake lights, which we believe is crucial in practical applications. In real traffic scenarios, it is common and reasonable for vehicles to have both brake lights and turn signals activated simultaneously, such as when a vehicle slows down to turn. The use of such coarse classification methods results in the loss of brake light information when turn signals are active, which is detrimental to accurately predicting other vehicles' driving intentions. Although Hsu et al. <cit.> provided an eight-state taillight classification in their Vehicle Rear Signal Dataset, the dataset only includes cropped images of vehicles, rather than full driving scenes.
To address this, we present TLD (Tail Light Dataset), our proposed solution to the current pain point in taillight datasets: the lack of a large-scale dataset with decoupled annotations of brake lights and turn signals in extended real-world driving scenarios. TLD contains over 1.7 million images, including 152,690 annotated images and over 1.5 million unlabeled images, with a total of 307,509 annotated instances. The total duration of the driving videos is 17.78 hours. To our knowledge, TLD is the first large-scale public dataset to provide decoupled annotations of turn signals and brake lights in full-frame images from real-world driving scenarios.
Our dataset is entirely derived from real driving scenes, with annotated images sourced from high-quality driving videos on YouTube and supplemented with additional annotations from the LOKI dataset. Most of TLD’s images come from YouTube driving videos. We carefully selected 21 driving videos from the extensive high-quality YouTube driving video list compiled by the autonomous driving video dataset OpenDV-2K <cit.>, covering 15.53 hours of footage from diverse global driving scenes (Figure <ref>). These videos span a wide range of common driving conditions, including day and night, various weather conditions, congested urban scenes, and suburban highway scenarios, offering significant scene diversity. To facilitate future research on time-series analysis, we additionally extracted frames from the original videos at a 30 Hz sampling rate, providing a substantial number of unlabeled images between the manually annotated frames. This resulted in a total of 1.6 million frames. These unlabeled frames are valuable for semi-supervised learning, allowing models to improve generalization and performance by training on a combination of labeled and unlabeled data. They can also be used for generating pseudo-labels, which can be added to the training set to enhance model training. By combining labeled and unlabeled data, we can better capture variations and dynamic information in the video, thereby improving overall model robustness and generalization.
Another part of the dataset involves additional annotations on the LOKI dataset <cit.>, which was introduced by Honda Research Institute USA (HRI USA) in 2021. The LOKI dataset was collected in dense urban environments in downtown Tokyo and is divided into 644 scenes, each averaging 12.6 seconds in length. It already includes various agent intention labels (e.g., stopping, left turn, right turn, lane change to the left) and sensor information such as LiDAR. We manually annotated the taillight information for all visible vehicles in the LOKI dataset, with each driving scene downsampled to 5Hz for annotation, resulting in a total of 40,890 frames. We chose to annotate the LOKI dataset because we believe a dataset that includes both taillight states and agent intention labels will significantly contribute to further research on multi-agent interaction and the intrinsic connection between taillight information and driving intentions.
We further divided TLD into two versions based on their sources, with more details provided in Section 4.1.
§.§ Overview of the Taillight Detection Pipeline
Our vehicle light detection method is both straightforward and efficient. It consists of two primary modules, as shown in Figure <ref>: the vehicle detection module and the tail light state classifier. The vehicle detection module's main task is to detect and track nearby vehicles in the captured driving scenes. These identified vehicle segments are then designated as regions of interest (ROI) and passed on to the tail light state classifier. The brake light and turn signal classifiers work in parallel to analyze the sequence of tracked vehicles. The resulting sequence of tail light states for each frame is then processed by a post-processing module, which filters out potential false detections and determines the turn signal's status over a given period.
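To make the data flow concrete, the following minimal Python sketch illustrates the two-stage structure; the tracker interface and the two classifier callables are illustrative placeholders rather than our released implementation.
from collections import defaultdict
def crop(frame, box):
    """Cut a tracked vehicle's bounding box out of the frame (the ROI)."""
    x1, y1, x2, y2 = map(int, box)
    return frame[y1:y2, x1:x2]
def run_pipeline(frames, tracker, brake_clf, turn_clf, post_process):
    """Stage 1: detect and track vehicles; Stage 2: classify each ROI with
    the two parallel classifiers, then smooth per-track label sequences."""
    states = defaultdict(list)  # track_id -> [(brake_label, turn_label), ...]
    for frame in frames:
        for track_id, box in tracker.update(frame):  # assumed tracker API
            roi = crop(frame, box)
            states[track_id].append((brake_clf(roi), turn_clf(roi)))
    # Post-processing filters false detections and yields continuous states.
    return {tid: post_process(seq) for tid, seq in states.items()}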
§.§ Vehicle Detection Module
The vehicle detection module aims to extract the Region of Interest (ROI) for the tail light state classifier. Its goal is to minimize interference from unrelated factors in the driving scene, such as traffic lights, street lights, and reflective barriers. To detect and track other vehicles, we use a combination of YOLOv10 <cit.> and DeepSort <cit.>.
The YOLO (You Only Look Once) series models redefined object detection by mapping input images directly to bounding box coordinates and class probabilities in an end-to-end manner. This single-stage framework effectively overcomes the high computational costs associated with R-CNN and its subsequent methods, such as Fast R-CNN and Faster R-CNN. The YOLO series has become one of the most popular object detection methods in practical applications. This approach not only significantly enhances detection speed but also reduces reliance on computational resources while maintaining high accuracy, providing a clear advantage in real-time object detection tasks.
Considering the real-time requirements of the tail light detection pipeline, we chose YOLOv10 as the base model for vehicle detection due to its efficiency, excellent detection accuracy, and real-time capabilities. It is currently the best-performing model in the series, and we opted for the TINY version to reduce inference latency. YOLOv10 features a deeper convolutional neural network structure, incorporating the latest feature extraction techniques and efficient attention mechanisms. It also includes multi-scale feature fusion modules and adaptive anchor box generation strategies, which further improve detection accuracy and robustness. YOLOv10-TINY, the chosen model, consists of 16 convolutional layers, 5 max pooling layers, 1 upsampling layer, 1 attention mechanism layer, and 2 output layers. It uses a Feature Pyramid Network (FPN) strategy to predict bounding boxes at two different scales: 20×20 and 40×40. By optimizing the model structure and reducing the number of parameters, YOLOv10-TINY delivers satisfactory detection performance on embedded and mobile devices while maintaining high detection speed and accuracy.
The vehicle tracking network's purpose is to continuously track detected vehicles' positions over time to ensure accurate capture of their dynamics. We employed the DeepSort algorithm (Simple Online and Realtime Tracking with a Deep Association Metric) for this module. The traditional SORT algorithm (Simple Online and Realtime Tracking) uses Kalman filters to predict future target positions and the Hungarian algorithm for data association. However, SORT relies solely on target motion information, which can lead to errors in crowded scenes or when targets are occluded. DeepSort improves on SORT by introducing a deep learning-based appearance feature extraction module. This module uses convolutional neural networks to extract feature vectors for each target, supplementing the Kalman filter and Hungarian algorithm to enhance data association accuracy and robustness.
Specifically, the vehicle tracker workflow involves the following main steps. First, the YOLOv10 model detects vehicles in video frames, generating target detection results called "detections"; each detection typically includes information about the target, such as bounding box coordinates and confidence scores. Next, a pre-trained deep convolutional neural network extracts feature vectors from each detected target region. For confirmed tracks, the Kalman filter predicts each track's new position and velocity in the next frame, and these predictions are then associated with the detections in the current frame. During the association step, the Mahalanobis distance (equation <ref>) between detection targets and tracking targets is computed, where z is the observed target state (detection target), x̂ is the predicted state of the tracking target, and S is the state covariance matrix. If the Mahalanobis distance is below a specified threshold, the pair is matched as the same target. However, since the Mahalanobis distance can struggle with occlusion, DeepSort also employs a cosine distance for appearance similarity: a ReID model extracts feature vectors for different objects, and a cosine-distance cost function computes the similarity between the predicted and detected objects. If the bounding boxes are close and the features are similar, they are matched as the same target. Once data association is complete, DeepSort updates the state of each tracked target, including position, velocity, and feature vectors. For newly appearing vehicles, DeepSort initializes a new Kalman filter and begins tracking them. For vehicles that remain unmatched for a long time, the system removes the corresponding tracker to reduce computational resource consumption.
D_Mahalanobis = (z - x̂)^⊤ S^-1 (z - x̂)
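For illustration, the gating step of equation <ref> can be implemented as below; the default threshold is the 95% chi-square quantile for four degrees of freedom commonly used in DeepSort implementations, not a value specified in this paper.
import numpy as np
def mahalanobis_gate(z, x_hat, S, threshold=9.4877):
    """Quadratic-form Mahalanobis distance between an observed detection z
    and a track's predicted state x_hat with covariance S (equation above).
    The pair remains an association candidate only if it passes the gate."""
    d = (z - x_hat).reshape(-1, 1)
    dist = float(d.T @ np.linalg.inv(S) @ d)
    return dist, dist <= threshold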
In summary, our vehicle detection module combines YOLOv10 with DeepSort to achieve efficient detection and accurate tracking of vehicles in complex driving scenes. This combination not only enhances the system's detection accuracy and tracking stability but also ensures high computational efficiency, thus meeting the reliability requirements of the tail light detection pipeline.
§.§ Tail Light State Classifier
In the tail light state classification module, there are two components: two parallel tail light classifiers and a subsequent temporal post-processing module.
Firstly, the tracking sequences of vehicles obtained from the first module are input in parallel to the classifiers trained on our dataset. Our classifier uses ResNet34 as the backbone because it has relatively shallow layers and lower computational cost, making it suitable for applications requiring fast inference. The network is divided into four stages, each containing convolutional layers that extract features at different levels from the input data. We choose to extract features from the fourth stage of the network to leverage higher-level feature information for classification. The neck uses a Global Average Pooling layer to average the spatial dimensions (width and height) of each channel in the feature map into a single value, producing a one-dimensional feature vector. This method effectively reduces the feature dimension while retaining global contextual information, allowing the model to make more accurate decisions in classification tasks. Finally, the Linear Classification Head maps the processed feature vector to category labels, yielding classification results for brake lights or turn signals in each frame.
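A minimal torchvision sketch of this classifier follows; the class counts correspond to our label sets, while training details such as the optimizer and input size are omitted here.
import torch.nn as nn
from torchvision import models
class TaillightClassifier(nn.Module):
    """ResNet34 stage-4 features -> global average pooling -> linear head."""
    def __init__(self, num_classes):
        super().__init__()
        backbone = models.resnet34(weights=None)
        # Drop the original avgpool/fc layers; keep the 512-channel feature maps.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.pool = nn.AdaptiveAvgPool2d(1)   # GAP neck: one value per channel
        self.head = nn.Linear(512, num_classes)
    def forward(self, x):
        v = self.pool(self.features(x)).flatten(1)  # (B, 512) global descriptor
        return self.head(v)                         # per-frame logits
brake_clf = TaillightClassifier(num_classes=2)  # off / on
turn_clf = TaillightClassifier(num_classes=4)   # off / left / right / both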
At this stage, the classification results alone are not sufficient as output. In our detection system, each frame of the vehicle sequence processed by the turn signal classifier yields discrete turn signal state labels, such as "left" or "off." This means that when the left or right turn signals flash, the off state during the off moment will be classified as "off" by the single-frame classifier, leading to an alternating output of turn signal on and off. In reality, this off state should be categorized as either "left" or "right," and the discrete single-frame state is not ideal for downstream tasks that require understanding vehicle driving intentions. Additionally, potential misclassified results can introduce noise, affecting the overall judgment of the tail light state, as shown in Figure <ref>. Therefore, an effective post-processing module is needed to consolidate these discrete results into continuous states for more accurate turn signal status determination.
The proposed temporal post-processing module evaluates the discrete classification results from the classifier based on a time threshold. It then outputs the continuous tail light status over time. This module is essential for improving the reliability and accuracy of the system because it prevents single-frame detection errors from impacting the final output. When dealing with brake lights, which have simpler state transitions, a continuous state can be represented by maintaining the same state for a period of time. However, for turn signals that flash on and off, we need to capture both states. Typically, most vehicles have turn signal flash frequencies of 1-2 Hz. To accommodate this, we set activation and deactivation thresholds for turn signals. These thresholds are adjustable based on actual conditions. For example, if a detector samples at 20 Hz and a turn signal is flashing at 1 Hz, each on phase of the flashing cycle lasts approximately 0.5 seconds. We set the activation threshold to 0.1 seconds. This means that if the turn signal "left" state lasts more than 0.1 seconds (equivalent to 2 frames), it is considered active. This threshold allows us to capture instantaneous changes in turn signal status while minimizing the impact of brief noise. To determine the deactivation status of the turn signal, we set a longer time threshold. If the turn signal "off" state lasts more than 0.6 seconds (i.e., 12 frames), it is considered deactivated. This threshold helps filter out brief state changes and prevents misjudgment. By using the time threshold post-processing module described above, we can output the continuous status of the tail light over time and improve the system's robustness to a certain extent.
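The threshold logic can be sketched as follows, assuming the 20 Hz example above (0.1 s ≈ 2 frames for activation, 0.6 s ≈ 12 frames for deactivation); the function and state names are ours.
def smooth_turn_signal(frame_labels, fps=20, on_sec=0.1, off_sec=0.6):
    """Consolidate discrete per-frame turn-signal labels into a continuous
    state: latch 'left'/'right'/'both' once it persists for on_sec, and only
    release to 'off' after off_sec of uninterrupted 'off' predictions."""
    on_frames = max(1, round(on_sec * fps))    # ~2 frames at 20 Hz
    off_frames = max(1, round(off_sec * fps))  # ~12 frames at 20 Hz
    state, run_label, run_len, smoothed = "off", None, 0, []
    for label in frame_labels:
        run_len = run_len + 1 if label == run_label else 1
        run_label = label
        if label != "off" and run_len >= on_frames:
            state = label        # activation threshold reached
        elif label == "off" and run_len >= off_frames:
            state = "off"        # gap long enough: signal deactivated
        smoothed.append(state)
    return smoothed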
§ EXPERIMENTS
In this section, we will first introduce our new dataset, TLD, and then evaluate the performance of our algorithm on this dataset, including both the vehicle detection module and the tail light classification module.
§.§ Dataset Details
Based on the video sources and the richness of the annotations, we have divided the dataset into two subsets: TLD-YT and TLD-LOKI. TLD-YT includes data from YouTube videos, while TLD-LOKI consists of annotations from the LOKI dataset.
For the YouTube video subset, we first preprocessed these driving videos by removing the first 90 seconds and the last 30 seconds. This was done to filter out unrelated content such as intros and subscription prompts. All driving videos were then annotated at a frequency of 2 Hz, resulting in a total of 111,800 frames. Each image was manually annotated for both brake light and turn signal states. In TLD-YT, we performed decoupled annotations for brake lights and turn signals. This was done to ensure that the vehicle's lighting information is comprehensive and realistic. Additionally, since tail light information is just one of the vehicle's status attributes, all tail light annotations are mapped back to their respective vehicles. Each tail light is matched to its associated vehicle in order to accurately predict the vehicle's subsequent actions. Therefore, we did not annotate the specific locations of the tail lights, but rather annotated the 2D bounding boxes of the vehicles in the real scenes. The brake light states were manually determined for each vehicle and assigned as the label for that object. Subsequently, we annotated the turn signal states with attributes including off, left, right, both, and unknown. It is worth noting that most existing car tail light datasets do not include annotations for hazard lights (both left and right turn signals on simultaneously). We consider hazard lights an important aspect of road safety, and thus annotated these cases in our dataset. To our knowledge, we are the first to include hazard light annotations in a tail light detection dataset.
In TLD-LOKI, we first filtered the Vehicle objects in the original LOKI dataset. We retained only those that were not severely occluded and visible in the field of view. We then updated the labels to four categories: off, brake, left, and right. This resulted in 40,890 annotated frames, 67,253 instances, and 644 scenarios. With the existing vehicle intention labels and additional sensor information from the LOKI dataset, we believe TLD-LOKI will provide significant value for future research.
§.§ Experiment for Taillight Classification
Experimental Settings.
We conducted experiments to evaluate the performance of tail light detection and recognition using our TLD dataset. During the training process, we started by cropping images of annotated vehicles from the labeled dataset to obtain crop images of different tail light states. These images were then organized based on their annotations. Since brake lights and turn signals were annotated separately in our dataset, we ended up with two distinct datasets: one for brake lights and one for turn signals. The brake light dataset was derived from images in TLD-YT, while the turn signal dataset was obtained from images in TLD-Full. Both datasets were randomly divided into training and testing sets in an 8:2 ratio. Data augmentation involved resizing and normalization only. To address potential overfitting caused by class imbalance in the dataset, we utilized Focal Loss with the following parameters: alpha=1, gamma=2, and loss-weight=1.0. Each method was trained for 100 epochs, and we tested various classification methods on the testing set. To comprehensively assess the performance of the classification model, especially for datasets with class imbalance or multi-label classification like the tail light dataset, we used performance metrics such as the F1 Score, which is the harmonic mean of Precision and Recall.
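For reference, the focal loss with the stated setting (alpha=1, gamma=2, loss weight 1.0) can be written as the following sketch; it is the standard formulation rather than our exact training code.
import torch.nn.functional as F
def focal_loss(logits, targets, alpha=1.0, gamma=2.0, loss_weight=1.0):
    """FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t): down-weights easy,
    majority-class examples so rare taillight states contribute more."""
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p of true class
    pt = log_pt.exp()
    return loss_weight * (-alpha * (1.0 - pt) ** gamma * log_pt).mean()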
State-of-the-Art Methods. We compared our two taillight state classifiers with several widely used classification network baselines, including MobileNetV3-Small/Large, EfficientNetV2-B0/S, ResNet34/50/101, and Mobile-ViT-S.
MobileNetV3 is a well-known lightweight classification network. In our experiments, we evaluated both the MobileNetV3 Small and MobileNetV3 Large versions. MobileNetV3 achieves efficient computational performance on mobile and embedded devices through hardware-aware model design and network search techniques. EfficientNetV2 is another efficient classification network that optimizes model depth, width, and resolution through a compound scaling method. This method further reduces computational resource requirements while improving efficiency and accuracy. These networks excel in lightweight and efficient performance, making them suitable for resource-constrained environments.
The ResNet series is a classic architecture in deep learning that addresses the gradient vanishing problem in deep networks by introducing residual connections. This significantly improves training performance for deep neural networks. The ResNet series includes various versions such as ResNet34, ResNet50, and ResNet101, each differing in depth. ResNet34 consists of basic residual blocks and is suited for shallower tasks. ResNet50 and ResNet101 use bottleneck residual blocks, capturing more feature information through deeper layers while maintaining computational efficiency. This enhances the model's expressiveness. Various ResNet variants have demonstrated excellent performance across different tasks.
Mobile-ViT S is a compact network based on the Vision Transformer (ViT) architecture, which combines convolutional neural networks (CNNs) and self-attention mechanisms. It consists of both CNN and ViT components. Mobile-ViT uses a lightweight CNN in the initial layers to extract low-level features, taking inspiration from MobileNet's structure, which reduces computational and parameter load. After the CNN part, Mobile-ViT incorporates a Vision Transformer module to capture global image features. The ViT processes image patches using self-attention, focusing on relationships between different positions in the image. This effectively captures long-range dependencies and achieves superior performance in classification tasks.
Experimental Results.
The experimental results for the brake light and turn signal classification tasks are presented in Tables <ref> and <ref>. We observed that ResNet networks achieved better performance in both classification tasks. Specifically, ResNet34 achieved an F1 Score of 96.84 in the brake light classification task and an F1 Score of 86.82 in the turn signal classification task. Additionally, we made the following observations:
For both the brake light and turn signal detection tasks, deeper ResNet architectures do not necessarily result in better performance. In fact, performance metrics tend to degrade to some extent. In the brake light classification task, ResNet101 exhibited a decrease in Top-1 Accuracy by 0.38, Mean Precision by 0.29, and the F1 Score dropped from 96.84 with ResNet34 to 96.35 with ResNet101, a decrease of 0.49. In the turn signal detection task, the detection accuracy is lower compared to the brake light task, and this disparity worsens with increased network depth. As shown in Table <ref>, Top-1 Accuracy decreased from 98.63 to 97.99, a drop of 0.64; Mean Precision decreased from 91.69 to 86.21, a drop of 5.48; Mean Recall decreased from 82.75 to 75.50; and Mean F1 Score decreased from 86.82 to 80.29, a drop of 6.53. The specific metrics for each category in the turn signal task in Table <ref> reveal that the main difference between ResNet34 and ResNet101 lies in the detection accuracy of left, right, and dual-flash signals. This may be due to the fact that turn signal detection involves four classification labels, and the label distribution in the dataset is imbalanced. The majority of labels belong to the "Off" category, and the imbalance in multi-class classification poses a challenge for turn signal detection. ResNet101, being deeper with more layers and parameters, has stronger learning capacity. However, with the majority of instances belonging to the "Off" category, it is more prone to overfitting and outputs more "Off" classifications, which reduces the detection accuracy for left, right, and dual-flash signals.
This phenomenon is not unique to ResNet networks; similar trends are observed with other networks listed in the table: larger networks often perform worse than smaller networks. This effect is particularly pronounced with MobileNetV3-large. As shown in the specific metrics for turn signal detection tasks (Table <ref>), MobileNetV3-large shows a score of 0 for all other categories, indicating severe overfitting to the "Off" category, with the network classifying all inputs as "Off."
§ CONCLUSION AND FUTURE WORK
In this work, we present the first large-scale dataset for tail light detection. The dataset includes separate annotations for brake lights and turn signals. It consists of a total of 152,690 images, which capture driving scenes in different times, weather conditions, and countries. To establish a baseline on our dataset, we designed a two-stage method. First, we track vehicles in the driving scenes. Then, we classify the tail light states in the tracked image sequences. We also implemented a post-processing step to determine the tail light states over a certain time sequence. However, our method currently lacks temporal judgment, which affects the classification accuracy. In the future, we plan to introduce frame-to-frame comparisons to determine whether the tail light has brightened or dimmed. We believe that our work represents a significant advancement towards ensuring fail-safe control of self-driving vehicles.
|
http://arxiv.org/abs/2409.03454v1 | 20240905120638 | How Much Data is Enough Data? Fine-Tuning Large Language Models for In-House Translation: Performance Evaluation Across Multiple Dataset Sizes | [
"Inacio Vieira",
"Will Allred",
"Seamus Lankford",
"Sheila Castilho Monteiro De Sousa",
"Andy Way"
] | cs.CL | [
"cs.CL",
"cs.AI"
] |
How Much Data is Enough Data? Fine-Tuning Large Language Models for In-House Translation: Performance Evaluation Across Multiple Dataset Sizes
==============================================================================================================================================
§ ABSTRACT
Decoder-only LLMs have shown impressive performance in MT due to their ability to learn from extensive datasets and generate high-quality translations. However, LLMs often struggle with the nuances and style required for organisation-specific translation. In this study, we explore the effectiveness of fine-tuning Large Language Models (LLMs), particularly Llama 3 8B Instruct, leveraging translation memories (TMs) as a valuable resource to enhance accuracy and efficiency.
We investigate the impact of fine-tuning the Llama 3 model using TMs from a specific organisation in the software sector. Our experiments cover five translation directions across languages of varying resource levels (English to Brazilian Portuguese, Czech, German, Finnish, and Korean). We analyse diverse sizes of training datasets (1k to 207k segments) to evaluate their influence on translation quality. We fine-tune separate models for each training set and evaluate their performance based on automatic metrics, BLEU, chrF++, TER, and COMET.
Our findings reveal improvement in translation performance with larger datasets across all metrics. On average, BLEU and COMET scores increase by 13 and 25 points, respectively, on the largest training set against the baseline model. Notably, there is a performance deterioration in comparison with the baseline model when fine-tuning on only 1k and 2k examples; however, we observe a substantial improvement as the training dataset size increases. The study highlights the potential of integrating TMs with LLMs to create bespoke translation models tailored to the specific needs of businesses, thus enhancing translation quality and reducing turn-around times. This approach offers a valuable insight for organisations seeking to leverage TMs and LLMs for optimal translation outcomes, especially in narrower domains.
§ INTRODUCTION
In recent years, decoder-only large language models (LLMs) have revolutionised the machine translation (MT) field due to their ability to learn from vast amounts of data and generate high-quality translations <cit.>. LLMs, such as Llama 3 8B Instruct[<https://huggingface.co./meta-llama/Meta-Llama-3-8B-Instruct>], have shown impressive capabilities in adapting to translation tasks, generating human-like accurate output, making them invaluable tools for the sector <cit.>. However, out-of-the-box LLMs do not always capture all the nuances, appropriate tone, and terminology required for specialised or organisation-specific translations <cit.>. This is where translation memories (TMs) offer a potential solution.
A TM is a database that stores previously human-translated segments and their respective translations. They are particularly useful to language service providers (LSPs) as they deal with repetitive content and organisation-specific style and terminology, enhancing the efficiency and accuracy of translations <cit.>. Therefore, the integration of TMs and LLMs can create models that better understand organisational requirements and lead to higher quality outputs and reduced turnaround times. However, this approach depends on several factors, like the amount, quality and specificity of the TMs used as training data for fine-tuning.
Previous work explored fine-tuning of models with TM for translation for specific domains and the benefit that offers to performance <cit.>. Accordingly, TM provides much value because of its high quality and domain relevance <cit.>. This research highlights the gains available when leveraging existing TMs during the fine-tuning process of LLMs.
In this paper, we investigate a real-life scenario where we fine-tune Llama 3 8B Instruct <cit.> using TMs from a specific organisation. Additionally, since increasing the fine-tuning data requires dedicating more resources and time, we explore different dataset sizes to evaluate their impact on translation quality and identify the most efficient return on investment. We conduct experiments in five translation directions (from English) on languages of varying resource level (Brazilian Portuguese (PT-BR), Czech (CS), German (DE), Finnish (FI), and Korean (KO)). This approach can lead to bespoke translation models that cater to the unique needs of different companies when compared to generic LLMs.
§ METHODOLOGY
§.§ Data
The raw dataset consists of TMs from an anonymous organisation that operates in the software sector. The three datasets employed cover knowledge base, mobile user interface, and mobile reference materials.
The datasets for the five target languages (PT-BR, CS, DE, FI, and KO) are filtered to remove duplicates, source copies, and segments over 150 words, ensuring that none exceed the maximum length set during training. All HTML tags are removed, and double spaces are converted to single spaces. Any rows containing only dates, version numbers, or any programming language are also removed. Rows are then randomly shuffled to mitigate any temporal bias that could arise from the chronological order of the data, ensure the model does not memorise sequences, and prevent the evaluation set from being biased towards a particular section of the data.
The dataset is then transformed into an inter-lingual aligned dataset for all five target languages, where any rows with missing translations for any target language are dropped. This results in a dataset where all source segments have translations available in all five target languages. The dataset is then split into training, development, and test sets, as shown in Table <ref>.
Further filtering is applied to the test set removing segments that had over 75% similarity with any segments in the training dataset to ensure robust testing and minimal memorisation. We measure similarity as a combination of the Levenshtein distance <cit.> and a 5-gram-based similarity <cit.>. This reduced the size of the test split from 1837 to 1353. The test split with under 75% similarity was used for all experiments.
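One way to implement this filter is sketched below; the paper does not give the exact combination of the two measures, so the equal weighting and the use of difflib's ratio as a stand-in for a normalised Levenshtein score are assumptions.
from difflib import SequenceMatcher
def ngram_set(text, n=5):
    tokens = text.split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
def similarity(a, b, n=5):
    """Average of a character-level edit-similarity ratio and 5-gram
    Jaccard overlap between two segments."""
    edit_sim = SequenceMatcher(None, a, b).ratio()
    ga, gb = ngram_set(a, n), ngram_set(b, n)
    ngram_sim = len(ga & gb) / max(1, len(ga | gb))
    return 0.5 * edit_sim + 0.5 * ngram_sim
def filter_test(test_segments, train_segments, threshold=0.75):
    """Drop any test segment with >= threshold similarity to training data."""
    return [t for t in test_segments
            if all(similarity(t, tr) < threshold for tr in train_segments)]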
In the interest of using all the data available, we also compile all segments in a given language into a dataset for each target language. This includes any segment that would not fit the inter-lingual alignment criteria applied above. This will now be referred to as the ‘full dataset’. These larger training sets allow us to train beyond the 14.7k aligned segments and make use of the total volume of available segments in order to explore what impact that would have on results. The full training sets range from 107k (CS) to 223k (DE) examples, as shown in Table <ref>.
§.§ Model
We use the Llama 3 8B Instruct model and its associated tokenizer <cit.>. The decision between the Instruct and the base model is based on an extensive MT evaluation of Llama 3 models <cit.> using the Flores-200[<github.com/facebookresearch/flores/blob/main/flores200/README.md>] dataset <cit.>. Even though <cit.> dealt with the opposite language direction (X to English), we consider the close results between Instruct and the base model involving the five languages included in this paper to be a good indicator of proximity in performance between the models.
Our baseline consists of the test set metric results obtained from the out-of-the-box Llama 3 8B Instruct model.
We use QLoRA <cit.> for efficient fine-tuning with 4-bit quantisation using Hugging Face Transformers. We perform fine-tuning on a high performance cluster with four A100-SXM4-80GB GPUs. From Hugging Face, we leverage the Supervised Fine-Tuning Trainer (SFTTrainer),[<https://huggingface.co./docs/trl/en/sft_trainer>] which is a wrapper of the Trainer class[<https://huggingface.co./docs/transformers/en/main_classes/trainer>] optimized for fine-tuning language models like Llama. On the largest dataset size, fine-tuning takes approximately 2.3 hours (Appendix <ref>).
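A condensed sketch of this setup with Transformers, PEFT, and TRL is shown below; all hyperparameter values (LoRA rank, target modules, sequence length) are illustrative assumptions rather than our exact configuration, and the SFTTrainer argument names vary slightly across TRL versions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig
from trl import SFTTrainer
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
                  target_modules=["q_proj", "k_proj", "v_proj", "o_proj"])
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    peft_config=lora,
    train_dataset=train_dataset,   # assumed: TM pairs formatted as in Appendix B
    dataset_text_field="text",
    max_seq_length=512,
)
trainer.train()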
§.§ Inference
§.§.§ Prompting
At inference time, we use many of the recommended parameters from previous work <cit.> and model documentation to produce translation outputs from the baseline model and the fine-tuned versions (cf. Appendix <ref>). Meta’s Llama 3 documentation[<https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3/>] provides a recommended prompt format and instructions to implement special tokens during inference and training <cit.>.
The prompt and the source segment are passed to the model for inference to obtain each translation. This constitutes zero-shot prompting, as no examples are included in the prompt <cit.>. A JSON scheme ({“translation”: “string”}) is also added to the prompt in order to obtain a structured output <cit.>. During training, the same format is applied with the addition of the specific EOS token (<|end_of_text|>), as recommended by Meta’s documentation (cf. Appendix <ref>).
§.§.§ Translation
In order to obtain higher efficiency, both baseline and fine-tuned models are converted to the CTranslate2[<https://github.com/OpenNMT/CTranslate2>] <cit.> format (with 8-bit quantisation) and provided with parameters for inference (cf. Appendix <ref>).
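The conversion and generation steps can be sketched as follows; build_prompt is a helper assembling the template of Appendix B, and the generation parameters here are placeholders for the values listed in Appendix C.
# Conversion (shell): ct2-transformers-converter --model <model_dir> \
#     --output_dir ct2_model --quantization int8
import ctranslate2
from transformers import AutoTokenizer
def build_prompt(src_lang, tgt_lang, sentence):
    # Assembles the Appendix B template (JSON-scheme system message + source).
    return ("<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
            f"You are a helpful AI assistant for translation from {src_lang} to "
            f"{tgt_lang}. You MUST answer with the following JSON scheme: "
            '{"translation": "string"}<|eot_id|>'
            "<|start_header_id|>user<|end_header_id|>\n"
            f"{sentence}<|eot_id|><|start_header_id|>assistant<|end_header_id|>")
tokenizer = AutoTokenizer.from_pretrained("<model_dir>")
generator = ctranslate2.Generator("ct2_model", device="cuda")
prompt = build_prompt("English", "German", "Click Save to apply the changes.")
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))
results = generator.generate_batch([tokens], max_length=256,
                                   include_prompt_in_result=False)
raw = tokenizer.decode(tokenizer.convert_tokens_to_ids(results[0].sequences[0]))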
§.§.§ Stopping Criteria and Post-processing
In early experiments, we observe frequent instances of overgeneration; an issue recently explored further by <cit.>. By using "}assistant" as a stop_token in our stopping_criteria, we find much less post-processing is required in order to obtain the pure translation.
Our post-processing consists of extracting the translation by removing the `{“translation”: “ ' prefix and the trailing ` ”} '. The newline characters are replaced by spaces. On some occasions, especially in the models produced by the smaller training datasets (1k and 2k examples), further cleaning is required as the model inadvertently overgenerated some HTML tags like ‘<br>’ and ‘<p>’. This is important to properly assess the translation quality.
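A minimal version of this extraction step is shown below; the HTML tag list is an illustrative subset of what we observed.
import re
def extract_translation(raw: str) -> str:
    """Strip the JSON wrapper, flatten newlines, and remove stray HTML tags
    occasionally over-generated by the 1k/2k models."""
    text = raw.strip()
    text = re.sub(r'^\s*\{\s*"translation"\s*:\s*"', "", text)
    text = re.sub(r'"\s*\}\s*$', "", text)
    text = text.replace("\n", " ")
    text = re.sub(r"</?(?:br|p)\s*/?>", "", text, flags=re.IGNORECASE)
    return re.sub(r"\s{2,}", " ", text).strip()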
§.§ Evaluation
To evaluate the performance of our models, we report BLEU <cit.>, chrF++ <cit.>, TER <cit.> via sacreBLEU,[<https://github.com/mjpost/sacrebleu>] and COMET[wmt20-comet-da, <https://github.com/Unbabel/COMET>] <cit.>. We use multiple metrics to make our experiments more comparable to a wider variety of work and to provide insight into certain aspects of performance.
It is important to note that the experiment aims to show the training efficiency of the PEFT fine-tuning method and its ability to approximate the model’s translating capabilities to the training material. Therefore, we pay special attention to the automatic metrics measuring n-gram differences and edits (BLEU, chrF++, TER) whilst still considering the quality estimation aspect of COMET as a means of comparing inter-source languages and other similar research. Our results are compared to those obtained from the baseline model, an out-of-the-box Llama 3 8B Instruct model, and to GPT 3.5. We also ask five professional translators to post-edit 100 translations from the best-performing model into their language pair. They also answer a questionnaire about the quality of the automatically translated segments. The questionnaire asks for comments on the quality of the translations.
§ RESULTS AND DISCUSSION
The results in Table <ref> show an increase in performance across all the languages for all datasets with more than 5k segments compared to the baseline. The fully aligned 14.7k dataset sees a BLEU score increase of 4.8 points, a relative increase of 17.42% on average over the baseline across all target languages, while chrF++ and COMET increase by 7.1 and 16.9 points, respectively. Similarly, TER decreases by 9 points. The 100k+ datasets also demonstrate consistent performance gains with an average increase of 13.7 BLEU, 12.7 chrF++, and 25 COMET, while TER decreases by 15.5.
To provide a point of comparison, we evaluate the performance of GPT-3.5[<https://chat.openai.com/>] on our test set. While GPT-3.5 outperforms our highest-performing model in BLEU and chrF++ for DE and FI, the 100k+ datasets often surpass GPT-3.5 in other languages and metrics. This demonstrates the effectiveness of creating bespoke models through fine-tuning mid-sized LLMs when leveraging domain-specific data. Targeted fine-tuning can yield competitive or superior results compared to larger, general-purpose models like GPT-3.5.
§.§ Small Dataset Deterioration
Regarding translation quality across different training data sizes, we note a deterioration in quality for models trained on the smaller datasets (1k and 2k) in relation to the baseline. Despite a smooth reduction in both training and evaluation loss during training across all sizes, these smaller datasets still lead to poorer performance on all metrics. This can be due to the fact that the 1k and 2k datasets are insufficient to offer the models a wide enough variety of examples, leading to overfitting where the model performs well on training but poorly on the unseen test data <cit.>.
It is possible that the lack of diversity in the smaller models fails to capture the range of linguistic and translation nuances present in the test data which hinders the model’s ability to generalise beyond the specific examples seen during training. Furthermore, the smaller datasets may make the models more susceptible to noise, such as translation errors or inconsistencies, leading to the learning of incorrect patterns and degrading performance on the test data, affecting the automatic metrics results, while the loss continues to drop due to fitting noisy data <cit.>.
Another possible explanation for the deterioration is a decrease in training data quality in the 1k and 2k dataset sizes. To examine this, we use COMET-Kiwi <cit.>, a popular quality estimation metric, to evaluate the quality of the training data. The scores are consistent for each language with variations within a narrow range of 1-2 points (cf. Appendix <ref>). For example, FI has the highest variation with a maximum score of 79.58 (1k and 14.7K) and a minimum score of 78.12 (5k), resulting in a range of only 1.46 points. The minimal variation in score indicates consistent data quality across all dataset sizes for each language. Therefore, the deterioration in performance is unlikely to be due to a decrease in data quality for the 1k and 2k training data sizes.
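Reference-free scoring of the training pairs can be done roughly as below; we assume the public wmt22-cometkiwi-da checkpoint, as the paper cites COMET-Kiwi without pinning a specific release.
from comet import download_model, load_from_checkpoint
kiwi = load_from_checkpoint(download_model("Unbabel/wmt22-cometkiwi-da"))
def score_training_pairs(pairs):
    """pairs: list of (source, target) TM segments; no reference required."""
    data = [{"src": src, "mt": tgt} for src, tgt in pairs]
    return kiwi.predict(data, batch_size=32).system_score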
Hyperparameter fine-tuning could be employed to mitigate this early deterioration in situations where only small datasets are available. This may include dropout or other regularisation techniques to prevent overfitting on small training sets. Adjustment of the learning rate, batch sizes and QLoRA hyperparameters should also be explored to deal with this specific case of deterioration <cit.>.
Overall, a different approach is required in order to obtain gains when the training data is scarce. Our experiments suggest the need for at least 5k examples to achieve an improvement in metrics under the hyper-specific domain and circumstances we explore.
The issues above seem to be mitigated on the larger sets whilst maintaining the same hyperparameters as previously reported (cf. Table <ref>). We observe performance recovery on 5k examples, overtaking the baseline model, then consistently improving over all metrics as dataset size increases, and achieving increasingly impressive results across all metrics when training on anything above the 10k sets and excelling on the 100k+ sets.
§.§ Resource Level
It is interesting to note that the performance for KO has improved after the 14.7k fine-tuning and becomes comparable to or better than the performance of the other language directions, despite the lower initial baseline score across all metrics. For instance, the COMET score for the KO baseline is 36.5 while the average for all other languages is 57.7. We find that the lower resource languages (KO being the lowest of the target languages explored) have the highest relative gains, turning around a very poor baseline across all metrics. The COMET score for KO increased to 84.3 compared to the average of 84.5 in the 100k+ datasets for PR-BR, DE, FI, and CS, resulting in KO’s comparable performance to the high resource languages, i.e. PT-BR and DE.
These results probably relate not only to the resource level of the language but also to the amount of Korean data in the Llama 3 training recipe. According to MetaAI, “over 5% of the Llama 3 pre-training dataset consists of high-quality non-English data that covers over 30 languages” <cit.>. While the Llama Team provides more detail on the training and data mix of Llama 3, the exact proportion of Korean data is not discussed <cit.>. Our baseline metrics suggest that Korean does not feature highly on that list given that it scores significantly lower than all other languages. This might be attributed to the fact that there were not enough examples to produce a firm understanding of the language but enough to provide a foundation that heavily benefited from fine-tuning. As mentioned, this is an assumption as we lack sufficiently detailed information on the training recipe.
When looking at the target languages, we note that PT-BR shows the best performance at 14.7k and 100k+ dataset. This indicates that, even for a well-resourced language, the foundation model gained a strong understanding of the language during pre-training. However, it did not seem to benefit as much from fine-tuning as KO, a lower resource language. This corroborates the finding that resource level is a strong determiner of LLM MT performance <cit.>.
§.§ Human Evaluation
Regarding the human evaluation, the qualitative comments from the translators reveal that the largest model struggles with ambiguity. Evaluators mention that segments that lacked complete information needed to be completely reworked. For example, the segment, “Get basic, step-by-step instructions to learn" lacks a final object, which impacts the translation. While human translators often face and resolve such ambiguities through research or decision-making with incomplete information, the model processes segments in isolation, unable to access potentially clarifying context from adjacent segments. This limitation provides insight into the model's performance in real-world translation scenarios.
§ CONCLUSIONS
Fine-tuning on TMs has been demonstrated to enhance the performance of LLMs in MT tasks. In this paper, we investigate the relationship between automatic metric results and training set sizes to identify the optimal balance where resource investment yields the most significant improvements in translation quality. In our experiments, it has become evident that fine-tuning on training datasets whose size is larger than 5k examples returned increasingly better results in 19 out of the 20 language-training set size combinations explored.
By leveraging TMs, the model becomes more adept at recognising and reproducing previously translated segments, their style, and terminology. Furthermore, fine-tuning on TM data helps the model adapt to specialised fields.
The test and training sets used come from a much narrower corpus of data than in similar experiments that deal with wider domains, e.g. medicine <cit.>. The hyper-specific nature of the training data employed in our approach may partly explain the promising results. We therefore leverage the advantage that smaller models licensed for business use offer; they can be adapted several times over for narrow and specific domains, as well as multiple languages, with little investment, instead of aiming for a more general-purpose or multilingual model. The hyper-specific purpose of our trained model, i.e. one language direction and a narrow domain, suits the size and ease of training of an 8B parameter model.
Being a commonly experienced scenario in the localization industry, this is an under-explored approach that organisations could be pursuing in order to make the most out of their access to TMs and LLMs for MT in order to obtain the best possible return on investment when leveraging their previously human-translated material.
Low-resource languages seem to be in a perfect position to benefit from leveraging small business-friendly models, like Llama 3 8B. The gains in automatic metric results for KO are substantially higher than for high-resource languages like PT-BR and DE, returning the highest increase in performance compared to the metrics obtained from training on similar set sizes in those languages. KO observes an increase of 130% on COMET from the baseline to the 100k+ dataset, whereas the average increase amongst the other target languages is 46% (cf. Table <ref>).
It is important to mention that, just as <cit.> acknowledges the FLORES-200 dataset leakage into Llama 3, it is possible that some of our test set was also scraped by the Llama 3 models, as parts of the material were published online prior to the Llama 3 family’s pre-training. We face the same challenge as the whole AI researching community, forced to either constantly come up with new test sets or simply acknowledge the potential leakage of test data <cit.>. We urge large tech companies to disclose at a minimum the test sets that were not ingested by their models for the benefit of the whole community. We acknowledge the Llama Team's leadership in this area <cit.>.
§ FUTURE WORK
Future work in the area may benefit from the introduction of checkpoints during training; the resulting intermediate evaluations would enable the visualisation of a clearer learning curve and the identification of potential dips in performance and points of diminishing returns. This approach would facilitate the analysis and allow for a finer and more efficient evaluation process.
In the future, we aim to obtain a bespoke test set directly from the organisation that owns the TMs. This tailored test set would consist of examples specifically designed in-house according to strict guidelines, ensuring they are completely original and reflective of the organisation's unique requirements and style. By using a bespoke and unseen test set, we can more accurately assess the performance of our fine-tuned models in a real-world context.
Finally, further investigation is required with regard to the training hyperparameters across the different dataset sizes in order to obtain better results with smaller training sets under 5k examples. Several strategies can be explored to optimise performance on smaller datasets. Adjustments such as modifying the dropout rates to prevent overfitting, applying regularisation techniques to enhance model generalisation, and fine-tuning the learning rate to ensure efficient convergence can be particularly beneficial in this case.
§ ACKNOWLEDGEMENTS
This research is supported by Science Foundation Ireland through ADAPT Centre (Grant No. 13/RC/2106) (<www.adaptcentre.ie>) at Dublin City University. We thank Alpha-CRC for their essential collaboration.
§ APPENDIX A
§ APPENDIX B
§.§ Special Token Descriptions
<|begin_of_text|>: This is equivalent to the BOS token.
<|eot_id|>: This signifies the end of the message in a turn.
<|start_header_id|>{role}<|end_header_id|>: These tokens enclose the role for a particular message. The possible roles can be: system, user, assistant.
<|end_of_text|>: This is equivalent to the EOS token.
§.§ Prompt
<|begin_of_text|>
<|start_header_id|>system<|end_header_id|>
You are a helpful AI assistant for translation from {source_language} to {target_language}. You MUST answer with the following JSON scheme: {“translation”: “string”}
<|eot_id|>
<|start_header_id|>user<|end_header_id|>
{source_sentence}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
§.§ Training Prompt
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a helpful AI assistant for translation from {source_language} to {target_language}. You MUST answer with the following JSON scheme: {“translation”: “string”} <|eot_id|>
<|start_header_id|>user<|end_header_id|>
{source_sentence}<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>{target_sentence}<|end_of_text|>
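For reference, the templates above can be assembled programmatically as in the following Python sketch. This is our own illustration; the exact whitespace around the header tokens, and whether the assistant turn carries the raw target sentence or the JSON-formatted answer, are assumptions rather than details confirmed by the appendix.

```python
SYSTEM = ('You are a helpful AI assistant for translation from {src} to {tgt}. '
          'You MUST answer with the following JSON scheme: {{"translation": "string"}}')

def inference_prompt(src, tgt, sentence):
    return ("<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
            + SYSTEM.format(src=src, tgt=tgt) + "<|eot_id|>"
            + "<|start_header_id|>user<|end_header_id|>\n"
            + sentence + "<|eot_id|><|start_header_id|>assistant<|end_header_id|>")

def training_prompt(src, tgt, sentence, target):
    # Same as the inference prompt, with the reference translation appended
    # as the assistant turn and closed by the EOS token.
    return inference_prompt(src, tgt, sentence) + target + "<|end_of_text|>"

print(inference_prompt("English", "Korean", "Save your changes."))
```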
§ APPENDIX C
§ APPENDIX D
|
http://arxiv.org/abs/2409.02998v1 | 20240904180009 | Constructing the Infrared Conformal Generators on the Fuzzy Sphere | [
"Giulia Fardelli",
"A. Liam Fitzpatrick",
"Emanuel Katz"
] | hep-th | [
"hep-th",
"cond-mat.stat-mech"
] |
|
http://arxiv.org/abs/2409.03750v1 | 20240905175906 | Pion electroproduction measurements in the nucleon resonance region | [
"R. Li",
"N. Sparveris",
"H. Atac",
"M. K. Jones",
"M. Paolone",
"Z. Akbar",
"M. Ali",
"C. Ayerbe Gayoso",
"V. Berdnikov",
"D. Biswas",
"M. Boer",
"A. Camsonne",
"J. -P. Chen",
"M. Diefenthaler",
"B. Duran",
"D. Dutta",
"D. Gaskell",
"O. Hansen",
"F. Hauenstein",
"N. Heinrich",
"W. Henry",
"T. Horn",
"G. M. Huber",
"S. Jia",
"S. Joosten",
"A. Karki",
"S. J. D. Kay",
"V. Kumar",
"X. Li",
"W. B. Li",
"A. H. Liyanage",
"D. Mack",
"S. Malace",
"P. Markowitz",
"M. McCaughan",
"Z. -E. Meziani",
"H. Mkrtchyan",
"C. Morean",
"M. Muhoza",
"A. Narayan",
"B. Pasquini",
"M. Rehfuss",
"B. Sawatzky",
"G. R. Smith",
"A. Smith",
"R. Trotta",
"C. Yero",
"X. Zheng",
"J. Zhou"
] | nucl-ex | [
"nucl-ex",
"nucl-th"
] |
Affiliations:
[temple] Temple University, Philadelphia, PA 19122, USA
[jlab] Jefferson Lab, Newport News, VA 23606, USA
[NMSU] New Mexico State University, Las Cruces, NM 88003, USA
[florida] Florida International University, University Park, Florida 33199, USA
[5] Catholic University of America, Washington, DC 20064
[6] Hampton University, Hampton, VA 23669
[7] Mississippi State University, Miss. State, MS 39762
[8] The College of William and Mary, Williamsburg, VA 23185
[9] Old Dominion University, Norfolk, VA 23529
[10] University of Regina, Regina, SK S4S 0A2, Canada
[11] Argonne National Laboratory, Lemont, IL 60439
[12] Artem Alikhanian National Laboratory, Yerevan, Armenia
[13] University of Tennessee, Knoxville, TN 37996
[14] Veer Kunwar Singh University, Arrah, Bihar 802301, India
[15] University of Pavia, 27100 Pavia PV, Italy
[16] Duke University, Durham, NC 27708
[17] University of Virginia, Charlottesville, VA 22904
[18] INFN, 27100 Pavia (PV), Italy
[19] Virginia Polytechnic Institute & State University, Blacksburg, Virginia 24061, USA
Authors: R. Li [temple], N. Sparveris [temple] (corresponding author, [email protected]), H. Atac [temple], M. K. Jones [jlab], M. Paolone [NMSU], Z. Akbar [17], M. Ali [NMSU], C. Ayerbe Gayoso [8], V. Berdnikov [5], D. Biswas [6,19], M. Boer [19], A. Camsonne [jlab], J.-P. Chen [jlab], M. Diefenthaler [jlab], B. Duran [temple], D. Dutta [7], D. Gaskell [jlab], O. Hansen [jlab], F. Hauenstein [9], N. Heinrich [10], W. Henry [jlab], T. Horn [5], G. M. Huber [10], S. Jia [temple], S. Joosten [11], A. Karki [7], S. J. D. Kay [10], V. Kumar [10], X. Li [16], W. B. Li [8], A. H. Liyanage [6], D. Mack [jlab], S. Malace [jlab], P. Markowitz [florida], M. McCaughan [jlab], Z.-E. Meziani [11], H. Mkrtchyan [12], C. Morean [13], M. Muhoza [5], A. Narayan [14], B. Pasquini [15,18], M. Rehfuss [temple], B. Sawatzky [jlab], G. R. Smith [jlab], A. Smith [16], R. Trotta [5], C. Yero [florida], X. Zheng [17], J. Zhou [16]
§ ABSTRACT
We report new pion electroproduction measurements in the Δ(1232) resonance, utilizing the SHMS - HMS magnetic spectrometers of Hall C at Jefferson Lab. The data focus on a region that exhibits a strong and rapidly changing interplay of the mesonic cloud and quark-gluon dynamics in the nucleon. The results are in reasonable agreement with models that employ pion cloud effects and chiral effective field theory calculations, but at the same time they suggest that an improvement is required to the theoretical calculations and provide valuable input that will
allow their refinement. The data illustrate the potential of the magnetic spectrometer setup in Hall C for the study of the Δ(1232) resonance. These first reported results will be followed by a series of measurements in Hall C that will expand the studies of the Δ(1232) resonance, offering high-precision insight within a wide kinematic range from low to high momentum transfers.
PACS: 13.60.Fz Transition Form Factors
§ INTRODUCTION
The first excited state of the nucleon dominates many nuclear phenomena at energies above the pion-production threshold and holds a central role in the physics of the strong interaction. The study of the N→Δ transition form factors (TFFs) has allowed an in-depth exploration of various aspects of the nucleonic structure. Among the early interests in these measurements, one finds the effort to decode the complex quark-gluon and meson cloud dynamics of hadrons that give rise to non-spherical components in their wavefunction, that in a classical limit and at large wavelengths will correspond to a “deformation" <cit.>. For the proton, the only stable hadron, the vanishing of the spectroscopic quadrupole moment, due to its spin 1/2 nature, precludes access to the most direct observable of deformation. As a result, the presence of the resonant quadrupole amplitudes E^3/2_1+ and S^3/2_1+ (or E2 and C2 photon absorption multipoles respectively) in the predominantly magnetic dipole M^3/2_1+ (or M1) γ^* N→Δ transition emerged as the experimental signature for such an effect <cit.>.
The relative strength of the E2 and C2 amplitudes is normally quoted in terms of their ratio to the dominant magnetic dipole, namely through the EMR and CMR ratio, respectively.
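For concreteness, one common convention (an assumption stated here for the reader, since the text does not fix a definition) evaluates these ratios at the resonance position, where the resonant multipoles are predominantly imaginary:

```latex
\mathrm{EMR} = \left.\frac{\operatorname{Im} E_{1+}^{3/2}}{\operatorname{Im} M_{1+}^{3/2}}\right|_{W=M_\Delta},
\qquad
\mathrm{CMR} = \left.\frac{\operatorname{Im} S_{1+}^{3/2}}{\operatorname{Im} M_{1+}^{3/2}}\right|_{W=M_\Delta}.
```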
The TFFs have been explored up to four momentum transfer squared Q^2=6 (GeV/c)^2 <cit.>. The results have been found in reasonable agreement with models invoking the presence of non-spherical components in the nucleon wavefunction. Under the prism of the constituent-quark picture of hadrons, these
amplitudes are a consequence of the non-central, color-hyperfine
interaction among quarks <cit.>. Nevertheless, this mechanism provides only a fraction of the
observed signal at low momentum transfers. The predicted quadrupole amplitudes <cit.> are an order of magnitude smaller than the experimental results, and the dominant magnetic dipole amplitude comes ≈ 30% short of the experimental measurements. The source for these dynamical
shortcomings can be traced to the fact that such quark models do not respect chiral
symmetry, whose spontaneous breaking leads to strong emission of
virtual pions (Nambu-Goldstone Bosons) <cit.>. These couple to
nucleons as σ⃗·p⃗ where σ⃗ is the
nucleon spin, and p⃗ is the pion momentum. The coupling is
strong in the p wave and mixes in non-zero angular momentum
components. Based on this, it is physically reasonable to expect
that the pionic contributions increase the M1 and dominate the E2
and C2 transition matrix elements at low Q^2. This has been indicated by the inclusion of pionic effects in quark
models <cit.>, in pion
cloud model calculations <cit.>, and recently
demonstrated in chiral Effective Field Theory (χEFT) calculations
<cit.>.
The χEFT provides a firm theoretical framework at low scales, with
the relevant symmetries of QCD built in consistently. A challenge for the
N to Δ transition involves the interplay of
two light mass scales, the pion mass and the N - Δ mass difference.
Studies to consider these two mass scales have been performed
within the framework of heavy-baryon chiral perturbation theory <cit.>,
the “ϵ-expansion” scheme <cit.> where
the two pion mass and the Δ-resonance excitation energy scales
are counted as being of the same order, and
the “δ-expansion” scheme <cit.> that provides
an energy-dependent power-counting scheme that takes into
account the large variation of the Δ-resonance contributions
with energy, and treats the two light scales ϵ and δ
on a different footing, counting ϵ∼δ^2,
the closest integer-power relation
between these parameters in the real world.
The direct path to calculate the N to Δ
transition form factors starting from the underlying theory of QCD is provided by Lattice QCD (LQCD).
The LQCD calculations <cit.> have been performed so far with pion mass down to ∼ 300 MeV, where the Δ is still stable.
These results tend to somewhat underestimate the M1, similarly to what has been observed in results for the nucleon EM form factors.
The LQCD results for EMR and CMR ratios on the other hand exhibit remarkable agreement with the experimental measurements, indicating that the ratios are much less affected by lattice artifacts than each of the quantities separately. The statistical uncertainties of the early LQCD results for the two ratios are relatively large due to the fact that the quadrupole amplitudes are sub-dominant and challenging to determine. Progress in recent years enables LQCD calculations to be conducted with physical pion mass, and with statistical uncertainties that are comparable to the experimental ones. Such efforts are currently ongoing, thus making the need for new experimental measurements timely and important. A nice feature of the Lattice QCD calculations is that they have the ability to offer valuable geometrical insight to the nucleon, as illustrated e.g.
through calculations of the three-dimensional contour plot of the Δ^+ <cit.> and of the Δ^+ quark transverse charge density <cit.>.
§ THE EXPERIMENTAL MEASUREMENTS
The reported data were acquired in Hall C of Jefferson Lab during the E12-15-001 experiment.
For the measurement of the ep→epπ^∘ reaction, electrons with an energy of 4.56 GeV at beam currents up to 20 μA were produced by Jefferson Lab’s Continuous Electron Beam Accelerator Facility (CEBAF). The electrons were scattered from a 10 cm long liquid-hydrogen target at a temperature of 19 K. The thickness of the aluminum target cell at the entrance and exit is 0.150 (11) mm and 0.191 (19) mm, respectively.
For every kinematical setting, data were taken with a target made of two aluminum foils located at the positions of the cryotarget entrance
and exit windows, each having a thickness of 0.6463(10) mm, in order to subtract the background contributions emerging from the target walls by scaling the thicknesses of the two targets.
The scattered electron and recoil proton of the reaction are detected with two magnetic spectrometers, in coincidence.
The outgoing pion is identified through the reconstructed missing mass spectrum.
The polar angle θ_γ^*π of the reaction is defined as the center-of-mass (c.m.) polar angle of the pion with respect to the momentum transfer direction.
The azimuthal angle of the reaction ϕ_γ^*π defines the angle between the plane of the two (incoming and scattered) electrons and the pion-proton plane.
The four-momentum of the outgoing pion, denoted by 𝐪', is reconstructed as 𝐪'=𝐤+𝐩-𝐤'-𝐩', where 𝐤 and 𝐩 are the four-momenta of the incoming electron and the target proton, while 𝐤' and 𝐩' are the four-momenta of the final electron and proton, respectively. The four-momentum of the virtual photon is 𝐪=𝐤-𝐤', with Q^2 ≡-𝐪^2.
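As a sketch of this reconstruction, the following Python fragment (our own illustration; the synthetic kinematics are placeholders, not experimental values) computes the missing mass squared from the measured four-vectors, with metric signature (+,−,−,−) and all quantities in GeV:

```python
import numpy as np

def minkowski_sq(v):
    """Invariant mass squared of a four-vector v = (E, px, py, pz)."""
    return v[0]**2 - v[1:] @ v[1:]

def missing_mass_sq(k, p, kp, pp):
    """M_x^2 for ep -> e'p'X from beam (k), target (p), e' (kp) and p' (pp)."""
    return minkowski_sq(k + p - kp - pp)

# Synthetic kinematics arranged so the undetected system is a pi0 at rest:
pi0 = np.array([0.135, 0.0, 0.0, 0.0])
k   = np.array([4.56, 0.0, 0.0, 4.56])
kp  = np.array([2.10, 0.5, 0.0, 2.03])
pp  = np.array([1.80, -0.4, 0.1, 1.40])
p   = kp + pp + pi0 - k
assert np.isclose(missing_mass_sq(k, p, kp, pp), 0.135**2)
```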
The beam properties were monitored throughout the experiment with the Hall C beam diagnostic elements. The beam position monitors (BPMs), which consist of a 4-wire antenna array of open-ended thin wire striplines tuned to the RF frequency of the beam, were used to determine the position and the direction of the beam at the experimental target point. The beam current monitors (BCMs), a set of resonant-cavity based beam-current monitors and a parametric current transformer monitor, were used for the continuous non-intercepting beam current measurements. The beam size was measured using harp scanners, which moved a thin wire through the beam. The beam was rastered over a 2×2 mm^2 area to avoid overheating the target. The beam energy was determined with an uncertainty of 0.06% by measuring the bend angle of the beam, on its way into Hall C, as it traversed the Hall C arc dipole magnets. The total accumulated beam charge was determined with 0.5% uncertainty. The liquid-hydrogen target density receives contributions from both the target temperature and target boiling effects. The density of the liquid hydrogen target has a nearly linear dependence on the temperature. The temperature is 19 K ± 0.03 K (intrinsic electronics noise) ± 0.05 K (systematic uncertainty), resulting in a target density of 0.0725±0.0003 g/cm^3. For the target boiling effects, a correction was applied to account for the change in the target density caused by beam heating, contributing a density fluctuation of 0.7% at the maximum current of 20 μA used in the experiment. The target length is measured to be 100 ± 0.26 mm, thus contributing a 0.26% uncertainty to the cross-section measurement.
Two magnetic spectrometers, the Super High Momentum Spectrometer (SHMS) and the High Momentum Spectrometer (HMS) were used to detect, in coincidence, the scattered electrons and recoil protons, respectively. Both spectrometers involve a series of superconducting magnets, including quadrupoles and dipoles, followed by a set of particle detectors.
The dipole magnets deflect charged particles vertically as they enter the detector huts, while the quadrupole magnets optimize the flux of the charged particles entering the dipole magnet and focus the orbits of the charged particles into the detector huts. The two spectrometers are equipped with similar detector packages, with some differentiation due to the different momentum ranges of the spectrometers. The SHMS is also equipped with a Pb-glass calorimeter that can serve as a particle identification detector. A pair of drift chambers, each with 6 wire planes, separated by about a meter was used to provide the tracking of the detected particles. The uncertainty in the determination of the tracking efficiency was 0.5% and 1% for the SHMS and the HMS, respectively.
A set of hodoscope planes was used to form the trigger and to
provide time-of-flight information. The time-of-flight in the HMS spectrometer was used for the proton identification, providing a > 20 ns separation from kaons and pions.
The trigger efficiency of both spectrometer arms is at the 99.9% level and comes with a ±0.1% uncertainty.
For the correction due to the proton absorption in the spectrometer, elastic hydrogen data was taken to determine the fractional loss of protons due to inelastic collisions with material as the proton travelled from the target to the focal plane hodoscope. The fractional loss was determined with an uncertainty of 0.20%. This correction was applied to the data and the error was included in the systematic uncertainty of the measurement.
The particle tracks are traced, through the spectrometer optics, to the target to provide the particle momentum, scattering angle and target position information. Both spectrometers offer a better than 0.1% momentum resolution and an angular resolution of ∼ 1 mrad. The determination of the scattering angle for the SHMS and the HMS spectrometers comes with a 0.5 mrad uncertainty that is determined from constraints on the elastic kinematic reconstruction.
The coincidence time was determined as the difference in the time-of-flight between the two spectrometers, accounting for path-length variation corrections from the central trajectory and for the individual start-times. The experimental setup provided a better than 1 ns (FWHM) resolution in the coincidence timing spectrum that was measured within an 80 ns timing window. Random coincidences were subtracted using the accidental bands of the coincidence time spectrum. The uncertainty to the live-time correction, that accounts for the electronics and computer dead-time, ranged between 0.3% and 0.6% for the different kinematic settings of the experiment. To estimate the systematic error on this correction, we used the standard deviation of the Gaussian fit to the histogram of the deadtime of the runs used in each kinematic setting. The duration of each run was typically about half an hour of beam time, and the number of runs per kinematic setting ranged from about 50 to 100.
The events of the exclusive reaction ep→epπ^∘ were identified from the missing-mass reconstruction, through a selection cut around the photon peak in the missing-mass-squared spectrum. The true momentum settings of the two spectrometers were determined based on a cross-calibration method that utilizes pairs of azimuthal asymmetry measurements. Here, the momentum and position of the electron spectrometer remain the same between the two kinematical settings. The momentum setting for the proton spectrometer also remains constant, while the proton spectrometer is re-positioned symmetrically with respect to the momentum transfer direction. Since the two kinematical settings involve identical momentum settings for each of the two spectrometers, the determination of their absolute momentum settings comes from a unique solution for both kinematics, that simultaneously calibrates the reconstructed missing mass peak to the physical value of the pion mass. Following the above procedure, the correction between the set and the true values in the central momentum of the two spectrometers was determined to be smaller than 0.1%.
To determine the stability over time as well as the proper normalization, elastic scattering measurements with a proton target were performed throughout the experiment. The results are stable and consistent, within the experimental uncertainties, with the world elastic data. This demonstrates consistent control of the luminosity, target density and beam position, along with the ability to position the spectrometers reliably in the experimental hall and to consistently set and control their central momenta.
§ RESULTS AND DISCUSSION
The five-fold differential cross section for the
p(e,e'p)π^0 reaction is written as a sum of two-fold
differential cross sections with an explicit ϕ^*
dependence as
d^5σ/(dΩ_e dΩ^*_π dω) = Γ (σ_T + ϵ σ_L + v_LT σ_LT cos ϕ_π q^* + ϵ σ_TT cos 2ϕ_π q^*)
where ϕ_π q^* is the pion center of mass azimuthal angle with
respect to the electron scattering plane, v_LT=√(2ϵ(1+ϵ)), ϵ is the transverse polarization of the virtual photon,
and Γ is the virtual photon flux.
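The decomposition above translates directly into code; the Python helper below (our own sketch, useful e.g. for fitting the ϕ_π q^* dependence of measured cross sections; units follow whatever the partial cross sections are supplied in) evaluates it:

```python
import numpy as np

def dsigma(sig_T, sig_L, sig_LT, sig_TT, eps, phi, Gamma=1.0):
    """Five-fold cross section from the four partial cross sections."""
    v_LT = np.sqrt(2.0 * eps * (1.0 + eps))
    return Gamma * (sig_T + eps * sig_L
                    + v_LT * sig_LT * np.cos(phi)
                    + eps * sig_TT * np.cos(2.0 * phi))
```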
The differential cross sections
(σ_T,σ_L,σ_LT,σ_TT)
are all functions of the
center of mass energy W, the four momentum transfer squared Q^2,
and the pion center of mass polar angle θ_π q^* and they are bilinear combinations
of the multipoles. The E2 and C2 amplitudes manifest themselves mostly
through the interference with the dominant dipole (M1) amplitude.
The longitudinal-transverse (LT) response is sensitive to
the C2 amplitude through the interference of the C2 amplitude
with the M1, while the transverse-transverse (TT) response is
sensitive to the E2 amplitude through the interference of the
E2 amplitude with the M1. The σ_T + ϵσ_L partial cross section is
dominated by the M1 multipole.
For the measurement of the cross section, the coincidence acceptance is determined with the Hall C Monte Carlo simulation program, SIMC, which integrates the beam configuration, target geometry, spectrometer acceptances, resolution effects, energy losses and radiative corrections. The cross section is first averaged over the multidimensional phase space within the measured analysis bin; a kinematic translation procedure, namely bin centering corrections, then converts the cross section averaged over finite phase space to a point cross section extracted at the central kinematic
values of the phase space. For that part, theoretical predictions from various models are integrated in the simulation of the experiment and are studied over the same volume in phase space as the data. The bin centering corrections are small, typically 2% to
3%, indicating that the cross section tends to vary smoothly
and fairly symmetrically through the phase space. The systematic uncertainty to this correction is studied by employing different theoretical models as well as by applying variations to the size of the analysis bins, and has been found to be small compared to the experimental uncertainties.
The measurements were conducted at intermediate momentum transfer kinematics of Q^2=0.36 GeV^2. Cross sections were measured within a W range from 1210 MeV to 1250 MeV, with an extended coverage in the polar angle θ_π q^*, and a reach in the azimuthal angle ϕ_π q^* that extends from in-plane kinematics up to 50^∘ out-of-plane angles. A subset of the measured cross sections, for the in-plane kinematics, is shown in Fig. <ref>. The data are compared to the theoretical predictions of MAID <cit.>, DMT <cit.> and SAID <cit.>. The MAID and SAID calculations are primarily phenomenological, while the DMT contains explicit pion cloud contributions. An observation is that while the models follow a similar θ_π q^* dependence, they tend to disagree with each other in absolute magnitude, and occasionally with the data across the resonance region. Fig. <ref> gives insight into the W dependence of the measured cross section. The MAID prediction tends to overestimate the measured cross sections at the lower wing of the resonance, similar to what has been observed in previous measurements at Q^2 below 0.2 GeV^2 <cit.>.
Overall, improvements are in order for all the models, and the reported measurements provide new input and guidance towards this direction. The reported measurements are summarized in Table <ref> and Table <ref>. Fits of the resonant amplitudes have been performed at Q^2=0.36 GeV^2 while taking into account the background amplitude contributions from MAID and DMT. In these fits, the differences between the model descriptions of the background terms
result in a deviation of the fitted amplitudes, which is indicative of the level of the model uncertainty. We find CMR=(-5.85 ± 0.28_exp ± 0.20_mod)% and EMR=(-1.93 ± 0.50_exp ± 0.10_mod)%.
The extracted quadrupole and magnetic dipole amplitudes are in good agreement with the trend of the world data and they deviate considerably from the Constituent quark model (CQM) predictions e.g. <cit.>, reconfirming that the color hyperfine interaction is inadequate to explain the effect at large distances. A more meaningful comparison is provided by the theoretical model predictions from MAID <cit.>, DMT <cit.>, SAID <cit.>, and the ChEFT calculation <cit.>, as shown in
Fig. <ref>. For the ChEFT <cit.>, an estimate of the model uncertainty is derived by calculating the magnitude of the next order terms in
the chiral expansion. This results in a theoretical uncertainty of ∼± 1% and ± 2 % for the EMR and the CMR ratios, respectively, in the region around Q^2=0.2 GeV^2. The calculation is solidly based on QCD and successfully accounts for the magnitude of the effects for the EMR, while for the CMR a rapid divergence from the experimental measurements is observed above Q^2=0.2 GeV^2. The ChEFT calculation gives overall credence to the dominance of the meson cloud; nevertheless, the size of the theoretical uncertainties makes the need for the next order calculation obvious. The reported data overlap with the low-Q^2 domain of the CLAS measurements <cit.> and confirm their findings. The data illustrate the potential of employing the experimental setup in Hall C for the study of the Δ(1232) resonance. A series of follow-up experiments using the same experimental setup has been approved at JLab, and will expand these studies with high precision measurements within a wide kinematic range from Q^2=0.01 GeV^2 to 0.7 GeV^2. At low momentum transfers, they will allow an in-depth study of the mesonic cloud dynamics in a region where they are dominant and will provide a stringent test of the QCD prediction that the two quadrupole amplitudes converge at Q^2→ 0 <cit.>. At higher Q^2, the CLAS data suggest a steeper fall-off for the CMR compared to the findings of the high precision recoil polarization measurement of Hall A at Q^2=1 GeV^2 <cit.>, as seen in Fig. <ref>. Here, the upcoming measurements in Hall C will come to complement the CLAS data, adding to our understanding of the high-Q^2 dependence of the transition form factors.
In conclusion, we present cross section measurements of the π^∘ electroproduction reaction in the Δ(1232) resonance region, at intermediate momentum transfer kinematics of Q^2=0.36 GeV^2. The data provide a precise determination of the two quadrupole and of the magnetic dipole N→Δ transition form factors. The cross section measurements are found in reasonable agreement with theoretical calculations that include pion cloud contributions and with ChEFT calculations. At the same time, they indicate that some improvement is required to the theoretical calculations and provide valuable input that will allow their refinement, furthering the understanding of nucleon dynamics.
We would like to thank the JLab Hall C technical staff and the Accelerator Division for their outstanding support. This
work has been supported by the US Department of Energy Office of Science, office of Nuclear Physics under contract
no. DE-SC0016577.
§ REFERENCES
Ru75 A. De Rujula, H. Georgi & S. L. Glashow, Phys. Rev. D 12, 147 (1975).
glashow Glashow, S. Physica 96A, 27 (1979).
soh Bernstein, A. & Papanicolas, C. Shapes of hadrons. AIP. Conf. Proc. 904, 1 (2007).
revmod Alexandrou, C., Papanicolas, C. & Vanderhaeghen, M. Rev. Mod. Phys. 84, 1231 (2012).
amb Bernstein, A. Eur. Phys. J. A 17, 349 (2003).
glas2 N. Isgur, G. Karl & R. Koniuk, Phys. Rev. D 25, 2394 (1982).
capstick Capstick, S. & Karl, G. Phys. Rev. D41, 2767 (1990).
pho2 Blanpied, G. Phys. Rev. Lett. 79, 4337 (1997).
pho1 Beck, R. et al. Phys. Rev. Lett. 78, 606 (1997).
pho1b Beck, R. et al. Phys. Rev. C61, 035204 (2000).
frol Frolov, V. et al. Phys. Rev. Lett. 82, 45 (1999).
pos01 Pospischil, T. et al. Phys. Rev. Lett. 86, 2959 (2001).
merve Mertz, C. et al. Phys. Rev. Lett. 86, 2963 (2001).
bart Bartsch, P. et al. Phys. Rev. Lett. 88, 142001 (2002).
Buuren van Buuren, L. et al. Phys. Rev. Lett. 89, 012001 (2002).
joo van Buuren, L. et al. Phys. Rev. C 70, 042201 (2004).
kun00 Kunz, C. et al. Phys. Lett. B. 564, 21 (2003).
Sparveris:2004jn Sparveris, N. F. et al. Phys. Rev. Lett. 94, 022003 (2005).
Kelly:2005jy Kelly, J. Phys. Rev. Lett. 95, 102001 (2005).
kelly Kelly, J. J. et al. Phys. Rev. C75, 025201 (2007).
Stave:2006ea Stave, S. et al. Eur. Phys. J. A30, 471–476 (2006).
ungaro Ungaro, M. et al. Phys. Rev. Lett. 97, 112003 (2006).
Blomberg:2015zma Blomberg, A. et al. Phys. Lett. B760, 267–272 (2016).
Blomberg:2019caf Blomberg, A. et al. Eur. Phys. J. A 55, 182 (2019). 1901.08951.
dina Alexandrou, C. et al. Phys. Rev. Lett. 94, 021601 (2005).
maid D. Drechsel, O. Hanstein, S.S. Kamalov, L. Tiator, Nucl. Phys. A 645, 145 (1999).
Sato:2000jf Sato, T. & Lee, T. Phys. Rev. C 63, 055201 (2001).
Kamalov:1999hs Kamalov, S. & Yang, S. N. Phys. Rev. Lett. 83, 4494–4497 (1999).
Kamalov:2001qg Kamalov, S., Chen, G.-Y., Yang, S.-N., Drechsel, D. & Tiator, L. Phys. Lett. B 522, 27–36 (2001).
SAIDweb R.A. Arndt et al., Phys. Rev. C 66, 055213 (2002). http://gwdac.phys.gwu.edu.
Elsner:2005cz Elsner, D. et al. Eur. Phys. J. A27, 91–97 (2006).
Sparveris:2006uk Sparveris, N. F. et al. Phys. Lett. B651, 102–107 (2007).
longpaper Stave, S. et al. Phys. Rev. C 78, 024209 (2008).
Aznauryan:2009mx Aznauryan, I. G. et al. Phys. Rev. C80, 055203 (2009).
villano Villano, A. N. et al. Phys. Rev. C 80, 035203 (2009).
kirkpatrick J. Kirkpatrick et al. Phys. Rev. C 84, 028201 (2011).
Sparveris:2013ena Sparveris, N. et al. Eur. Phys. J. A49, 136 (2013).
quarkpion1 D.-H. Lu, A. W. Thomas & A. G. Williams, Phys. Rev. C 55, 3108 (1997).
quarkpion2 U. Meyer, E. H. & Buchmann, A. J. Phys. Rev. C 64, 035203 (2001).
quarkpion3 M. Fiolhais, B. Golli & S. Sirca, Phys. Lett. B 373, 229 (1996).
pasc Pascalutsa, V. & Vanderhaeghen, M. Phys. Rev. D 73, 034003 (2006).
hemmert Gail, T. A. & Hemmert, T. R. Eur. Phys. J A 28, 91 (2006).
hqm Sanctis, M. D. et al. Nucl. Phys. A 755, 294 (2005).
mande:94 Mandeville, J. et al. Phys. Rev. Lett. 72, 3325 (1994)
Butler:1993ht M. N. Butler, M. J. Savage & R. P. Springer, Phys. Lett. B 304, 353 (1993).
Gellas:1998wx G. C. Gellas, T. R. Hemmert, C. N. Ktorides & G. I. Poulis, Phys. Rev. D 60, 054022 (1999).
Gail:2005gz Gail, T. A. & Hemmert, T. R. Eur. Phys. J. A 28, 91 (2006).
Pascalutsa:2002pi Pascalutsa, V. & Phillips, D. R. Phys. Rev. C 67, 055202 (2003).
lattice-2 Alexandrou, C. et al. Phys. Rev. D 83, 014501 (2011).
lattice-sh02 Alexandrou, C. et al. Phys. Rev. D 66, 094503 (2002).
lattice-sh09 Alexandrou, C. et al. Phys. Rev. D 79, 014507 (2009).
proposal-lowq Jefferson Lab proposal PR12-22-001, Measurement of the N→Δ Transition Form Factors at low four momentum transfers.
|
http://arxiv.org/abs/2409.03508v1 | 20240905132047 | Revealing Untapped DSP Optimization Potentials for FPGA-Based Systolic Matrix Engines | [
"Jindong Li",
"Tenglong Li",
"Guobin Shen",
"Dongcheng Zhao",
"Qian Zhang",
"Yi Zeng"
] | cs.AR | [
"cs.AR"
] |
Revealing Untapped DSP Optimization Potentials for FPGA-Based Systolic Matrix Engines
Jindong Li^1, 2, 4 Tenglong Li^1, 2, 4 Guobin Shen^1, 2, 3 Dongcheng Zhao^1,2 Qian Zhang^1, 2, 4 Yi Zeng^1, 2, 3, 4, 5
^1Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences
^2 Center for Long-term Artificial Intelligence
^3 School of Future Technology, ^4 School of Artificial Intelligence, University of Chinese Academy of Sciences
^5 Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Chinese Academy of Sciences
{lijindong2022, litenglong2023, shenguobin2021,zhaodongcheng2016, q.zhang, yi.zeng}@ia.ac.cn
Corresponding author: [email protected] and [email protected].
§ ABSTRACT
Systolic architectures are widely embraced by neural network accelerators for their superior performance in highly parallelized computation. The DSP48E2s serve as dedicated arithmetic blocks in Xilinx Ultrascale series FPGAs and constitute a fundamental component in FPGA-based systolic matrix engines. Harnessing the full potential of DSP48E2s in architectural design can result in significant performance enhancements for systolic architectures on Ultrascale series FPGAs.
This paper unveils several previously untapped DSP optimization techniques capable of further enhancing FPGA-based systolic matrix engines.
We apply these techniques to two well-known systolic architectures: Google TPUv1 and Xilinx Vitis AI DPU.
With the proposed techniques, our design achieves substantial resource and power reduction compared to the open-source TPUv1 FPGA implementation and the Vitis AI DPU implementation in the same parallelism setting.
We also demonstrate the applicability of our techniques to neuromorphic hardware for supporting spiking neural network acceleration.
FPGA, DSP48E2, Accelerator, Systolic Array
§ INTRODUCTION
Recent years have witnessed the development of domain-specific hardware architectures tailored for deep learning. Systolic architectures have demonstrated their effectiveness and efficiency in handling matrix multiplication during neural network inference.
In FPGA-oriented design, building a high-performance systolic matrix engine is a non-trivial endeavor. While a naive or high-level synthesis (HLS) implementation may function, it falls short of unlocking the maximum performance potential. Achieving the pinnacle of performance for a systolic matrix engine on an FPGA demands an in-depth understanding of the FPGA's resource characteristics. In the case of the UltraScale series FPGA, harnessing the full potential of the basic systolic array's building block, DSP48E2<cit.>, becomes a paramount consideration.
A significant amount of research has concentrated on DSP-centric optimization to unlock the performance improvement potential<cit.><cit.><cit.><cit.><cit.>. Several techniques have already become common practices in contemporary neural network accelerator designs.
Keeping pace with the evolution of neural network algorithms, new domain-specific hardware architectures have been proposed rapidly. Yet, the systolic architecture continues to play a crucial role as the foundational computing backend.
We recognize the necessity to revisit the DSP48E2 functionalities to determine if further optimization of FPGA-based systolic engines is possible. Our findings in this paper confirm that there are indeed several untapped DSP optimization potentials for FPGA-based systolic engines that can be beneficial to the FPGA-based accelerator research society.
In this paper, we focus on two classic systolic-based neural network accelerators: the Google TPUv1<cit.> and Xilinx Vitis AI DPU<cit.>. We present pratical DSP48E2 techniques that can enhance the performance of the systolic matrix engine in these two accelerators. Our contributions are listed as follows:
1) We propose an in-DSP operand prefetching technique applicable to the weight-stationary (WS) systolic engine. With this technique, our implementation shows substantial resource reduction and frequency improvement compared to the currently widely adopted open-source TPUv1-like design.
2) We delve deeply into the systolic array implementation of the commercially encrypted Vitis AI DPU, identifying existing drawbacks in its design. Subsequently, we propose an in-DSP multiplexing technique and design a ring accumulator that can further reduce resource consumption and power consumption compared to the original official design.
3) We also demonstrate the application of our methods to neuromorphic hardware by offering an enhanced implementation of the systolic-based spiking neural network (SNN) accelerator FireFly<cit.>.
§ RELATED WORKS
DSP-based techniques play an important role in FPGA-based accelerators since DSP blocks serve as the fundamental computing hard blocks, yet being limited and scarce. Our focus in this paper is specifically on the UltraScale series FPGAs, which are widely favored in deep learning applications.
A common practice in neural network accelerator designs involves utilizing the wide input arithmetic unit within the DSP48E2 in UltraScale series FPGAs to enable SIMD low bit-width operations in quantized neural networks<cit.><cit.>.
Xilinx introduced a method that packs two 8-bit integer multiplications sharing a common operand into a single DSP48E2<cit.>. They further proposed another approach that packs the cross products between two pairs of 4-bit operands into a single DSP48E2<cit.>. Zhang et al. proposed UInt-DSP6 packing and packed two more 4-bit multiplications into the DSP48E2 with appropriate overlapping in convolution applications<cit.>. Sommer et al. generalized the multiplication packing technique and introduced an overpacking strategy<cit.>. FireFly<cit.> utilizes the SIMD mode of DSP48E2 to perform multiple synaptic operations for neuromorphic acceleration.
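To make the packing arithmetic concrete, the sketch below reproduces in Python the two-products-per-DSP INT8 trick referenced above: two signed 8-bit weights sharing one activation are packed as (w1 << 18) + w2 before a single 27×18 multiply, and both products are recovered afterwards. The lane-extraction code is a software stand-in for the hardware correction step, not a description of the actual DSP netlist.

```python
def packed_mac(a, w1, w2):
    packed = (w1 << 18) + w2                    # fits the 27-bit pre-adder path
    p = packed * a                              # one 27x18 multiply
    lo = ((p & 0x3FFFF) ^ 0x20000) - 0x20000    # low lane: sign-extend 18 bits
    hi = (p - lo) >> 18                         # high lane: exact once lo is removed
    return hi, lo

for a, w1, w2 in [(-7, 5, -3), (127, -128, 127), (-128, -128, -128)]:
    assert packed_mac(a, w1, w2) == (a * w1, a * w2)
```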
Another common practice in neural network accelerator designs involves leveraging the cascading paths in DSP48E2 for constructing systolic arrays. The DSP48E2 comprises three cascade paths: two to the input A and B ports and one to the output P port.
Xilinx's white paper suggests utilizing the partial sum cascaded chain at the output P port of the DSP48E2 for performing 8-bit integer dot product operations<cit.>.
Additionally, the Xilinx DSP48E2 datasheet recommends the adder-chain implementation over the adder-tree approach for FIR filter applications requiring high speed<cit.>.
Samajdar et al. optimally utilize the cascade chain at the input A and B ports in the DSP48E2 to support fixed-mode convolution<cit.>.
The DSP48E2's superior Fmax performance is also commonly utilized in neural network accelerator designs. The peak clock rate of DSP48E2 can exceed 700MHz for a device of the fastest speed grade<cit.>. To fully utilize its superior Fmax performance, the Xilinx DSP48E2 datasheet recommends employing the DSP48E2 in a time-multiplexed manner<cit.>. Following this guideline, Vitis AI DPUCZDX8G adopts a DSP double data rate (DDR) technique to enhance its systolic engine performance. FireFly v2<cit.> adopts the DDR technique on a neuromorphic SNN accelerator. Additionally, Wu et al. proposed a DSP supertile concept by utilizing the near DSP distributed RAM for fast operand fetching<cit.>.
While the space of DSP48E2 optimization opportunities may appear to have been exhausted, this paper unveils several untapped tricks that can still greatly enhance performance.
§ DSP48E2 OVERVIEW
DSP48E2 serve as fundamental computing components in FPGAs, providing extensive parallelism opportunities for handling intensive computing workloads in neural network acceleration in Xilinx Ultrascale series FPGAs.
The DSP48E2 primarily comprises a 27-bit pre-adder, a 27× 18-bit multiplier, and an SIMD 48-bit accumulator, shown in Fig.<ref>. However, some circuits in the DSP48E2 do not directly contribute to arithmetic computations; instead, they play a crucial role as essential components for reconfigurable functionalities.
The flexible input pipelines inside the DSP48E2 feature two distinct pipeline registers with individual clock enables, along with a dynamic selector present on both the A and B input ports, enabling various input configurations.
The four wide-bus multiplexers route data from different ports to the four-input 48-bit ALU, enabling various dynamic user-controlled operating modes.
The dedicated cascade paths enable direct connections between adjacent DSP48E2s in the same column, enabling high-speed systolic-based applications.
In this paper, we uncover several DSP48E2 techniques by exploring these often-overlooked components in DSP48E2.
§ ENHANCING SYSTOLIC ENGINE OF TPUV1 ON FPGA
Google TPUv1<cit.> is a Tensor Processing Unit (TPU) designed for neural network machine learning. It incorporates a 256× 256 8-bit WS Multiply-Accumulate (MAC) matrix unit, delivering a peak throughput of 92 TOP/s.
In a classic WS systolic array design like TPUv1, weight data are fetched and cached near the PEs in advance, remaining stationary until the arrival of new sets of weight data. Input data flows horizontally into the systolic array, staging into the next PE, while the partial sums flow vertically out of the array, accumulating along the way.
This architecture proves to be efficient for matrix multiplication, which is the computing backend for the widely used nn.Linear layer and nn.Conv2d layer.
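As a functional illustration of this dataflow (a behavioural model provided here for clarity, not RTL), the Python sketch below pins W[i, j] in PE (i, j), feeds activations in with a one-cycle skew per row, and cascades partial sums down the columns:

```python
import numpy as np

def ws_systolic_matmul(X, W):
    """Cycle-level model of a weight-stationary array computing X @ W."""
    M, K = X.shape
    _, N = W.shape
    a = np.zeros((K, N))    # per-PE activation registers
    ps = np.zeros((K, N))   # per-PE partial-sum registers
    out = np.zeros((M, N))
    for t in range(M + K + N):
        for j in range(N):  # collect finished sums at the column bottoms
            m = t - K - j
            if 0 <= m < M:
                out[m, j] = ps[K - 1, j]
        new_a, new_ps = np.zeros_like(a), np.zeros_like(ps)
        for i in range(K):
            for j in range(N):
                # row i receives X[m, i] at the array edge at time m + i
                a_in = a[i, j - 1] if j > 0 else (X[t - i, i] if 0 <= t - i < M else 0.0)
                ps_in = ps[i - 1, j] if i > 0 else 0.0
                new_a[i, j] = a_in
                new_ps[i, j] = ps_in + a_in * W[i, j]
        a, ps = new_a, new_ps
    return out

X, W = np.arange(6.).reshape(2, 3), np.arange(12.).reshape(3, 4)
assert np.allclose(ws_systolic_matmul(X, W), X @ W)
```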
tinyTPU<cit.> is the most widely adopted open-source FPGA-based TPUv1 design<cit.><cit.><cit.>. It largely follows the TPUv1 design but uses a smaller systolic matrix, with five configurable sizes ranging from 6× 6 to 14× 14 that can fit into edge FPGAs.
Libano's systolic array generator represents a state-of-the-art FPGA-based TPUv1-like systolic array implementation, serving as the DUT for its matrix multiplication error detection research<cit.>.
Libano's implementation incorporates INT8 packing tricks and the DSP DDR technique, techniques that are absent in tinyTPU, thus resulting in a superior design with enhanced performance. Fig.<ref>A shows a TPUv1 like 4× 4 systolic matrix engine.
§.§ Drawbacks in Existing Implementations
Despite the popularity of tinyTPU and the DSP-centric considerations in Libano's design, both implementations still fall short of achieving optimal performance.
tinyTPU does not employ the common INT8 packing techniques, resulting in half the computing density. Furthermore, activations at each row of the systolic array are broadcast to all columns of DSP48E2, instead of using pipelining registers. This approach leads to high fan-out and negatively impacts the frequency performance.
Libano's implementation fails to absorb the partial sums accumulating path into the DSP48E2, leading to excessive consumption of CLB resources.
In our implementation shown in Fig.<ref>B, we integrate a comprehensive set of techniques, including INT8 packing and in-DSP partial sums cascading. Furthermore, we have identified a critical yet often neglected aspect, the weight loading datapath, that could potentially become the performance bottleneck for the WS systolic array. Recognizing this, we introduce an innovative in-DSP operand prefetching technique designed to optimize the weight loading datapath.
§.§ Enhancement: In-DSP Operand Prefetching
In the WS systolic array, weights need to be preloaded into the array. To hide the preloading latency, each PE of the WS systolic array must instantiate two sets of registers. This setup enables the ping-pong update for loading the next set of weights.
In ASIC design, there's little room for optimizing register consumption. However, in FPGA design, it is possible to utilize existing resources for the ping-pong weight loading path rather than instantiate extra CLB flip-flops.
The DSP48E2 has two flexible input pipelines for input ports A and B, each comprising two registers with individual clock enables. These input pipelines can accept individual input from the general routing resources or share the same cascading path.
In this section, we demonstrate how to perform in-DSP operand prefetching by absorbing the weight ping-pong registers into the DSP48E2 pipelines, shown in Fig.<ref>.
In an FPGA-based WS systolic array, the accumulating datapath can be absorbed into the vertical DSP48E2 output cascading path.
The unused input cascaded path can be utilized for weight prefetching. Assuming the input pipeline for B is used for weight prefetching, Fig.<ref> illustrates the static configuration of the pipeline.
In this setup, B_1 registers serve as the operand prefetching path, while B_2 registers remain static across multiple computation rounds. New operands stream into the B_1 register chain until they reach the topmost DSP48E2. When operands cached in the B_2 registers expire, the clock enable signals trigger B_2 to accept new operands from the B_1 chain. The waveform of the clock enable signals for B_1 and B_2 registers is presented Fig.<ref>.
The entire operand prefetching process, along with the partial sums accumulating path, occurs entirely within the DSP, flowing vertically down the DSP48E2 column. This results in significant savings in CLB flip-flops.
§.§ Experiments
We compare our implementation with tinyTPU and Libano's implementation under Vivado's out-of-context mode. This mode allows us to independently assess the performance of the systolic matrix engine without the interference of other components.
Table.<ref> shows that our proposed implementation with the in-DSP operand prefetching technique significantly reduces the usage of LUTs and FFs compared to Libano's implementation, while also achieving a considerable improvement in clock frequency compared to tinyTPU.
§ ENHANCING SYSTOLIC ENGINE OF XLINX DPU
Xilinx developed the Vitis AI DPU as a machine learning solution for FPGA<cit.>.
DPUCZDX8G is the DPU IP core of the Vitis AI for the Zynq UltraScale series FPGA platforms<cit.>.
We begin by providing a brief introduction to the systolic architecture of the DPUCZDX8G. Despite the encryption of the DPUCZDX8G IP core, the documentation for DPUCZDX8G contains ample information about its architecture. Furthermore, the utilization reports and the actual implementation layout obtained in Vivado IDE have verified the accuracy of the public documented description.
DPUCZDX8G employs an output stationary (OS) systolic engine with three levels of parallelism: input channel, output channel, and pixel parallelism. Fig.<ref>A shows the B1024 configuration of DPUCZDX8G.
Each PE shown in Fig.<ref>B comprises a group of DSP48E2 chains performing vector inner product computation. This PE design fully leverages the DSP48E2's dedicated cascaded path to reduce routing complexity.
Bundles of activations flow vertically into the systolic engine, bundles of weights flow horizontally into the systolic engine, and the products continually accumulate in the PE accumulator.
DPUCZDX8G also incorporates a DDR technique, allowing the DSP48E2s to operate at twice the clock rates of other logic components. CLB multiplexers switch between two data portions from a low-speed clock domain, delivering one portion at a time to the high-speed DSP48E2.
Dot products generated by the DSP chain are transferred back to the slow clock domain through a set of flip-flops that perform the serial-to-parallel conversion. These results are then accumulated by the low-speed accumulator.
This clock domain decoupling design maximizes the utilization of the DSP48E2's exceptional Fmax performance, eliminating any lag in the low-speed fabric.
§.§ Drawbacks in DPUCZDX8G's Systolic Engine
While Xilinx's officially-developed systolic architecture of the DPUCZDX8G is already considered near-optimal and has demonstrated superior performance in Xilinx FPGA, it still exhibits several drawbacks as listed below:
1) The employment of CLB multiplexers inevitably consumes general routing resources and fabric logic. Furthermore, as CLB multiplexers span two clock domains, they impose stress on timing constraints. This may lead to a degradation of the clock frequency in the high-speed clock domain.
2) The DDR technique comes at the cost of a harsh doubling of the bandwidth requirement for the weight data.
3) Each fast DSP48E2 chain requires two slow DSP48E2 accumulators in DPUCZDX8G. The Fmax performance of the DSP48E2 accumulator is underutilized, and the required number of accumulators in DPUCZDX8G is costly.
4) Extra LUTs and CARRY8s are required for the grouped partial sums combining and INT8 correction process.
In this section, we unveil several untapped DSP48E2 optimization potentials aimed at addressing these drawbacks.
We successfully absorb the CLB multiplexers introduced by DDR techniques into the DSP48E2, eliminating the need for excessive LUT usage.
We shift the burden of the doubled bandwidth requirement from the weight data to the output results, given that the output results' bandwidth is much smaller in the OS dataflow.
We move the DSP48E2 accumulator from the slow clock domain to the fast clock domain and halve the number of required accumulators without affecting the throughput, thereby significantly reducing DSP48E2 consumption.
We tackle these issues through two new DSP48E2 techniques outlined below and shown in Fig.<ref>C and Fig.<ref>D.
§.§ Enhancement: In-DSP Multiplexing
In DPUCZDX8G, each row of the systolic matrix engine shares the same weights. Two portions of weights from the slow clock domain need to be multiplexed into the fast clock domain and streamed into the systolic engine, progressing through one flip-flop stage per processing element.
In our approach, the need for LUT multiplexing is eliminated, and the staging registers operate at the slow clock domain. This not only significantly alleviates timing closure pressure but also reduces power consumption.
We make use of the two flexible input pipelines to perform in-DSP multiplexing. In the DSP48E2's input pipelines, the pathway from the input port to the multiplier can be configured statically through attribute settings or dynamically switched using a multiplexer.
For simplification in the illustration, we focus on the multiplication between input port A and input port B without any data packing process.
While data stream into the DSP48E2 from the slow clock domain, the DSP48E2 itself operates at a doubled data rate. We designate the slow clock as Clk_× 1 and the fast clock as Clk_× 2.
As shown in Fig.<ref>, input port A is set up with a simple 2-stage pipeline. Activations are streamed into the A_1 and A_2 registers. The clock enable pins for A_1 and A_2 are consistently high, and new data updates A_1 and A_2 every Clk_× 1 cycle.
Conversely, input port B is configured with a ping-pong datapath. Weights are streamed into B_1 and B_2 registers in a ping-pong manner, controlled by the independent clock enable pins for B_1 and B_2. New data updates B_1 or B_2 every two Clk_× 1 cycles.
The multiplexer in the input port B pipeline plays a crucial role: it switches between B_1 and B_2 registers at the speed of Clk_× 2. This enables the multiplier to execute the cross product between activations a_t, a_t+1 and weights w_t, w_t+1 in adjacent Clk_× 1 cycles. As a result, it yields a_tw_t, a_tw_t+1, a_t+1w_t, a_t+1w_t+1, four results every two Clk_× 1 cycles, achieving DDR multiplication.
The streaming activations and weights can be pre-arranged in such interleaved manner in advance.
All the aforementioned signals are depicted in the waveform in Fig.<ref> to provide a clearer illustration.
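A compact behavioural model of this schedule is given below (our own sketch; it captures only the ordering of operands and products, not the DSP48E2 port widths or pipeline depths):

```python
def ddr_cross_products(acts, weights):
    """acts and weights are consumed pairwise, two elements per ping-pong round."""
    products = []
    for t in range(0, len(acts), 2):
        a_pair, w_pair = acts[t:t + 2], weights[t:t + 2]
        for a in a_pair:          # one activation per slow clock cycle
            for w in w_pair:      # mux toggles B_1/B_2 at the doubled rate
                products.append(a * w)
    return products

# Four cross products every two slow cycles: a_t*w_t, a_t*w_{t+1}, a_{t+1}*w_t, a_{t+1}*w_{t+1}
assert ddr_cross_products([1, 2], [10, 20]) == [10, 20, 20, 40]
```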
The INT8 packing technique can also be applied in combination with the proposed in-DSP multiplexing technique, since the A plus D pre-adder datapath, which is also a 2-stage pipeline, has no effect on the ping-pong datapath of the B input port.
Furthermore, cascading is applicable. Cascading N DSP48E2 units with such settings, combined with INT8 packing, results in a processing element capable of performing matrix multiplication between 4× N activations and N× 2 weights every two Clk_× 1 cycles.
Our proposed DSP48E2 chain achieves an equivalent computing density to the DSP48E2 chain in DPUCZDX8G, while eliminating the need for LUT multiplexing and halving the weight data bandwidth.
§.§ Enhancement: Ring Accumulator
In DPUCZDX8G, each DSP48E2 chain generates two pairs of independent INT18 partial sums every Clk_× 2 cycle. The partial sums are transferred back to the Clk_× 1 domain by serial-to-parallel conversion.
The official implementation, where products generated by DSP48E2 chains in the same group are combined using a LUT adder tree and each combined partial sum is processed by a SIMD=ONE48 DSP48E2 accumulator (which adds an INT26 bias using the DSP pre-adder and produces 29-bit final results), is not optimal.
Dealing with INT18 partial sums, INT26 bias, and INT29 final results leads to underutilization of the 48-bit DSP48E2 accumulator.
Our implementation reduces both the bias precision and the precision of the accumulator to INT24. This adjustment results in only a minor loss in precision but allows for an efficient alignment with the SIMD=TWO24 feature of the DSP48E2.
In our in-DSP multiplexing method, each DSP48E2 chain produces four pairs of INT18 partial sums every four Clk_× 2 cycles.
Rather than transferring the partial sums back to the Clk_× 1 domain and utilizing LUT adders for group combining, we design a ring accumulator composed of only two cascaded DSP48E2s that also operate at Clk_× 2, handling the combining of partial sums, the addition of bias, and the accumulation process altogether, as depicted in Fig.<ref>.
The two groups of four pairs of partial sums sent to the ring accumulator are accumulated sequentially. The accumulator introduces a latency of two cycles, and its outputs are delayed by two registers. The delayed outputs are connected back to the DSP48E2, aligning with the next iteration of accumulation. By adopting this approach, the number of DSP48E2 accumulators is effectively halved.
It's noteworthy that the correction required by the INT8 packing does not demand extra LUTs or CARRY8 logic. We leverage the rounding constant at the W multiplexer inside the DSP48E2 to handle the compensation. It is also worth noting that the delay registers in the ring accumulator loop can be repurposed for the serial-to-parallel conversion to transfer the accumulated results back to Clk_× 1 domain, shown in Fig.<ref>.
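The lane arithmetic assumed by this accumulator can be checked with a few lines of Python (a software model of the SIMD=TWO24 behaviour, mimicking the suppressed carry across the lane boundary by masking; the INT8-packing correction via the W-multiplexer rounding constant is not modelled here):

```python
MASK24 = (1 << 24) - 1

def pack2x24(lo, hi):
    return ((hi & MASK24) << 24) | (lo & MASK24)

def simd_add(x, y):  # lane-wise 24-bit addition, no cross-lane carry
    lo = ((x & MASK24) + (y & MASK24)) & MASK24
    hi = (((x >> 24) & MASK24) + ((y >> 24) & MASK24)) & MASK24
    return (hi << 24) | lo

def unpack(x):
    sext = lambda v: (v ^ (1 << 23)) - (1 << 23)
    return sext(x & MASK24), sext((x >> 24) & MASK24)

acc = pack2x24(0, 0)
for lo, hi in [(100, -7), (-350, 42)]:
    acc = simd_add(acc, pack2x24(lo, hi))
assert unpack(acc) == (-250, 35)  # the two lanes accumulate independently
```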
In contrast to DPUCZDX8G's accumulator, our accumulator generates four pairs of sums instead of two. This difference is due to the burden of doubled weight bandwidth in DPUCZDX8G's implementation, now placed on the output. This is not expected to become a performance bottleneck as the output bandwidth, amortized over time, is generally small in OS dataflow, particularly considering that accumulation cycles are typically large in convolutional neural networks.
§.§ Experiments
DPUCZDX8G is a commercially encrypted IP, making it impossible to access its original module-level breakdown design directly for experimentation.
Although the module hierarchy of the encrypted DPUCZDX8G is explicitly hidden, it can still be obtained from the Vivado resource utilization report.
While the netlists of the encrypted DPUCZDX8G are also explicitly hidden, they remain implicitly visible in the Vivado implementation device view GUI when selecting the physical routing wire, shown in Fig.<ref>. Additionally, we can still use the Vivado find function to count cells or nets with the same naming prefix.
Through these methods and existing public documentation of the DPUCZDX8G, we were able to unravel the intricate design of the Vitis AI DPU. We obtain the resource utilization breakdown of the DPUCZDX8G systolic engine in B1024 configuration, shown in Table.<ref>.
To ensure a fair assessment solely on the systolic architecture, we recreate a one-to-one systolic matrix engine of the DPUCZDX8G B1024 to compare with our proposed implementation under Vivado's out-of-context mode.
Our replication closely adheres to the resource utilization breakdown of the critical systolic array component of B1024, removing non-critical components to simplify the comparison and align with our proposed implementation.
As shown in Table.<ref>, our proposed implementation achieves a substantial 85% and 20% reduction in total LUTs and FFs compared to the official B1024 replicate. We have also managed to halve the number of DSP48E2 accumulators. Moreover, under the same frequency, we have achieved a 20% reduction in power and gained much more timing margin.
§ APPLICABILITY ON SNN ACCELERATOR
In this section, we show that the aforementioned techniques can be seamlessly applied to systolic-array-based SNN accelerators.
FireFly<cit.> represents a state-of-the-art SNN accelerator, employing a typical WS systolic array design. It leverages the wide-bus multiplexers in DSP48E2 units for spike-based computation.
In FireFly's implementation, two sets of synaptic weights are presented on the A B port and the C port, respectively, with the weights on the A B port being concatenated within the DSP48E2, as illustrated in Fig.<ref>. Utilizing the SIMD=FOUR12 mode of the DSP48E2 allows a single DSP48E2 unit to function as a 2× 4 synaptic crossbar.
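Functionally, such a crossbar reduces to spike-masked additions, as in the Python sketch below (our own model; in hardware each of the four lanes is a 12-bit field of the 48-bit SIMD datapath):

```python
import numpy as np

def crossbar_step(acc, spikes, W):
    """acc: (4,) lane sums; spikes: (2,) in {0,1}; W: (2,4) signed weights."""
    return acc + spikes @ W   # each lane adds its weight only where spike == 1

acc = np.zeros(4, dtype=int)
acc = crossbar_step(acc, np.array([1, 0]), np.array([[3, -1, 2, 0],
                                                     [5,  4, -2, 7]]))
assert list(acc) == [3, -1, 2, 0]
```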
Our in-DSP operand prefetching technique can use both the A and B input pipelines along with their cascaded paths for synaptic weight prefetching, shown in Fig.<ref>.
While the use of CLB flip-flops is unavoidable for weights presented on the C port due to the absence of C cascaded paths in DSP48E2, the overall requirement for weight ping-pong CLB registers is still greatly reduced.
As indicated in Table.<ref>, the total flip-flop consumption is reduced by half compared to the original implementation, accompanied by a noticeable drop in power consumption.
§ CONCLUSION
In this paper, we revisit the functionalities of the DSP48E2, exploring its optimization potential and uncovering several underutilized techniques that can enhance the performance of systolic matrix engines on UltraScale series FPGAs. We discuss the in-DSP operand prefetching, in-DSP multiplexing, and ring accumulator techniques, which can be broadly applied to both WS and OS systolic arrays, including those used for neuromorphic SNN computing. We believe that our contributions offer insights for researchers and developers aiming to build DSP-optimized hardware.
§ ACKNOWLEDGEMENT
This work is supported by the Chinese Academy of Sciences Foundation Frontier Scientific Research Program (ZDBS-LY-JSC013).
This work is part of the software-hardware co-design research of the Brain-inspired Cognitive Engine (BrainCog)<cit.><cit.>.
We would also like to express our gratitude to Niansong Zhang from Cornell University for providing valuable suggestions on the paper.
IEEEtran
|
http://arxiv.org/abs/2409.02283v1 | 20240903203640 | Saturation of magnetised plasma turbulence by propagating zonal flows | [
"Richard Nies",
"Felix Parra",
"Michael Barnes",
"Noah Mandell",
"William Dorland"
] | physics.plasm-ph | [
"physics.plasm-ph"
] |
[email protected]
^1Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08543, USA
^2Princeton Plasma Physics Laboratory, Princeton, NJ 08540, USA
^3Rudolf Peierls Centre for Theoretical Physics, University of Oxford, Oxford OX1 3NP, United Kingdom
^4Department of Physics, University of Maryland, College Park, MD 20742, USA
§ ABSTRACT
Strongly driven ion-scale turbulence in tokamak plasmas is shown to be regulated by a new propagating zonal flow mode, the toroidal secondary, which is nonlinearly supported by the turbulence. The mode grows and propagates due to the combined effects of zonal flow shearing and advection by the magnetic drift. Above a threshold in the turbulence level, small-scale toroidal secondary modes become unstable and shear apart turbulent eddies, forcing the turbulence level to remain near the threshold. By including the new zonal flow physics into a theory of turbulence saturation based on the critical balance conjecture, scaling laws for the turbulent heat flux, fluctuation spectra, and zonal flow amplitude are derived and shown to be satisfied in nonlinear gyrokinetic simulations.
Saturation of magnetised plasma turbulence by propagating zonal flows
R. Nies, F. Parra, M. Barnes, N. Mandell, and W. Dorland
September 9, 2024
=====================================================================
Introduction.– Efforts to achieve the high temperatures required for thermonuclear fusion in magnetically confined plasmas are stymied by the large heat flux caused by turbulent mixing. The turbulent fluctuations are driven by micro-instabilities, most notably the ion temperature gradient (ITG) instability <cit.>. The saturation level of such ion-scale modes crucially depends on zonal flows (ZFs) <cit.>, flow bands that are nonlinearly generated by the turbulence <cit.> and can shear apart turbulent eddies. In toroidal plasmas, the linear ZF physics allows for stationary ZFs <cit.> and fast oscillating geodesic acoustic modes (GAMs) <cit.>.
In this Letter, we show that strongly driven tokamak ITG turbulence exhibits a new small-scale propagating ZF mode, the toroidal secondary. We elucidate the physics of the new mode, and demonstrate its relevance to turbulence saturation. We then show how the application of the critical balance conjecture to ITG turbulence <cit.> must be altered to account for the role of the toroidal secondary. The revised theory leads to new scalings of the fluctuation amplitude and length scales, which are found to be in agreement with nonlinear gyrokinetic simulations and with past experimental observations <cit.>.
Tokamak turbulence.– To avoid fast parallel losses, the magnetic field of tokamaks is made to lie on nested toroidal `flux surfaces'. The plasma quickly reaches local thermodynamic equilibrium and is thus approximately Maxwellian, with the density n_s and temperature T_s of each species s uniform on flux surfaces. The resulting radial gradients between the hot dense core and the cold dilute edge drive turbulent fluctuations which cause transport across flux surfaces. These fluctuations are slow compared to the Larmor gyration frequency Ω_s = e_s B /m_s, with the particle mass m_s and charge e_s, and B the magnetic field strength. The fast gyratory motion may thus be averaged over, and the turbulence may be modelled using gyrokinetics <cit.>, which describes rings of charge centered on the gyrocentre position R, shifted from the particle position r by the gyroradius vector.
Moreover, in the tokamak core, the fluctuations in the distribution function f_s are generally small compared to the Maxwellian background F_Ms, i.e., δ f_s = f_s - F_Ms≪ F_Ms, and they are highly anisotropic. Indeed, the typical turbulence length scales parallel to the magnetic field are of the order of the tokamak size, as measured e.g. by the major radius R, while in the perpendicular direction they are on the much smaller gyroradius scale ρ_s = v_Ts/Ω_s ≪ R, with the thermal speed v_Ts=(2 T_s/m_s)^1/2. The fluctuations may thus be described locally in a flux-tube domain, as shown in Fig. <ref>, taking the magnetic field geometry to only vary along the field line on a chosen flux surface denoted by r_0. Here, we assume the tokamak flux surfaces to have circular cross-section, and we choose the radial coordinate to be x=r-r_0, with r the radial distance from the magnetic axis. As x labels flux surfaces, its gradient is parallel to the density and temperature gradients. The binormal coordinate y and the parallel coordinate θ determine the position within a flux-surface: y, like x, is perpendicular to B, whereas θ gives the location along B. Then, the flux-surface average ⟨ A ⟩_yθ of a given quantity A is given by its average in y and θ, the latter including the Jacobian 1/B·∇θ. Finally, we also define the zonal and nonzonal components of A through A^Z≡⟨ A ⟩_y = A - A^NZ.
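For illustration, the flux-surface average and the zonal/nonzonal split defined above reduce to a few lines on a discrete (y, θ) grid; the NumPy sketch below uses hypothetical grid shapes of our own choosing and is not taken from any of the codes used in this Letter.

```python
import numpy as np

def flux_surface_average(A, jac_theta):
    """<A>_{yθ}: plain mean over y, then a θ average weighted by the
    Jacobian 1/(B·∇θ). A has shape (Ny, Ntheta); jac_theta shape (Ntheta,)."""
    A_y = A.mean(axis=0)                       # <A>_y at each θ
    return np.sum(A_y * jac_theta) / np.sum(jac_theta)

def zonal_split(A):
    """Return (A^Z, A^NZ) with A^Z = <A>_y broadcast back onto the grid."""
    A_Z = A.mean(axis=0, keepdims=True)
    return A_Z, A - A_Z
```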
In this Letter, we consider electrostatic and collisionless ion-scale fluctuations, for a single ion species with charge e_i = Z_i e. The gyrokinetic equation may be written in terms of the non-adiabatic part of the perturbed ion distribution function h_i = δ f_i + F_Mi e_i φ/T_i, using the particle energy and magnetic moment as velocity-space coordinates,
∂_t ( h_i- F_Mi e_i φ /T_i ) + (v_∥ + ṽ_M) ·∇ h_i
+ v_E·∇( F_Mi + h_i) = 0,
where the fluctuating electrostatic potential φ is self-consistently determined by quasineutrality
T_i/e_i n_i∫d^3 v h_i = τ( φ - ⟨φ⟩_yθ) + φ,
with τ = T_i/(Z_i T_e) and b̂ = B/B. The overlines indicate gyro-averages, with the average of φ taken at fixed gyrocentre position R, and that of h_i at fixed real-space position r. The electrons were assumed in (<ref>) to respond adiabatically only to potential variations within flux surfaces φ-⟨φ⟩_yθ, as they cannot move radially due to their small gyroradii. The gyrokinetic equation includes advection by the magnetic drift ṽ_M and by the gyro-averaged E×B-drift v_E = b̂×∇φ/B, with the latter determining the time-and-volume-averaged radial heat flux ⟨ Q ⟩_txyθ, with
Q = 1/⟨ |∇ x| ⟩_θ ∫ d^3 v (m_i v^2/2) h_i v_E ·∇ x ,
which is of primary interest to ascertain the level of transport caused by turbulent fluctuations.
Propagating zonal flows.– In gyrokinetic simulations of strongly driven ITG turbulence, ZF activity is apparent at multiple scales, see Fig.<ref>, showing large-scale stationary ZFs and small-scale propagating ZFs. The corresponding Fourier spectrum in Fig.<ref> exhibits stationary ZFs and GAMs at large radial scales k_x ρ_i ∼ 0.1, and a new propagating ZF mode, the toroidal secondary. It is predominant at smaller scales k_x ρ_i ∼ 0.5, and has a frequency ω∼ k_x v_Mx set by the radial magnetic drift velocity v_Mx≡ρ_i v_Ti/R, defined such that
ṽ_Mx = v_Mx sinθ (v_∥^2 + v_⊥^2/2)/v_Ti^2
in a large aspect-ratio tokamak of circular cross-section.
Unless specified otherwise, the gyrokinetic simulations presented in this Letter are performed using <cit.>. The simulations model a flux surface at half-radius r_0=a/2 of the Cyclone Base Case, a tokamak with flux surfaces of circular cross-section and inverse aspect ratio R/a=2.8, where a is the tokamak minor radius. The safety factor q gives the magnetic field line pitch and sets the connection length L_∥ = q R, with 2π L_∥ the distance along the field line corresponding to one poloidal turn. The safety factor is varied between simulations, as is the ion temperature gradient R/L_T≡ -R dln T_i/dr, while the density gradient and other quantities are held fixed at the usual Cyclone Base Case values. These are listed in the Supplementary Material, alongside the numerical parameters used in all simulations.
Compared to the small-scale, fast-oscillating toroidal secondary, the background turbulence has long radial wavelengths and evolves slowly in time, as will be shown explicitly below. The physics of the toroidal secondary may therefore be understood by considering a secondary mode growing and propagating over a background stationary (∂_t=0) streamer (k_x=0) mode, referred to as the primary mode and taken to be representative of the turbulence. This secondary model was originally considered by <cit.>, where the magnetic drift and parallel streaming in (<ref>) were ignored, leading to a purely growing mode. As shown in Fig. <ref>, gyrokinetic simulations of the secondary additionally exhibit the small-scale oscillating ZFs observed in the fully nonlinear gyrokinetic simulations, and a purely growing mode is observed only at large primary drive or long ZF wavelengths. We note that dispersion relations for the various modes may be derived analytically using the secondary formalism. These will be presented in future work, as we here focus on the phenomenology and physical mechanism of the toroidal secondary, and its effect on the turbulence.
To understand the mechanism for ZF drive and propagation, let us consider the vorticity equation at long perpendicular wavelengths,
n_i ∂_t ⟨ ρ_i ∇ x ∂_x v_E^Z ⟩_θ = v_Ti ∂_x ⟨ ∫ d^3 v ( (v_⊥^2/2Ω_i^2) ∇ x ·∇ v_E ·∇ h_i + ṽ_Mx h_i ) ⟩_yθ ,
which is derived from the time-derivative and flux-surface average of (<ref>). The vorticity equation describes the evolution of the ZF velocity v_E^Z=∇ x∂_x ⟨φ⟩_y/B due to nonlinear Reynolds and diamagnetic stresses, and due to the ṽ_Mx-induced Stringer-Winsor (SW) force <cit.>, which is dominant for the toroidal secondary. The SW force results from up-down asymmetries in (P+e_i n_i φ)^Z, with the fluctuating pressure
P ≡∫d^3 v m_i/2( v_∥^2 + v_⊥^2/2) δ f_i,
as the magnetic drift points radially outward above the tokamak midplane and inward below it, reflected in the sinθ of (<ref>) and sketched in Fig. <ref>.
As illustrated in Fig. <ref>, the (P+e_i n_i φ)^Z asymmetry can be generated by the ZF itself, through the compression induced by the varying ZF velocity v_E^Z∝ 1/B between the tokamak outboard and inboard sides. In the case of the toroidal secondary, up-down asymmetry in P^Z is also generated nonlinearly, due to the radial magnetic drift (<ref>) advecting eddies sheared by the zonal flow, as explained in Fig. <ref>. The (P+e_i n_i φ)^Z asymmetry is generated on flux surfaces neighbouring the ZF perturbation, causing the mode to propagate and also grow, provided the up-down asymmetry is sufficiently large. At long radial wavelengths, the ZF inertia in (<ref>) (the term with ∂_t) becomes negligible, such that the two mechanisms of (P+e_i n_i φ)^Z asymmetry generation described in Fig. <ref> must cancel each other. Toroidal secondary modes therefore oscillate more slowly than GAMs (see Fig. <ref>), for which the compressibility-induced SW force is balanced by the ZF inertia. We note that the toroidal secondary mechanism is distinct from that of <cit.>, who consider the pressure asymmetry generation due to a combination of ZF shearing and the effect of magnetic shear ŝ.
The physical picture outlined above did not include the effects of parallel streaming, which acts to short-circuit the up-down (P+e_i n_i φ)^Z asymmetries. Therefore, the toroidal secondary is only found at short radial wavelengths, where ω ∼ k_x v_Mx ≳ v_∥ b̂·∇ ∼ v_Ti/qR. At longer radial wavelengths, the ZFs do not oscillate, as seen in the gyrokinetic simulations of the secondary (Fig. <ref>) or of the nonlinear turbulence (Fig. <ref>). Furthermore, to avoid kinetic damping by the magnetic drift, the toroidal secondary mode requires a sufficiently large primary drive v_Ex≳ v_Mx to become unstable, as shown in Fig. <ref>. Finally, the toroidal secondary is stabilised by finite Larmor radius effects, such that the growth rate peaks at k_x ρ_i ∼ 0.5 in Fig. <ref>, explaining the prominent ZFs at this scale in fully nonlinear simulations (Fig. <ref>).
Zonal flows and turbulence saturation.– To ascertain the role of the various zonal flow modes in setting the turbulence saturation level, we consider the energy transfer in k_x caused by ZFs at different scales. Let us define the low-pass-filtered distribution
h_i, K_x≡∫_-K_x^K_xdk_x ĥ_i(k_x) e^i k_x X,
where the Fourier modes ĥ_i(k_x) are normalised such that h_i = lim_K_x →∞ h_i, K_x. Then, the low-pass filtered gyrokinetic free-energy
ℰ_K_x ≡ ∑_s T_s ⟨ ∫ d^3 v (δ f_s,K_x)^2 / (2 F_Ms) ⟩_xyθ
may be shown to evolve as
∂_t ℰ_K_x + 𝒯_K_x = ℐ_K_x - 𝒟_K_x,
with the energy transfer rate
𝒯_K_x = ⟨ T_i ∫ d^3 v ( h_i,K_x / F_Mi ) v_E ·∇ h_i ⟩_xyθ ,
the injection rate
ℐ_K_x = L_T^-1 ⟨ ∫ d^3 v (m_i v^2/2) h_i,K_x v_E ·∇ x ⟩_xyθ ,
and a dissipation rate 𝒟_K_x, e.g. from collisions or numerical dissipation.
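In practice, the filtered distribution is obtained by zeroing radial Fourier modes; a minimal sketch of this step is given below, with array shapes and normalisations as illustrative assumptions.

```python
import numpy as np

def lowpass_kx(h, dx, Kx):
    """Low-pass filter in the radial direction (axis 0): zero all Fourier
    modes with |kx| > Kx, as in the definition of h_{i,Kx} above."""
    kx = 2.0 * np.pi * np.fft.fftfreq(h.shape[0], d=dx)
    h_hat = np.fft.fft(h, axis=0)
    h_hat[np.abs(kx) > Kx] = 0.0
    return np.fft.ifft(h_hat, axis=0).real

# The transfer rate T_Kx then follows by averaging
# T_i * (h_lowpass / F_Mi) * (v_E · ∇h), with the unfiltered h in the gradient.
```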
In Fig. <ref>, the small-scale ZFs are shown to play a crucial role in transferring energy from large to small radial scales, as they are the largest contributors to 𝒯_K_x at radial wavenumbers larger than the toroidal secondary scale K_x ρ_i ∼ 0.5 - 0.7; for larger K_x the nonzonal contribution is dominant. The contribution to 𝒯_K_x from the large-scale ZFs is subdominant, though we note that numerical experiments artificially removing the ZFs at small k_x were found to alter the turbulence saturation level.
Turbulence scaling laws.– We now consider how the toroidal secondary modes may be incorporated in a theory of turbulence saturation. Strongly driven ITG turbulence in tokamaks has previously been modelled <cit.> using the critical balance conjecture <cit.>, which posits the nonlinear transfer rate ω_NL and parallel propagation rate ω_∥ of the saturated turbulence to be of the same order: at every scale, ω_NL∼ v_Ex/l_x ∼ω_∥∼ v_Ti / l_∥, with the rates estimated from (<ref>). Here, l_x, l_y, l_∥ are the radial, binormal, and parallel length scales of the turbulence. At the turbulence outer scale, denoted by `o' superscripts, ω_NL must also balance the energy injection rate ω_⋆^T, such that ω_NL^o ∼ω_⋆^T,o∼ (v_Ti/L_T) ρ_i/l_y^o. Furthermore, the parallel outer length scale in a tokamak is assumed to be the connection length l_∥^o ∼ L_∥ = q R.
In the original theory of <cit.>, the turbulent eddies were assumed to be isotropic in the directions perpendicular to the magnetic field, i.e., l_x ∼ l_y. In fact, the turbulence is anisotropic at the outer scale due to the small-scale ZFs, with the turbulence level regulated to be near the marginal stability threshold of the toroidal secondary, v_Ex∼ v_Mx (see Fig. <ref>). This corresponds to a grand critical balance at the outer scale, ω_NL^o ∼ω_∥^o ∼ω_⋆^T,o∼ω_Mx^o with ω_Mx∼ v_Mx/l_x, in agreement with previously unexplained experimental observations <cit.>.
Grand critical balance leads to the following scalings for the turbulence length scales, amplitude, and heat flux at the outer scale:
l_y^o/ρ_i ∼ q R/L_T, l_x^o/ρ_i ∼ q, (e_i φ^o/T_i)(R/ρ_i) ∼ q R/L_T, ⟨ Q ⟩_txyθ/Q_gB ∼ q R/L_T,
with the gyro-Bohm heat flux Q_gB = n_i T_i v_Ti (ρ_i/R)^2. Notably, the heat flux is predicted to scale only linearly with the background temperature gradient, instead of the cubic scaling predicted for perpendicularly isotropic eddies <cit.>[The study by <cit.> also compared their scalings with gyrokinetic simulations of the Cyclone Base Case. However, due to computational limitations, the range of temperature gradients was limited to R/L_T ≤ 17.5, where a cubic fit is a good approximation to an offset linear dependence, see Fig. <ref>.]. This revised heat flux scaling is satisfied in gyrokinetic simulations, as shown in Fig. <ref>. Furthermore, we verify the grand critical balance scalings for the turbulence amplitude and outer scale by considering the fluctuation spectra, as shown in Figs. <ref> and <ref>.
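The outer-scale scalings above are straightforward to tabulate; the snippet below evaluates them with order-unity prefactors dropped, using Cyclone-Base-Case-like parameters from the simulations for illustration.

```python
def outer_scale_estimates(q, R_over_LT):
    """Grand-critical-balance outer-scale estimates (prefactors of order
    unity are dropped, so these are scalings, not calibrated predictions)."""
    return {
        "l_y^o / rho_i": q * R_over_LT,
        "l_x^o / rho_i": q,
        "(e_i phi^o / T_i) (R / rho_i)": q * R_over_LT,
        "<Q> / Q_gB": q * R_over_LT,
        "E^ZF / T_i  [units of (rho_i/R)^2]": q * R_over_LT ** 2,
    }

print(outer_scale_estimates(q=1.4, R_over_LT=13.9))
```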
Finally, we note that, as q is increased, there is a clear scale separation between the turbulence outer scale k_x^o ρ_i ∼ 1/q, and the toroidal secondary scale k_x ρ_i ∼ 0.5, where the turbulence spectrum exhibits only a small local peak, see Fig. <ref>. Therefore, the grand critical balance scalings (<ref>) also justify the secondary model considered in Figs. <ref> and <ref> to describe the toroidal secondary mode, as the turbulence evolves over long spatio-temporal scales k_x^o ρ_i ∼ 1/q and ω_NL^o ∼ v_Ti/qR compared to the toroidal secondary's high wavenumber k_x ρ_i ∼ 1 and frequency ω^ZF∼ v_Ti/R. Due to the fast ZF oscillation, the shearing of turbulent eddies is not coherent, and must be modelled diffusively instead. To affect turbulence saturation, the ZFs must cause diffusion over an eddy length scale l_y^o in a nonlinear time, i.e., (l_y^o)^2 ∼ D/ω_NL^o, where D∼ (v_E^Z)^2 / ω^ZF is the diffusivity due to ZFs. Using (<ref>), the zonal flow energy
E^ZF ≡ e_i^2/(2 n_i T_i) ⟨ ∫ d^3 v φ^Z ( φ^Z - φ̿^Z ) F_M ⟩_txθ
is thus predicted to scale as
E^ZF/T_i≈⟨(v_E^Z/v_Ti)^2 ⟩_txθ∼ q ( R/L_T)^2 ( ρ_i/R)^2.
Here, the quantity φ̿^Z in (<ref>) denotes two successive gyro-averages, first at fixed R and then at fixed r; these gyro-averages were approximated in the long-wavelength limit k_⊥ρ_i ≪ 1 in (<ref>). As shown in Fig. <ref>, the predicted ZF scaling (<ref>) is well satisfied by the small-scale toroidal secondary modes in gyrokinetic simulations.
Discussion.– In this Letter, we have presented the toroidal secondary, a new mode which leads to small-scale propagating zonal flows in nonlinear gyrokinetic simulations of ITG turbulence (Fig. <ref>). The toroidal secondary regulates the turbulence saturation level to be near the threshold turbulence amplitude v_Ex∼ v_Mx for which the toroidal secondary mode becomes unstable (Fig. <ref>). As a consequence, the turbulence follows the grand critical balance previously observed experimentally <cit.>, leading to turbulence scaling laws (<ref>) satisfied in gyrokinetic simulations (Fig. <ref>).
We note that toroidal secondary modes resulting from electron-scale turbulence could differ considerably in character from those presented in this Letter, as the ZF inertia will play a more important role. For ion scale modes, the ZF inertia does not contribute significantly to the toroidal secondary mechanism due to the modified adiabatic electron response in (<ref>), which leads to a small ZF inertia at long wavelengths, see (<ref>).
The toroidal secondary modes shown in Fig. <ref> are reminiscent of the avalanches previously observed in gyrokinetic simulations <cit.>, whose physical mechanism remains debated. Recent efforts <cit.> to model the avalanches have considered the coupling of zonal flows with both the turbulence and with toroidal geometric effects, though nonlinear fluxes were neglected, precluding the toroidal secondary modes presented here (see Fig. <ref>). From Fig. <ref>, the toroidal secondary is seen to have a typical propagation velocity ∼ 2 v_Tiρ_i/R, comparable to previously reported avalanche propagation speeds <cit.>. Furthermore, the turbulence amplitude threshold required to destabilise the toroidal secondary could explain the sandpile-like behaviour associated with avalanches.
This work was supported by U.S. DOE DE-AC02-09CH11466 and DE-FG02-93ER54197, by Scientific Discovery Through Advanced Computing (SciDAC) Grant No. UTA18000275, and by the Engineering and Physical Sciences Research Council (EPSRC) [EP/R034737/1]. The simulations presented in this article were performed on computational resources managed and supported by Princeton Research Computing, a consortium of groups including the Princeton Institute for Computational Science and Engineering (PICSciE) and the Office of Information Technology's High Performance Computing Center and Visualization Laboratory at Princeton University.
§ GYROKINETIC SIMULATION DETAILS
All simulations presented in this Letter modelled a flux-tube extending for a single poloidal turn in the Cyclone Base Case tokamak. In addition to the parameters given in the main text, a magnetic shear of ŝ = dln q/dln r = 0.8 was used, and the density gradient was held fixed at R/L_n ≡ -R dln n_i/dr = 2.2. A hydrogen plasma with Z_i=1 was considered, and the electron to ion temperature ratio was set to τ=1.
The numerical parameters employed in the <cit.> gyrokinetic simulations are provided in Table <ref>. The quantities L_x/ρ_i and L_y/ρ_i denote the box size in the radial and binormal directions, respectively, and may be related to the minimum wavenumber through k_min = 2π/L. Furthermore, N_θ gives the parallel resolution, while N_x and N_y denote the number of Fourier modes in the radial and binormal directions. The velocity resolution is given by N_μ and N_v_∥ for the magnetic moment and the parallel velocity, respectively, with v_∥/v_Ti∈ [-3.5,3.5] for all simulations. Finally, D_h denotes the hyperdissipation parameter.
As shown in Table <ref>, the simulations using the code <cit.> could be run at fixed numerical parameters as κ and q were varied, as the computational cost is small owing to the use of GPUs. In the κ-scan, temperature gradient values of κ∈[6.9, 13.9, 20.9, 27.8, 34.7, 41.7, 48.6] were considered, at fixed safety factor q=1.4, while for the q-scan, κ=13.9 and q ∈ [1.4, 2.8, 4.2, 5.6, 7.0]. The velocity resolution is here given by the number of Hermite and Laguerre moments N_h and N_l, respectively.
|
http://arxiv.org/abs/2409.02429v1 | 20240904041658 | Training-free Color-Style Disentanglement for Constrained Text-to-Image Synthesis | [
"Aishwarya Agarwal",
"Srikrishna Karanam",
"Balaji Vasan Srinivasan"
] | cs.CV | [
"cs.CV"
] |
Training-free Color-Style Disentanglement for Constrained Text-to-Image Synthesis
Aishwarya Agarwal, Srikrishna Karanam, and Balaji Vasan Srinivasan
Adobe Research, Bengaluru, India
{aishagar,skaranam,balsrini}@adobe.com
September 9, 2024
===================================================================================================================================================
Figure: We propose the first training-free approach to allow disentangled conditioning of text-to-image diffusion models on color and style attributes from reference images.
§ ABSTRACT
We consider the problem of independently, in a disentangled fashion, controlling the outputs of text-to-image diffusion models with color and style attributes of a user-supplied reference image. We present the first training-free, test-time-only method to disentangle and condition text-to-image models on color and style attributes from a reference image. To realize this, we propose two key innovations. Our first contribution is to transform the latent codes at inference time using feature transformations that make the covariance matrix of the current generation follow that of the reference image, helping meaningfully transfer color. Next, we observe that there exists a natural disentanglement between color and style in the LAB image space, which we exploit to transform the self-attention feature maps of the image being generated with respect to those of the reference computed from its L channel. Both these operations happen purely at test time and can be done independently or merged. This results in a flexible method where color and style information can come from the same reference image or two different sources, and a new generation can seamlessly fuse them in either scenario (see Figure <ref>).
§ INTRODUCTION
We consider the problem of conditioning the text-to-image class of diffusion models <cit.> on color and style attributes extracted from a user-provided reference image. In particular, we want to independently control outputs of text-to-image models with either or both of these attributes, necessitating disentangled color and style conditioning. Furthermore, we seek to do so in a completely training-free and test-time-only fashion. This is practically an important problem since (a) such disentangled control means color and style information can now come from two different sources, and a new generation conditioned on them can be fused to produce an image with color from the first source and style from the second source, and (b) a test-time and training-free solution means one does not have to keep training models each time reference images change.
While there has been much work in customizing text-to-image generation <cit.> with reference images, most of these techniques lack explicit control over which attributes from the reference are to be reflected in the synthesized images. Further, while there have been some attempts at training-free customization approaches, they all focus on a specific aspect, e.g., appearance transfer <cit.> or style transfer <cit.> or ensuring subject consistency <cit.>. None of these training-free methods are able to achieve disentangled attribute transfer that we seek to achieve in our work. Next, training-based methods such as MATTE <cit.> proposed a way to allow attribute-conditioned image synthesis but it needed (a) optimizing textual tokens that may take hours depending on compute and reference image, and (b) a separate custom loss function to achieve disentanglement between color and style. Some other training-based methods such as ProSpect <cit.>, while doing multi-attribute conditioning, are also not able to disentangle color and style despite training tokens. Consequently, we ask, and answer affirmatively, two key questions- (a) can we achieve test-time-only conditioning of text-to-image models with color and style attributes from reference images? and (b) can we do (a) with disentangled control of color and style?
We begin with a brief discussion on why recent training-free methods such as <cit.> do not achieve disentangled attribute transfer. This method proposed to capture the customized concept by transferring keys and values computed from the reference image. Given observations from prior work <cit.> that the color attribute is captured during the initial denoising stages and style in the later steps, a natural way to repurpose <cit.> for our task is to restrict these key-value operations to specific timesteps depending on the attribute we seek to transfer. We show some results with this approach in Figure <ref>. As can be noted from these results, the attribute transfers are far from desirable. This is because color transfer using the key-value operations of <cit.> is limited by the quality of semantic correspondences between the reference image and the current generation. On the other hand, transferring style by simply limiting key-value copy to the last denoising timesteps is insufficient since by then features would have sufficiently entangled color and style information. Consequently, it is critical to disentangle these attributes in the feature space in a principled fashion to be able to achieve multi-attribute transfer at test time.
To address the aforementioned issues, we propose the first training-free method to enable disentangled control over color and style attributes from a reference image when generating new images. To extract and transfer the color attribute from a reference image, we propose to modify the latent codes during denoising using a novel correspondence-aware recoloring transformation. Our key intuition is that there naturally exist color clusters in the reference image, and ensuring that regions with the dominant color populations from the reference correspond to regions in the image being generated will lead to meaningful color transfer. Given a color clustering of the reference image, we realize this by picking a certain denoising timestep, decoding the latent code, clustering the image, establishing correspondences between the two sets of clusters, and performing a correspondence-aware whitening and recoloring feature transformation on the latent codes. At the end of the denoising process, the final latent code, when decoded, will give a new image with colors from the reference image. For instance, see the first row/first column in Figure <ref> where the bird follows the colors of the shirt. Next, to transfer style and disentangle it from the color attribute, we propose two innovations. First, we observe that there exists an inherent disentanglement between these two attributes in the LAB color space, where the L channel contains content and style information and the A/B channels the color information <cit.>. See Fig <ref> where we also show this qualitatively. In each row, we take the L channel from the image shown in the first column (after converting the RGB image shown into LAB), the A/B channels from the second column, and merge them, giving the result in the third column. In the first row, we see the style from the first column and the colors (blue) from the second column get captured in the resulting bird image. Similarly, in the third row, despite the second column having certain textured patterns, only the color (yellow) gets transferred to the resulting image. Next, noting that style mostly gets captured during the later denoising steps, we propose a time-step-constrained feature manipulation strategy. We do this by first generating an image with the baseline model and the desired prompt, and storing the A/B channels. We then transfer style using the L channel from the reference image by aligning the self-attention key-value feature maps of the reference with those of the current generation, copying the A/B channels from the above operation, and obtaining the final result. See some results with our method in Figure <ref> where, in each case, our method is able to respect both color and style references in the final outputs. For instance, the first row has an image of “a bird" following the blue/yellow colors from the color reference and origami style, the second row has the bird in blue/yellow colors and watercolor style, and so on.
To summarize, our key contributions in this work are:
* We present the first training-free method to disentangle and control text-to-image diffusion models on color and style attributes from a reference image.
* We propose a new time-step-constrained latent code recoloring transformation that aligns the covariance matrices of a text-to-image model output with those of a reference image, helping transfer reference colors to outputs of text-to-image models.
* We notice that the L channel in the LAB space has an inherent separation between style and color and propose a new time-step-constrained self-attention key and value feature manipulation algorithm to transfer style from a reference image.
§ RELATED WORK
With the wide adoption of diffusion models for text-to-image synthesis <cit.>, much recent effort has been expended in controlling the outputs of these models. These efforts largely focus on learning adapters <cit.> given baseline text-to-image models, finetuning parameters of the base model<cit.>, learning new tokens in the vocabulary of these models <cit.>, introducing dedicated personalization encoders <cit.>, utilising LoRA <cit.> to encapsulate target information <cit.>, and inpainting-based approaches <cit.>. With the exception of MATTE <cit.>, none of these methods are able to achieve disentangled transfer between color and style of a reference image. However, MATTE <cit.> needs custom loss functions and hours of training per reference image, which is practically infeasible.
On the other hand, there is a new line of recent work that involves training-free approaches to customization <cit.>. However, these methods focus on one specific aspect (e.g., appearance transfer, style transfer, or subject consistency) and are not able to provide independent disentangled control over color and style attributes. Similarly, the non-diffusion-based conventional style transfer methods AesPA-Net<cit.>, StyTR2<cit.>, and ArtFlow<cit.> target disentanglement of content and style, but do not treat style and color independently. We address these gaps in both training-based and training-free methods by proposing the first training-free method to provide disentangled control for text-to-image models over color and style attributes from reference images. Moreover, our proposed approach is not specific to any dataset, and can seamlessly adapt to various styles/content due to the base model's capability.
§ APPROACH
We start with a brief review of latent diffusion models (LDMs). LDMs comprise an encoder-decoder pair and a separately trained denoising diffusion probabilistic model (DDPM). Leveraging an encoder 𝐄, LDMs translate an image 𝐈 into a latent code 𝐳, perform iterative denoising, and subsequently convert the predicted latent codes back to the pixel space via the decoder 𝐃. The training objective of the DDPM ϵ_θ is the following: 𝔼_𝐳∼𝐄(𝐈),p,ϵ∼𝒩(0,1),t[ ‖ ϵ - ϵ_θ^(t)(𝐳_t, 𝐋(p)) ‖^2 ]
where p denotes any external conditioning factor e.g., a text prompt, which is typically encoded using text encoder 𝐋 (e.g., CLIP <cit.>, T5 <cit.>). At any timestep t of the denoising process, given the current latent code z_t, the goal is to produce z_t-1. The first step here is to predict the noise ϵ_θ^(t)(𝐳_t, 𝐋(p)). Given z_t and ϵ_θ^(t)(𝐳_t, 𝐋(p)), deterministic DDIM <cit.> sampling gives z_t-1 as
z_t-1 = √(α̅_t-1) z_0 + x̂_t
where z_0 (the denoised prediction) is computed as z_0 = ( z_t - √(1-α̅_t) ϵ_θ^(t) ) / √(α̅_t), and x̂_t (the direction pointing to x_t) as x̂_t = √(1-α̅_t-1-σ_t^2) ϵ_θ^(t).
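For concreteness, the deterministic DDIM update above can be written as the short sketch below; using the cumulative products ᾱ throughout and setting σ_t = 0 is our interpretation of the notation, not code from any released implementation.

```python
import torch

def ddim_step(z_t, eps, abar_t, abar_prev, sigma_t=0.0):
    """One deterministic DDIM step: returns (z_{t-1}, z_0)."""
    z0 = (z_t - (1.0 - abar_t) ** 0.5 * eps) / abar_t ** 0.5    # denoised prediction
    dir_xt = (1.0 - abar_prev - sigma_t ** 2) ** 0.5 * eps      # direction to x_t
    return abar_prev ** 0.5 * z0 + dir_xt, z0
```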
§.§ Disentangled Color and Style Conditioning
As discussed in Section <ref>, we seek to provide text-to-image models with disentangled control over and attributes extracted from user-supplied reference images. To do this, we propose a new training-free algorithm that facilitates any of color-only, style-only, or both color-style transfer from reference images. We achieve this with a two-branch architecture (see Figure <ref>), one each for color and style. As we discuss later, outputs from these branches can be used independently (for single attribute transfer) or can be merged seamlessly. This merging can happen with color and style from the same source (one reference image) or color from one image and style from another image (see Fig <ref> again where we show both single-source and two-source results).
Our closest training-free baselines <cit.> inject key and value feature maps, computed from the reference image, into the self-attention blocks of the U-Net. However, this only helps transfer the overall appearance/identity and cannot control color and style. To fix this gap, we propose two ideas. First, given color information from a reference image, we propose to apply recoloring transformations on the intermediate latent codes which, when decoded, can give an image following reference colors. Next, we notice that (a) style information gets captured only during the later parts of the denoising process and (b) style is captured in the L channel when an image is converted from the RGB to the LAB space (see Section <ref> and Fig <ref> again). We exploit these two observations to propose a timestep-constrained key and value feature manipulation strategy in the L space where features during early denoising steps are retained as is from the baseline generation and those of later steps are carried over from the reference image. We next explain the details of these ideas.
Color conditioning.
Our proposed method is visually summarized in the color branch of Figure <ref>. We first do a DDIM inversion step on the reference image to obtain the corresponding latent z_t^ref. Once the denoising process begins with a user-specified text prompt, the DDIM sampling computes, for a latent z_t at timestep t, the noise prediction ϵ_θ^(t), followed by the z_0 prediction for both the reference image and the new generation. We then decode the latent code z_0 using the decoder 𝒟(.) (see Figure <ref> for an example where we visualize these for several intermediate denoising timesteps). Given a timestep t, we first perform a K-Means clustering operation on both the decoded image I_0^(t)_gen and the reference image to obtain sets of K color clusters 𝒞_gen and 𝒞_ref respectively. Note that one can mask the decoded latent with cross-attention maps to localize the object of interest in both the new generation and the reference image. Given the cluster sets, we next establish correspondences between them based on their proportion, giving a set of masks ℳ_ref and ℳ_gen for each set. The idea here is that a color cluster with the largest membership in the reference image indicates the dominant color that we seek to transfer to the current generation.
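A minimal sketch of the clustering and proportion-based correspondence step is given below; the choice of K, the RGB clustering space, and the fixed random seed are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def ranked_color_masks(img, k=4):
    """img: (H, W, 3) floats in [0, 1]. Cluster pixel colors with K-Means and
    return k boolean masks ordered by cluster population (dominant first)."""
    labels = KMeans(n_clusters=k, n_init=4, random_state=0).fit_predict(img.reshape(-1, 3))
    order = np.argsort(-np.bincount(labels, minlength=k))
    return [(labels == c).reshape(img.shape[:2]) for c in order]

# Correspondence by proportion: masks_gen[i] in the generation is recolored
# using the pixels under masks_ref[i] in the reference, for i = 0, ..., k-1.
```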
Given the masks above, we achieve this by applying a mask-aware recoloring transformation (RT) on the latent code z_0^(t)_gen as:
z_0^(t)_gen = ∑_i=1^K { ( 1 - m_gen^i ) z_0^(t)_gen + m_gen^i [ RT( m_gen^i z_0^(t)_gen, m_ref^i z_0^(t)_ref ) ] }, with m_gen^i ∈ ℳ_gen and m_ref^i ∈ ℳ_ref.
Here, we iterate over all the K clusters and apply the recoloring transform separately to regions determined by masks corresponding to each cluster. In each iteration i, we use the corresponding mask m_gen^i to constrain the region where color transfer happens, and similarly m_ref^i determines the reference pixels from which the colors are picked. This way, we ensure that in any iteration i, pixels outside the region of interest (determined by m_gen^i) remain untouched. The recoloring transformation <cit.> itself is a two-step process. We first whiten the latent codes so that their covariance matrix becomes the identity. We then apply a coloring transformation so that the covariance matrix of the latent codes matches that of the reference image (z_0^(t)_ref). To ensure this operation strictly transfers color only and not style, we use observations from prior work <cit.> that note color is captured during the early parts of the denoising process. Consequently, we restrict Eq <ref> to only a subset of the initial denoising timesteps t_start^c>t>t_end^c.
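The two-step transform can be sketched as below for one cluster pair, with the masked pixels arranged as columns of a (channels × pixels) matrix; the eigen-decomposition route and the small regularizer are our implementation choices, not specifics from the paper.

```python
import numpy as np

def recoloring_transform(z_gen, z_ref, eps=1e-5):
    """RT(·,·): whiten the masked generation values so their covariance is the
    identity, then re-color them to match the covariance (and mean) of the
    corresponding masked reference values. Inputs have shape (C, N)."""
    def moments(z):
        mu = z.mean(axis=1, keepdims=True)
        zc = z - mu
        cov = zc @ zc.T / max(z.shape[1] - 1, 1) + eps * np.eye(z.shape[0])
        evals, evecs = np.linalg.eigh(cov)
        return mu, zc, evecs, evals
    _, zc_g, Vg, wg = moments(z_gen)
    mu_r, _, Vr, wr = moments(z_ref)
    whitened = Vg @ np.diag(wg ** -0.5) @ Vg.T @ zc_g   # identity covariance
    return Vr @ np.diag(wr ** 0.5) @ Vr.T @ whitened + mu_r
```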
The updated z_0^(t)_gen obtained in Equation <ref> is then used along with the predicted noise ϵ_θ^t to compute z_t-1^gen (using Equation <ref>), which is fed into the next denoising step, eventually generating an image that follows the colors of the reference image.
We show an example demonstrating the progression of decoded latents I_0^(t)_gen across denoising timesteps in Figure <ref>. Here, given the prompt , one can observe that the model starts forming some colors early on ( here). We transform the intermediate latents to manipulate these colors using the steps described above and obtain a bird following the blue cat in the color reference image shown in the figure.
Style conditioning. Given that high-frequency details like and show up in the later denoising timesteps <cit.>, we begin by injecting key and value feature maps from the reference image into the current generation for only the last few (t > t^s_start) denoising timesteps. However, an issue with this approach is that the feature maps in the later timesteps will have color information as well (since color would have been captured in the beginning), leading to entanglement of color and style. See the “Resulting image (style repurposed)" column in Fig <ref> for results with this approach: clearly, both style and colors are getting transferred in this case, e.g., the bird has both the watercolor style and the light pink colors from the “reference" image. To be able to disentangle style from color and allow independent control of the text-to-image diffusion model, our key insight is that there exists an inherent separation between style and color in the LAB space. The L channel captures the content and style, whereas the AB channels have color information (recall our discussion of Figure <ref> in Section <ref>). Since diffusion models are generally trained to operate in the RGB space, we take the grayscale version of the reference as an approximation to the L channel for all the next steps below (see Fig <ref> again for a visual summary). We begin by DDIM inverting the reference to get the latent z_t^ref. Given any user-specified text prompt (e.g., ), for each denoising timestep t < t^s_start, we denoise the input latent codes as in a baseline text-to-image model, but once we hit t > t^s_start, we start injecting the self-attention key K and value V feature maps from the reference reconstruction after converting it to grayscale and DDIM inverting it as noted above. Formally, this modified self-attention feature map computation at any denoising timestep t and layer l of the U-Net can be expressed as:
f̂_t^l = 1_{0 ≤ t < t^s_start} softmax( Q_gen^l (K_gen^l)^T / √(d_k) ) V_gen^l + 1_{t > t^s_start} softmax( Q_gen^l (K_ref^l)^T / √(d_k) ) V_ref^l
where 1 denotes the indicator function, and Q_gen^l, K_gen^l, V_gen^l and Q_ref^l, K_ref^l, V_ref^l denote the l^th U-Net layer self-attention queries, keys, and values for the generation and reference respectively. We then take the final latent code, decode it to get an image, convert it to the LAB space, retain the L channel, and get the AB channels from the corresponding color branch of Figure <ref>.
Note that t_end^c (=T/5) is strictly less than t_start^s (=4T/5) throughout (i.e., the timestep intervals over which color and style conditioning are applied do not overlap), where T denotes the total number of denoising steps.
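The final merge of the two branches in LAB space amounts to a channel swap; a minimal sketch (using scikit-image for the color conversions) is given below.

```python
import numpy as np
from skimage.color import lab2rgb, rgb2lab

def merge_lab(style_rgb, color_rgb):
    """Keep L (content + style) from the style branch and A/B (color) from
    the color branch. Both inputs: (H, W, 3) floats in [0, 1], same size."""
    lab = rgb2lab(style_rgb)
    lab[..., 1:] = rgb2lab(color_rgb)[..., 1:]
    return np.clip(lab2rgb(lab), 0.0, 1.0)
```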
§ RESULTS
Qualitative Evaluation. In addition to the results in Figure <ref>, we show more results with our method in Figure <ref> (color and style reference in the first two columns and our results in columns three-four) to demonstrate disentangled transfer of color and style attributes from reference images. We show different combinations of control over color and style attributes. In the first row, we are able to generate images following the content from the prompt (, ) while following the style and color from the provided reference images.
In the last two rows, our method generates images following style or color from the reference while imposing no control over the other attribute (observe the straight and sharp edges in the dog's outline in the last row).
We next compare our method with various baselines including training-free style transfer (Cross-Image <cit.>, StyleAlign <cit.>, FreeDoM <cit.>) in Figure <ref>, conventional diffusion-free style transfer baselines (AesPA-Net<cit.>, StyTR2<cit.>, and ArtFlow<cit.> in Figure <ref>), training-free color transfer (Cross-Image <cit.>, FreeDoM <cit.>) in Figure <ref>, and training-based techniques such as MATTE <cit.> and ProSpect <cit.> in Figure <ref>.
We begin by discussing disentangled style transfer results in Figure <ref>. One can note that our method is able to generate images (see last column) following the style from the reference image in a disentangled manner without affecting any other aspect of what the baseline generates (See Figure <ref> for zoomed in section of images for the first/second rows of origami style transfer). On the other hand, as expected, the Cross-Image baseline transfers the full appearance of the reference image. One can also notice missing regions in the bucket image in fourth row/fifth column due to lack of semantic correspondences. Similarly, the repurposed style version of cross-image baseline, while doing better, can not fully disentangle style from color as the features already have color information by the time style transfer happens during denoising. In StyleAlign, in addition to the style and color being entangled, the algorithm also transfers the structural/layout aspects of the reference image, thereby limiting the kind of control with style we seek over the final outputs (see fourth row/third column, where it tries generating bucket images following the layout of the cat from the reference image). Finally, FreeDoM also entangles color and style because its loss function does not account for any explicit disentanglement of these attributes.
We also compare with conventional style transfer methods in Figure <ref>. The baselines are able to preserve input content, but the transfer of style (e.g., van Gogh patterns in the first row) is limited. Since these methods target disentanglement of content and style, style and color are not treated independently/separately, leading to undesirable results (e.g., see the first row/second-third columns where the resulting flower vase has blue/orange colors from the style reference, whereas our method in the last column is able to fix this issue). Finally, while these methods are trained on specific datasets, our method is training free and not specific to any dataset, and can seamlessly adapt to various styles/content due to the base model's capability.
We next discuss disentangled color transfer results in Figure <ref>. In all cases, our method is able to correctly transfer the color from the reference image whereas the cross-image (original) baseline transfers the full appearance from the reference image and can not control the color attribute independently (see third/fourth rows where the generated birds have a cloth-like appearance). Similarly, the repurposed color version of cross-image baseline, while disentangling color and style, is unable to produce good results due to lack of correspondences.
With FreeDoM, in some cases there is overfitting to the colors during optimization while disregarding the overall aesthetics, whereas in several other cases the generated images disregard the reference colors (the birds in the last row/second column) due to incorrect optimization. Our method is able to control and transfer the color attribute independently without affecting any other aspects of what the pretrained model would have generated.
Finally, in Figure <ref>, we compare to recent training-based methods. In ProSpect <cit.>, one can see color and style are completely entangled (e.g., first column where both color and style are transferred). On the other hand, despite our method being completely training free, it performs on par with MATTE <cit.>, which is a training-based approach (e.g., first column with our orange dogs). In the second column, whereas both MATTE and ProSpect entangle color and style, our method is able to generate van Gogh-styled dogs without the bluish-orange colors.
Quantitative Evaluation. We next quantify improvements with our proposed method. We wish to evaluate how well these methods disentangle style and color (we follow the protocol from MATTE <cit.>), while also following the details specified as part of the text prompt. We keep either of the attributes (out of style/color) fixed from a reference image (we use the same set as in <cit.>) and vary the other (we use the list of 7 types, 13 types and 11 types from previous works <cit.>). In each case, we synthesize a set of 64 images and compute the average CLIP image-text similarity. A higher score indicates better disentanglement since both attributes would then be separately captured well in the output.
To further evaluate the quality of transfer of each attribute, we also compute similarity scores between the ground truth color/style (color obtained using ColorThief <cit.>) and the generated images. As can be seen from Table 1, our method outperforms all training-free baselines and allows for independent control over style and color attributes. Further, when compared to the training-based MATTE <cit.>, our method performs very competitively despite being training free.
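The paper does not spell out its exact evaluation pipeline; a typical CLIP image-text similarity computation would look like the sketch below, where the checkpoint choice and the plain cosine-similarity averaging are our assumptions.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def clip_similarity(images, prompt):
    """images: list of PIL images; returns mean cosine similarity to the prompt."""
    inputs = processor(text=[prompt], images=images, return_tensors="pt", padding=True)
    out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return (img @ txt.T).mean().item()
```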
Finally, we conduct a user study with the generated images where we show survey respondents a textual prompt followed by color and style references, and ask them to select the images (among sets from four different methods) that best follow the provided constraints. From Table 2, our method's results are preferred by a majority of users, providing additional evidence for effectiveness of the proposed approach.
§ SUMMARY
We considered the problem of disentangled color and style control of text-to-image models and noted that none of the existing methods address this problem with a training-free approach. To this end, we proposed the first training-free, test-time-only solution with two key novelties: a timestep-constrained latent code recoloring transformation that aligns the colors of generation outputs with reference colors, and a timestep-constrained self-attention feature manipulation strategy in the L channel of the LAB space that aligns the style of generation outputs with that of the reference. This results in a flexible approach that can do color-only, style-only, or joint color-style conditioning in a disentangled and independent fashion. Extensive qualitative and quantitative evaluations demonstrated the efficacy of our proposed method.
§ SUPPLEMENTARY MATERIAL
In Section <ref>, we validate our choice of using a grayscale version of the provided reference image compared to the colored reference image. In Section <ref>, we show additional results for disentangled conditioned on color and style attributes from a user-provided reference image. In Section <ref>, we show applicability of our proposed color conditioning in generating color variants of a given image. In Section <ref>, we show additional results for transfer of style attribute from a reference image. Finally, we conclude with some discussion on
limitations of our method in Section <ref>.
§.§ Style conditioning with colored reference
We experimented with taking the original (colorised) reference image instead of its grayscale counterpart. However, we observed that the presence of color information in the reference image leads to the intensity of colors being modified in the generated image, thereby causing differences in the lightness/darkness of some colors. We show some examples in Figure <ref>. This further validates our choice of using a grayscale reference while transferring style.
§.§ Additional Qualitative Results
We show additional qualitative results in Figure <ref> to further demonstrate the efficacy of our method in generating images conditioned on either the color or the style attribute independently, or on both color and style jointly. In the first row, we show results for transferring colors from both the background (bg) and foreground (fg) of the reference image to localised regions like the badminton, vase, and ball in the generated images respectively. In the second row, we show results for global transfer of style from the reference image and, as one can note, the van Gogh-style patterns are clearly visible in the generated images. Finally, in the third row, we generate images conditioned on both the color and style attributes (e.g., see the third column/last row where the ball has the blue color transferred and the image follows the van Gogh style).
§.§ Generating color variants
We show additional color transfer results in Figure <ref>, demonstrating their utility in generating multiple color variants of a provided input image. We consider different colors (blue, green, red) and their combinations and show that our proposed method can effectively transfer color to the input image in all cases. For instance, note that the first three rows have the individual colors transferred for all of the bird, cup, toy car and ball. Similarly, rows 4-6 have combinations of two colors transferred, whereas the last row has all three colors (e.g., see the first column where the generated bird has proportions of all three colors).
§.§ Additional Style transfer results
In Figure <ref>, we show additional results to further demonstrate the effectiveness of our proposed approach in transferring style from reference images in a disentangled manner, without impacting the colors of the generated image. For instance, consider the last column where the reference image has a repeated square-box pattern. One can clearly note that all our generated images show the same pattern while still maintaining the colors from the original image.
§.§ Limitations
In this section, we briefly discuss a few limitations of the proposed approach. First, as shown in Figure <ref>, the recoloring transforms in some cases lead to a loss of minute details, e.g., the bird's eye and details on the dog's face. Second, the masks for the region of interest to be recolored are derived from the cross-attention layers, which, as shown in previous works <cit.>, do not always accurately localize the region. In such cases, one can use high-quality off-the-shelf methods <cit.> to obtain segmentation masks for improved localised color and style transfer.
ieee_fullname
|
http://arxiv.org/abs/2409.02631v1 | 20240904115052 | Accurate calibration spectra for precision radial velocities -- Iodine absorption referenced by a laser frequency comb | [
"Ansgar Reiners",
"Michael Debus",
"Sebastian Schäfer",
"Eberhard Tiemann",
"Mathias Zechmeister"
] | astro-ph.IM | [
"astro-ph.IM"
] |
Iodine absorption referenced by a laser frequency comb
Institut für Astrophysik und Geophysik, Georg-August-Universität, Friedrich Hund Platz 1, 37077 Göttingen, Germany
Institut für Quantenoptik, Leibniz Universität Hannover, Welfengarten 1, 30167 Hannover, Germany
Astronomical spectrographs require calibration of their dispersion
relation, for which external sources like hollow-cathode lamps or
absorption-gas cells are useful. Laser frequency combs (LFCs) are often
regarded as ideal calibrators because they provide the highest accuracy and
dense sampling, but LFCs are facing operational challenges such as
generating blue visual light or tunable offset
frequencies.
As an example of an external source, we aim to provide a precise and accurate frequency solution for the
spectrum of molecular iodine absorption by referencing to an LFC that
does not cover the same frequency range.
We used a Fourier Transform Spectrometer (FTS) to produce a consistent
frequency scale for the combined spectrum from an iodine absorption cell at 5200–6200Å and an LFC at 8200 Å. We used 17,807 comb lines to
determine the FTS frequency offset and compared the calibrated iodine
spectrum to a synthetic spectrum computed from a molecular potential
model.
In a single scan, the frequency offset was determined from the comb
spectrum with an uncertainty of ∼1 cm s^-1. The distribution of
comb line frequencies is consistent with no deviation from linearity. The
iodine observation matches the model with an offset smaller than the model
uncertainties of ∼1 m s^-1, which confirms that the FTS zero
point is valid outside the range covered by the LFC, and that the frequencies of the iodine
absorption model are accurate. We also report small
systematic effects regarding the iodine model's energy scale.
We conclude that Fourier Transform Spectrometry can transfer LFC accuracy
into frequency ranges not originally covered by the comb. This allows us
to assign accurate frequency scales to the spectra of customized
wavelength calibrators. The calibrators can be optimized for individual
spectrograph designs regarding resolution and spectral bandwidth, and
requirements on their long-term stability are relaxed because FTS
monitoring can be performed during operation. This provides flexibility
for the design and operation of calibration sources for high-precision
Doppler experiments.
Accurate calibration spectra for precision radial velocities
A. Reiners1
M. Debus1
S. Schäfer1
E. Tiemann2
M. Zechmeister1
September 9, 2024
=========================================================================================
§ INTRODUCTION
Precise and accurate measurements of frequencies (or wavelengths) in
astronomical spectra enable a range of fundamental physical
experiments. Frequency shifts occur through Doppler shifts allowing the
measurement of velocities, which can be used, for example, to measure the mass
of unseen extrasolar planets <cit.>, stellar pulsations
<cit.>, and velocity fields in atmospheres of stars and
planets <cit.>. Transition frequencies of spectral lines
are determined and affected by fundamental constants like the fine structure
constant and the proton/electron mass ratio <cit.>, by gravitational redshift <cit.>, and by the accelerated expansion of the
Universe, which affects galaxy motion on very large scales
<cit.>. These and other experiments can be carried out if
the frequency scale in spectra from astronomical objects can be accurately
determined.
Astronomical spectroscopy is photon starved. Large telescopes are required to
collect enough light from distant objects for astrophysical analysis, which is
different from typical laboratory setups where the intensities of the investigated
light can often be controlled. For precision Doppler measurements, high spectral resolution is favorable <cit.>. Échelle
spectrographs reach resolutions of R = λ / Δλ = 10^5 at
efficiencies on the order of 10 %. For comparison, a Fourier Transform
Spectrometer (FTS) can operate at higher resolution and provides a number of
advantages regarding frequency calibration, but delivers an efficiency that is
several orders of magnitude lower than in astronomical spectrographs
<cit.>.
In grating (échelle) spectrographs, light is collected in individual
detector pixels that are at minimum several hundred m s^-1 wide and are not
strictly evenly spaced <cit.>. This poses a fundamental
problem to frequency calibration because, in principle, each individual pixel
requires calibration through external information. Furthermore, astronomical
spectroscopy often requires a relatively large bandwidth —for example one
octave— because information is collected simultaneously from many individual
spectral features. Calibration sources must provide dense and accurate
spectral information across the full frequency range of a spectrograph. Useful
calibration light sources are, for example, hollow cathode lamps
<cit.>, absorption gas cells <cit.>,
Fabry-Pérot etalons (FPs) <cit.>, and laser frequency combs
<cit.>. This combination of requirements,
and especially the large wavelength range, is a challenge for the calibration sources
—including LFCs— used for calibration in many high-precision
spectrographs <cit.>. One of the advantages of an LFC is
that the frequency scale of its spectrum is accurately known from fundamental
principles, and that it provides a dense population of narrow lines, which
renders it a conceptually ideal reference. This is in contrast to hollow
cathode lamps where the distribution of lines is uneven, leaving large areas of
the spectral domain uncovered, and where individual lines are typically not
known to better than about 10 m s^-1 <cit.>. FPs can alleviate part of this problem by delivering a
tailored comb of peaks over a large wavelength range, but our knowledge of peak
frequencies and their stability is limited, necessitating external calibration
<cit.>.
Gas-absorption cells are a spectroscopic standard for high-accuracy frequency
calibration, and are used, for example, in tunable laser applications
<cit.>. <cit.> introduced the use of
gas-absorption cells in astronomical observations, using hydrogen fluoride,
which was considered to be the most suitable available gas at the time. The
use of molecular iodine in astronomy dates back to observations of solar
Doppler shift measurements <cit.>. Early applications used molecular absorption lines as
a reference for differential line shifts between the stellar (solar) and gas
absorption spectrum to track drifts in the spectral format. This is similar to
the use of telluric absorption lines as a standard, which was introduced by
<cit.>; the main advantage is that stellar and
calibration light follow identical paths <cit.>. The same property allows gas
absorption lines to be used for establishing a precise wavelength scale and for specifying
the spectrograph instrumental line shape over the entire spectral range
<cit.>. For reference, a laboratory spectrum is used, which is
obtained with an FTS at a much higher resolution and signal-to-noise ratio
(S/N) than the échelle spectra.
Over the last half century, the field of precision Doppler experiments
has developed into an industry, with many new spectrographs at a variety of
facilities. So far, frequency calibration is typically limited at the
m s^-1 level
<cit.> or slightly better in individual targets
<cit.>. Calibration strategies generally fall
into two categories <cit.>, known as the iodine
cell technique (see above) and the simultaneous reference technique
<cit.>. In this work, we present a strategy for using an FTS
to establish the accurate frequency scale for any calibration
spectrum. This can be used to create calibration spectra optimized in shape
and coverage for astronomical spectrographs, and accurately referenced
across the entire spectral range. To demonstrate this, we employ a model of
molecular iodine absorption, and show that the model can be used either
instead of an observed template for the iodine cell technique, or as
a simultaneous reference if illuminated with a flatfield lamp.
§ METHODS
§.§ Fourier Transform Spectrometer
An FTS records an interferogram produced by a Michelson interferometer with
one movable mirror <cit.>. They are standard tools in laboratory
spectroscopy. Frequency calibration is achieved through a calibration laser
that provides a reference for the position of the movable mirror. In contrast
to grating spectrographs, the frequency scale in an FTS is, to very high
degree, linear in wavenumber because it is defined by interference phenomena
inherent to the instrument <cit.>. The only
free parameter in the frequency scale is the offset between the control laser
and the science light, which is typically known with an uncertainty of around
100 m s^-1 Doppler shift. This is in stark contrast to grating
spectrographs, where the frequency of every individual pixel comes with a
substantial uncertainty. In practice, however, the optical path difference
between control laser and science light in an FTS can depend on frequency,
and is caused for example by dispersion in the beam splitter. In the complex spectrum
reconstructed from the interferogram, this causes a phase shift that varies
with frequency and needs to be corrected for. Phase errors can cause
significant frequency offsets between different parts of the spectrum, which
is why empirical verification of frequency linearity is important. The
phase shift is expected to be a relatively slowly varying function of
frequency, which is why a small symmetric portion of the interferogram is
sufficient for phase correction <cit.>. We refer to
<cit.> and <cit.> for details about
phase correction.
Another advantage of Fourier Transform Spectrometry is that the observed
interferogram analytically defines the spectrum as a sum of continuous
trigonometric functions. In other words, the sampling of the interferogram
does not translate into a spectrum sampled at a finite number of pixels, but
the spectrum can (in principle) be computed at arbitrary positions from the
interferogram. This allows arbitrarily high sampling of the spectrum and a
clean definition of the instrumental line shape; while sparse sampling is a
limiting factor for the measurement of spectral lines in astronomical grating
spectra, this problem does not exist in Fourier Transform
Spectrometry. Therefore, the latter provides a testbed for high-accuracy line
profile measurements.
The FTS offset can be determined from a calibration standard in the observed
spectrum <cit.>; one of the main advantages of this approach
is that calibration features do not need to cover the same frequencies as the
science spectrum (from the Sun or other sources) because the interferometer
simultaneously receives information about the entire spectral range during a
scan <cit.>. We can therefore use one part of the
spectral range for calibration and another for the science spectrum.
Our setup is a commercial Bruker 125HR with a HeNe laser for reference. The
maximum optical path difference is 208 cm, of which 47 cm are symmetric
around the interferogram zero-point. We are using custom software for
computation of the spectrum, in particular for phase correction, which is
critical for our high-resolution spectra. We apply no apodization and we use
the Mertz method for phase correction <cit.>.
For this work, we used the VIS setup of our evacuated FTS covering the
spectral range 10,000–25,000 cm^-1 (4000–10,000 Å). For the
iodine-LFC measurements, we combined the light of the two sources using a
dichroic beamsplitter with a 6800 Å cutoff wavelength outside the FTS and
coupled the combined light into the FTS input fiber <cit.>. The input fiber is
hexagonal in shape, which ensures good near- and far-field scrambling of the two light
sources.
§.§ Laser frequency comb
The frequency spectrum of an LFC is a broadband comb of equidistant emission
lines, which can be stabilized with high accuracy and precision
<cit.>. The position of each line is
determined by two degrees of freedom: the repetition rate, f_rep, and the
carrier-envelope-offset frequency, f_CEO, governed by the relation
f_n, LFC = f_CEO + n f_rep,
in frequency units, or λ_n, LFC = c/f_n, LFC in
units of wavelength. Accuracy and precision are achieved by phase locking
f_rep and f_CEO to a stable reference oscillator, such
as an atomic clock. LFCs are important tools in metrology and, among many
other applications <cit.>, are promising calibration
light sources for astronomy
<cit.>. In the following, we will use
the expression "absolute calibrator" to indicate that the calibration is
traced back to the definition of the second as closely as the actual setup
allows.
The LFC in our setup is a LaserQuantum (Novanta) taccor comb. The source laser
is a pulsed Ti:sapphire laser with a repetition rate of approximately
1 GHz. The commercial setup includes an f-2f interferometer for measuring the
offset frequency. Two frequency generators (Rohde & Schwarz SMB100A and
HMF2500) are used for locking f_rep and f_CEO, respectively. Our time
base reference is a GPS-8 from MenloSystems with a precision and accuracy
of better than 10^-12 in one second, which provides a 10 MHz signal for both
frequency generators. For the measurements in this work, the two degrees of
freedom of the LFC were stabilized to f_rep = 1.0019850000 GHz and
f_CEO= 377.4000000 MHz.
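For illustration, Eq. (<ref>) can be evaluated directly from these lock values; a minimal Python sketch (the mode number n = 362,659 is the peak analyzed below, in Sect. <ref>) recovers the comb line near 8250 Å:

    # Sketch: comb line frequency and vacuum wavelength from the lock
    # parameters, with the values as stabilized for this work.
    C = 299_792_458.0        # speed of light [m/s]
    F_REP = 1.0019850000e9   # repetition rate [Hz]
    F_CEO = 377.4000000e6    # carrier-envelope-offset frequency [Hz]

    def comb_line(n):
        """Return (frequency [Hz], vacuum wavelength [Angstrom]) of mode n."""
        f_n = F_CEO + n * F_REP
        return f_n, 1e10 * C / f_n

    f_n, lam = comb_line(362_659)
    print(f"{f_n / 1e12:.6f} THz -> {lam:.3f} Angstrom")   # ~8250 Angstrom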
To generate a supercontinuum, we use a photonic crystal fiber stub of 14.5 mm
in length (NL-2.8-850-02) tapered by Vytran according to specifications retrieved
from simulations based on the approach of <cit.>. Our simulation
approach is designed to generate a stable supercontinuum, as detailed in
<cit.>.
§.§ Absorption cell with molecular I_2
Our iodine absorption cell setup consists of off-the-shelf components from
Thorlabs: The iodine cell is a GC19100-I, and the heater assembly is a GCH25-75
controlled with a TC200-EC unit. We use a fiber-coupled tungsten-halogen lamp
(HL-2000-HP-FHSA, OceanOptics) with OAP fiber couplers (RC08FC-P01, Thorlabs)
to guide light through the cell. The whole optical assembly is wrapped in
aluminum foil and placed in a styrofoam insulated box for additional
temperature stability. Typically, the temperature is stable to within 100 mK
on the timescale of one day.
§.§ Model of molecular iodine absorption
The model we use for molecular iodine absorption is based on a description of
the rovibronic structure of the I_2 B-X spectrum calculated from molecular
potentials for the two electronic states and their hyperfine parameters
informed by high-precision measurements of the B-X spectrum of I_2 in the
visible <cit.>. Depending on the temperature assumed in the
modeling, relative intensities of individual transitions are predicted. The
expected accuracy of the transition frequencies is better than 3 MHz
(∼2 m s^-1) in the wavelength range of 5260–6670 Å. From the line
list, we construct a model spectrum through broadening each individual
spectral line according to temperature Doppler broadening and the FTS
instrument profile.
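A schematic implementation of this construction step might read as follows, assuming a Gaussian thermal Doppler kernel and the sinc-shaped FTS instrument profile discussed in Sect. <ref>; the line-list arrays stand in for the model output, and the intensity normalization is illustrative:

    import numpy as np

    AMU, KB, C = 1.660539e-27, 1.380649e-23, 2.99792458e8   # SI units

    def i2_model_spectrum(k_grid, k_lines, strengths, T=317.0, L_cm=136.0):
        """Transmission spectrum on wavenumber grid k_grid [cm^-1]:
        thermal Doppler broadening of each line, then sinc FTS profile."""
        sigma_v = np.sqrt(KB * T / (253.8 * AMU))     # I2 Doppler width [m/s]
        tau = np.zeros_like(k_grid)
        for k0, s in zip(k_lines, strengths):
            sigma_k = k0 * sigma_v / C                # line width [cm^-1]
            tau += s * np.exp(-0.5 * ((k_grid - k0) / sigma_k) ** 2)
        spec = np.exp(-tau)                           # optical depth -> transmission
        dk = k_grid[1] - k_grid[0]
        off = np.arange(-1000, 1001) * dk             # ILS kernel support
        ils = np.sinc(2.0 * L_cm * off)               # first zero at 1/(2L)
        return np.convolve(spec, ils / ils.sum(), mode="same")

Here T = 317 K corresponds to the 44 °C cell temperature, and 253.8 amu is the molecular mass of I_2.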
For the measurement used in this work, we computed the line list according to
an I_2 temperature of T = 44^∘C. The model spectrum contains a total
number of 4,427,241 lines in the range of 5150–6300 Å. We included all lines
in our model regardless of their predicted absorption intensity. The I_2
spectrum contains on average between 30 and 100 lines per 1 km s^-1
Doppler width, which is 100–300 lines in one resolution element seen by a
typical astronomical spectrograph (R = 100,000). We scale all line
intensities from the model calculations by a factor of 3800 to approximately
match our observed spectrum.
§ OBSERVATIONAL DATA AND FREQUENCY CALIBRATION
§.§ LFC and I_2 spectra
We combined light from an LFC and an I_2 absorption cell and simultaneously
obtained their spectra in the same FTS scan. A dichroic beam splitter
separated the light such that the FTS received light at wavelengths of shorter
than 6800 Å only from the I_2 cell and at longer wavelengths only from
the LFC. We obtained 19 spectra with ten scans each on May 25, 2023; the total scan
time per spectrum was 22 minutes. One example spectrum with both components
is shown in Fig. <ref>.
The simultaneous observation of I_2 and the LFC spectrum allows a direct
comparison between the LFC line positions and the absolute I_2 line
frequencies. From the information about the LFC line position, we determined
the zero-point offset of the FTS frequency solution establishing an accurate
frequency scale for the I_2 spectrum.
§.§ Absolute frequency calibration from LFC spectrum
We aim to determine the zero-point of our combined I_2 and LFC spectra from
the region of the spectra containing the strongest LFC lines
(Fig. <ref>). We show a small portion of one LFC spectrum in
Fig. <ref> that we use as example for the 19 spectra
incorporated in the final analysis. The width of the individual lines is
determined by the maximum optical path difference, L. We used L = 136 cm
as a compromise between spectral resolution and scan time. We compare the
spectral region around the LFC peak n = 362,659 to an analytical model of the
instrumental line shape convolved with a model LFC peak pattern. The
instrumental line shape is a sinc-function with a width determined by the
maximum optical path difference. Additional broadening caused by the
finite-sized aperture is introduced by a convolution with a box function of
width 1/2L <cit.>. Assuming optimum aperture, we
estimated the effective resolution as the quadratic sum of the effects from
finite scan length, L, and corresponding aperture;
R = k/Δ k_L = √(2)Lk ≈ 2.3 · 10^6 at wavenumber
k = 12121 cm^-1 (λ = 8250 Å) and using
Δ k_L = 1/√(2)L. We did not attempt any shape optimization to
account for optical imperfections.
The observed spectrum computed from the interferogram is shown with
uncertainties estimated from the spectral noise determined in the spectral
range of 6600–6700 Å, which is almost completely free of spectral features. The
root mean square (rms) noise is σ_ rms = 0.012 in the flux units shown in
Fig. <ref>. The rms noise is independent of the pixel
sampling of our spectrum and represents the case of uncorrelated sampling only
if the number of pixels, N, is equal to the number of resolution elements,
N_L; that is, if the wavenumber stepsize is Δ k = Δ k_L, with
wavenumber k = 1/λ and λ the wavelength. To scale the noise
level to our oversampled spectrum, we computed the noise following the scaling
relation
σ^2 =
σ_ rms^2 · N/N_L =
σ_ rms^2 ·Δ k_L / Δ k =
σ_ rms^2 / √(2) LΔ k,
where Δ k is the stepsize in the spectrum used.
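In practice this rescaling is a one-liner; a sketch with the values used here (σ_rms measured at 6600–6700 Å, L = 136 cm):

    import numpy as np

    def scaled_noise(sigma_rms, L_cm, dk):
        """Per-sample noise of an oversampled FTS spectrum, Eq. (<ref>);
        dk is the actual stepsize [cm^-1], dk_L = 1/(sqrt(2) L) the width
        of one resolution element."""
        dk_L = 1.0 / (np.sqrt(2.0) * L_cm)
        return sigma_rms * np.sqrt(dk_L / dk)

    print(scaled_noise(0.012, 136.0, 1.0e-3))   # noise for dk = 1e-3 cm^-1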
For the LFC spectrum in Fig. <ref>, absolute peak positions are
known from Eq. (<ref>), and peak intensity is adjusted manually to
visually match the observed spectrum; we did not attempt to perform a formal
fit here. The only additional free parameter of the observed spectrum is the
global zero-point offset.
The analytical description of the instrumentally broadened LFC spectrum
matches the (offset corrected) observed spectrum to a high degree. The
individual LFC peaks show a clear sinc pattern in which the sidelobes
partially overlap between the peaks, producing a periodic pattern that is
significantly different from noise. We refrain from a detailed analysis of the
FTS instrumental line shape but note a slight asymmetry between the depths of
the blue and red minima visible around the main peaks; for example, the red
minima are less deep and do not fully extend to the expected depth. For
comparison, we include in Fig. <ref> an instrumental line shape
according to a resolution of R = 100,000, the typical resolution of
astronomical high-resolution spectrographs (dashed line). Very often, in
astronomical spectra, such a resolution element is sampled with
approximately 3 pixels. This underlines the potential of the
Fourier transform technique to offer an improved understanding of calibration
source characterization and line center fitting.
We determined the zero point (i.e., the offset to the control laser in the
FTS) of our observation, v_0, from 17,807 individual LFC peaks in the
wavelength range of 8000–8400 Å. For every known LFC peak,
λ_n, LFC in Eq. (<ref>), we fit a sinc function to
the observed spectrum in a window of 250 m s^-1 in Doppler width around the
peak center, which approximately covers the main peak of the instrumental line
shape. Fit parameters were the amplitude of the LFC peak, the velocity offset
between the peak center observed in the spectrum (λ_n, FTS)
and its true position (λ_n, LFC),
_n = c (λ_n, FTS - λ_n,
LFC)/λ_n, LFC, and the width of the sinc
function. We ignored the additional (symmetric) broadening from the finite
aperture applied in the FTS because its main effect is an increase in the peak
width. The individual line positions were fit with an uncertainty depending on
peak flux; 69 % and
34 % of the lines showed uncertainties of below 1 m s^-1 and below 50 cm s^-1, respectively.
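A minimal sketch of this per-peak fit is given below (in wavenumber space for convenience); the fit window and the initial sinc width w ≈ 1/(2L) are the only assumptions, and scipy is used for the least-squares fit:

    import numpy as np
    from scipy.optimize import curve_fit

    C_MS = 2.99792458e8   # speed of light [m/s]

    def peak_offset(k, flux, k_lfc, window_ms=250.0, L_cm=136.0):
        """Fit a sinc profile around the known comb line k_lfc [cm^-1] and
        return the velocity offset v_n and its 1-sigma uncertainty [m/s]."""
        sel = np.abs(k / k_lfc - 1.0) < 0.5 * window_ms / C_MS
        sinc = lambda kk, amp, k0, w: amp * np.sinc((kk - k0) / w)
        p0 = (flux[sel].max(), k_lfc, 1.0 / (2.0 * L_cm))
        (amp, k0, w), cov = curve_fit(sinc, k[sel], flux[sel], p0=p0)
        v_n = C_MS * (k_lfc - k0) / k0      # sign convention as in the text
        return v_n, C_MS * np.sqrt(cov[1, 1]) / k0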
The distribution of individual LFC peak position measurements across the
spectral range is shown in the left panel of Fig. <ref>. Peak
position measurements are scattered around a mean value with a distribution
that is wider at regions of lower peak amplitude, which is consistent with the
assumption that lines with higher intensity are better determined. We fit a
linear slope to the velocity offsets, v_n, to determine the zero point
of our I_2 and LFC spectrum, v_0, and to search for a potential
slope in the wavelength solution, s, using the linear model
v_n = v_0 + s · (λ_n - λ_c),
with λ_c = 8200 Å.
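The corresponding weighted least-squares fit is straightforward; a sketch, taking the measured offsets v_n and their uncertainties σ_n as inputs:

    import numpy as np

    def zero_point_fit(lam, v, sigma, lam_c=8200.0):
        """Weighted fit of v_n = v_0 + s (lambda_n - lambda_c); lam in
        Angstrom, v and sigma in m/s. Returns (v_0, s) with uncertainties."""
        A = np.vstack([np.ones_like(lam), lam - lam_c]).T / sigma[:, None]
        p, *_ = np.linalg.lstsq(A, v / sigma, rcond=None)
        cov = np.linalg.inv(A.T @ A)        # parameter covariance matrix
        return (p[0], np.sqrt(cov[0, 0])), (p[1], np.sqrt(cov[1, 1]))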
In the spectrum shown in Fig. <ref>, we find a zero-point
offset of v_0 = 23.945 ± 0.004 m s^-1, that is, the statistical
uncertainty of the mean wavelength accuracy of our spectrum is
σ_v_0 = 4 mm s^-1, which is representative of all 19
spectra. The slope we determine from the fit is
s = -0.49 ± 0.05 mm s^-1 Å^-1, which is a (formally)
statistically significant slope of around 20 cm s^-1 over the range of
400 Å. We experimented with different wavelength ranges, finding that the
value of the slope scatters around zero depending on the choice of range.
From this we conclude that the slope value is dominated by systematic rather
than statistical uncertainties and that our results are consistent with zero
slope, or |s| < 1 mm s^-1 Å^-1 (approximately 3·10^-8 linear dispersion). This is in agreement with the results in
<cit.>, where we found the linear dispersion to be
below 10^-8 at wavelengths of 8000–9800 Å. Following the same argument,
we estimate that the uncertainty in the zero-point determination is
approximately 1 cm s^-1, which is slightly larger than the formal fit
result because of systematic effects.
To assess whether the fit positions were influenced by systematic effects, we
investigated the distribution of v_n around v_0, divided by their
fit uncertainty σ_n. This distribution is shown in the right panel of
Fig. <ref>. It is consistent with a Gaussian distribution with a
width that is only 5 % larger than expected from the uncertainties, which is
an indication of a realistic noise estimate. To search for
wavelength-dependent patterns in the offset, we overplot the weighted means in bins of
5 Å in width in the left panel of Fig. <ref> (gray
circles). These values scatter around the overall mean with a standard
deviation of 16 cm s^-1 without evidence for clear systematic uncertainties. From
this, we can rule out systematic patterns of several tens of angstroms in length and
exceeding a few 10 cm s^-1 in the wavelength range of 8000–8400 Å. We
note that hidden systematic uncertainties can be caused by the fact that we ignore the
additional finite-aperture broadening, imperfections of the instrumental line
shape, blends caused by the far wings of the instrumental line shape, or by
spectral features like water absorption, which are not taken into account.
§ ANALYSIS OF MODEL I_2 ABSORPTION SPECTRUM
§.§ Absolute frequencies
With the zero-point determination, we established an accurate frequency
solution for our combined I_2 and LFC spectra, which means that the
frequency solutions of our iodine spectra were calibrated with an overall
uncertainty on the cm s^-1 level. We compared our observed spectra to
synthetic spectra based on the model of <cit.> and determined
remaining Doppler velocity offsets between model and observations,
ΔRV. To see whether the offset showed any dependence on frequency, we
performed a fit of the I_2 model spectrum to our observations in 302
spectral chunks of 200 km s^-1 width across the wavelength range of
5150–6300 Å. The chunk size is a compromise between velocity uncertainty
per chunk and the spectral resolution of our analysis. We computed the fits for
each spectrum after zero-point correction with the LFC and averaged the
results from 19 spectra. For each chunk, fit parameters were the Doppler
velocity offset between the model and the observed spectrum, a scaling
parameter for the I_2 line absorption intensity, and a linear slope for
continuum normalization. Uncertainties of the offsets for each chunk per
single spectrum, as calculated from the spectrum noise, were below
2 m s^-1 for 32 % and below 4 m s^-1 for 90 % of the
19 · 302 = 5738 individual computations. After averaging over the 19
exposures, the median uncertainty per chunk is 0.54 m s^-1 with 93 %
of the values below 1 m s^-1. This allows us to investigate the accuracy
of the model's absolute frequency scale and its dependence on wavelength.
The absolute Doppler velocity offsets, ΔRV, between the I_2 model
spectrum and our observations are shown in Fig. <ref>. The model
frequencies are centered around zero Doppler offset: the frequencies of the
LFC-corrected I_2 spectra accurately coincide with the model
predictions. Specifically, Doppler offsets are distributed within the
estimated frequency uncertainty pattern <cit.>, which is
indicated as gray dashed lines in the top panel of Fig. <ref>.
We therefore conclude that the I_2 model accurately predicts the I_2
absorption line frequencies, and that the model frequencies are useful for an
absolute calibration of astronomical spectrographs.
We allow a scaling of the absorption line intensity because we suspect that
the calculated transition intensities applying the Franck-Condon principle
show deviations from observations that depend on frequency. The middle panel
of Fig. <ref> shows that the intensities of the vibronic
transitions indeed deviate by more than 10 % from the average. We confirmed
that the systematic variation of the transition frequencies shown in the top
panel of Fig. <ref> are independent of line intensity by
carrying out the same analysis but keeping line intensity constant over all
frequencies. This clearly indicates that improvements in the modeling of the
spectra should mainly focus on the energy scale (potential functions and
hyperfine interaction) and not the intensity (relaxing the Franck-Condon
principle).
The pattern in ΔRV consists of one long-period wave overlaid by a
rapidly oscillating pattern that coincides with the I_2 absorption band
structure. Uncertainties in ΔRV for each chunk are shown in
Fig. <ref> and demonstrate that the long-period wave and also
parts of the oscillating pattern are significantly different from random
noise. The amplitude of the long-period wave is approximately 2 m s^-1,
which significantly exceeds the frequency linearity determined from the
LFC lines in Fig. <ref>. While the LFC lines cover a
different frequency range, we see no obvious reason why the linearity should
be very different in the frequency range used here. A potential source of
systematic errors in determinations of radial velocity is the phase correction. We show the
reconstructed phase for one of our spectra in Appendix <ref>, in
which we find no evidence for a systematic pattern resembling the ΔRV
signature. To verify the robustness of the ΔRV pattern, we computed
ΔRV using the power spectrum (instead of the phase-corrected spectrum)
and found that the ΔRV pattern also appears. This demonstrates that
phase errors are an unlikely source of the ΔRV pattern.
We therefore believe that the pattern of Doppler offsets is dominated by
systematic shifts in the model frequencies that vary through the rotational
bands of the I_2 B-X spectrum. We suggest that our observations be used to
improve the Doppler offset in the model spectrum for this systematic
effect. To visualize and interpolate the offset pattern, we show a spline fit
to the velocity offsets after applying a smoothing (red line in top panel of
Fig. <ref>). The spline represents the effective frequency
correction required to adjust the model spectrum. This correction provides a
frequency solution that is accurate to within 0.5–1 m s^-1 across
the range of 5300–6150 Å. We emphasize that this value does not correspond to
the average performance for the full wavelength range but applies to every
chunk fitted in the spectrum.
§.§ Comparison to the high-S/N observation
To complement our comparison between the I_2 model and observations, we
obtained a deep spectrum of I_2 absorption with our FTS. We added 227
individual observations with ten scans each taken Jan 20–24, 2023. Here, we
used the symmetric configuration of our FTS with a maximum optical path
difference of L = 45 cm, applying forward-backward scanning. Each observation of ten scans took
approximately 24 min, giving a total scan time of 92 h. Co-addition was performed
without individual zero-point correction because typical shifts of a few
m s^-1 between consecutive observations are not relevant for our visual
comparison. We show the spectrum in the wavelength range of 5502–5504 Å in
Fig. <ref> together with a model for T = 44^∘C. The S/N
of the co-added spectrum in this wavelength range is approximately 2400 at a
resolution R = 1.2 · 10^6. The model provides an excellent fit even at
this resolution, reproducing essentially all I_2 lines. We particularly
emphasize the quality at regions where strong lines overlap, which is a good
diagnostic of line depths and spectral resolution. We also note that the model
reveals some areas with very few lines where an almost clean continuum is
visible. In these regions, the data show a ripple pattern that we believe is
not caused by noise but by the sinc function instrumental line shape. Our
model includes the instrumental line shape but we did not attempt to
empirically determine the line shape, nor did we search for any missing lines
or lines underestimated in the model spectrum.
§ DISCUSSION
§.§ Absolute cross calibration with an FTS
The linearity of the FTS frequency scale allowed us to project the
frequency accuracy of the LFC onto the wavelength range of the I_2 spectrum. The
simultaneous (dichroic) observation of LFC and I_2 spectra at different
wavelengths provides I_2 spectra on a frequency scale that is linear and
accurate with an offset uncertainty of smaller than
Δv_0 = 1 cm s^-1 (Δv_0 / c = 3 ·
10^-11). In general, an FTS can project the accuracy of an absolute
frequency standard, for example, an LFC or an I_2 spectrum, into wavelength ranges
not originally covered by the frequency standard. The projection of frequency
accuracy can be carried out on the spectrum from any light source, in
particular wavelength reference spectra optimized for astronomical
spectrographs, such as an FP.
Such reference spectra can then be used to calibrate astronomical
spectrographs in close analogy to the strategy followed when employing an LFC
but without the need for the LFC light to cover the entire wavelength range or
enter the spectrograph. This significantly relaxes requirements on bandwidth,
free spectral range, and peak uniformity (albeit peak stability should not
vary strongly on timescales of the FTS scan). For example, the comb teeth of a
1 GHz LFC as used in our setup can be adequately distinguished at 8500 Å by an FTS with a maximum optical path difference larger than 30 cm, and the
resolution of such an FTS at 6000 Å, R = 10^6, is sufficient for the
characterization of the reference spectrum.
Furthermore, all calibration sources can be characterized at spectral
resolutions far exceeding that of the astronomical spectrograph, and
can be monitored for variability. This can lift the paradoxical situation whereby
spectra from calibration sources are never seen with any other instrument than
the one being calibrated.
§.§ Calibration concept for astronomical spectrographs
In practice, high-resolution spectrographs are calibrated using a suite of
calibration sources. Spectra of an LFC or hollow-cathode lamps are taken once
or a few times per day providing information about the absolute positions of
wavelengths on the detector. In addition, a stable FP is often used to
interpolate between hollow-cathode lamp lines <cit.>, or extrapolate to wavelengths not covered by the other
sources. Some observatories also use the FP simultaneously during science
observations to track the spectrograph drift.
We suggest that an FTS can be used to employ any light source in order to provide
accurate information about the wavelength solution
(Fig. <ref>). In practice, one would use an absolute standard
—such as an LFC— to calibrate the FTS in a limited spectral range outside
the range covered by the astronomical spectrograph (e.g., 8000–8400 Å).
During each calibration exposure of the astronomical spectrograph, a fraction
of the light from the calibration source needs to be channeled into the FTS
where a high-resolution spectrum with an accurate frequency solution is
obtained in the full spectral range relevant for the astronomical
observations, such as 3800–8000 Å. Thus, for every individual exposure taken
with the astronomical spectrograph, the FTS provides an accurate frequency
solution for any of the calibration sources.
The availability of accurate spectra for each calibration exposure massively
relaxes requirements on calibration source stability. For example, with this
strategy, an FP only needs to be stable for the duration of the observation
because variability becomes visible at the significantly higher spectral
resolution of the FTS. We argue that the full characterization of reference
spectra during the time of each observation removes critical free parameters
during the calibration process that are otherwise not accessible —such as variability in line strengths from hollow cathode lamps or an LFC— and
that the opportunity to use any type of calibration source can lead to
superior calibration strategies and reliability. For example, a tunable FP
could be used to iteratively cover all detector pixels in the astronomical
spectrograph if accurate calibration information is available.
For calibration of the FTS offset, an LFC is a viable choice. Alternatively, a
stabilized laser could be sufficient, and we demonstrate in the present paper that
I_2 also provides offset accuracy at the sub-m s^-1 level. The actual
choice of calibration source and strategy depends on the individual setup
but is very flexible. In our solar observatory in Göttingen, we are
obtaining spectra of the Sun in the range of 4000–6800 Å with our FTS. The
FTS could be calibrated with the LFC but we avoid using the latter for
everyday observations for practical reasons. Instead, we calibrate the
instrument with simultaneous FP measurements in the wavelength range of
6800–9000 Å. The FP itself is calibrated every day using a simultaneous
observation of the FP with I_2 at 4000–6800 Å<cit.>.
The design of a full calibration plan exceeds the scope of this paper and
depends on the actual setup and requirements of the astronomical
spectrograph. It is probably realistic to cover the full wavelength range of
astronomical spectrographs with one FP and use the FTS spectra to provide
sub-m s^-1 accuracy. A tunable FP can further improve the wavelength
solution while high-finesse FPs could be used to provide narrow emission
lines useful for characterizing the instrumental profile over the entire
frequency range. The performance of this critical step with respect to an LFC
remains to be tested. With the relaxed requirements on absolute calibration
and the flexible choice of dichroic beamsplitters, technical solutions are
available for a wide range of applications, including visual and infrared
spectrographs.
§.§ Iodine as an absolute calibrator
We demonstrate in Section <ref> that spectra computed from
the molecular potential model from <cit.> describe observations
taken with an I_2 absorption cell to a high degree, and that the modeled and
corrected frequencies are accurate to a level of below 1 m s^-1. Thus,
iodine cells are useful absolute calibrators, and it is possible to obtain
their spectra and wavelength information from (1) FTS measurements or (2) I_2
model spectra.
For calibrating astronomical spectrographs, an iodine cell, illuminated by a
flatfield lamp, can provide an economic solution in the wavelength range of
5200–6300 Å. This can be useful in addition to other calibration sources, such as hollow cathode lamps and Fabry-Pérot etalons, and is affordable for
small-budget observatories. At observatories that include LFCs in their
calibration plan, an iodine cell can provide an additional calibration
source. Its spectrum more closely resembles stellar absorption spectra and is
therefore useful for investigating the impact of potential differences in the
trace profiles between emission (LFC/Fabry-Pérot) and absorption spectra
<cit.>. Most importantly, an iodine cell can be operated in
the telescope beam, while observing bright stars for example. Thus, accurate iodine
cell spectra are useful for verifying Doppler offsets potentially caused by
using different light paths for calibration and science light.
Applications involving I_2 absorption cell spectra usually rely on reference
spectra obtained with an FTS at a significantly higher resolution than used in
the astronomical data. Accurate model spectra could be used instead, with the advantage that model spectra are absolutely noise free and
can be computed at arbitrary spectral resolution and cell temperature. For
example, the model can allow the cell temperature and line
intensity (or partial pressure) to be fitted, which can potentially help to reduce
requirements on temperature stability and I_2 condensation issues. We would
always strongly recommend obtaining high-quality reference FTS spectra for any
iodine cell used for astronomical spectroscopy. Nevertheless, the flexibility
offered by a model spectrum, in addition to the FTS scan, can hardly be
overrated as long as the model sufficiently matches the real spectrum.
Our results could also have a very important impact on molecular spectroscopy.
To the best of our knowledge, such a complete simulation of a molecular
spectrum in connection with an observation has never been achieved over a range of
5200 to 6200 Å with convincing consistency. Our approach can easily be adapted to
the modeling of observed molecular spectra.
§ SUMMARY
We obtained simultaneous observations of an LFC and an I_2 absorption cell
at different wavelengths in an FTS. LFC lines in the wavelength range of
8000–8400 Å were used to test the consistency and linearity of the FTS
frequency solution, and to determine offset and dispersion. The dispersion is
found to be below 1 m s^-1 per 1000 Å, and the offset is determined with
an accuracy of 1 cm s^-1. Comparison between the observed spectrum and an
analytic model of the instrumental line shape demonstrates exquisite spectral
quality at a resolution some 20 times higher than astronomical high-resolution
spectra. This allows the development and study of spectral analysis algorithms at
the sub-m s^-1 level without systematic limitations related to spectral
quality.
All individual spectra were zero-point corrected with the information from the
LFC lines, and the I_2 wavelength range of 5150–6300 Å was used to compare
observations against model I_2 spectra. The comparison was carried out in
spectral chunks of 200 km s^-1 width to search for systematic
variability in the model line frequencies. The offsets show a characteristic
pattern that we attribute to the I_2 band structure. The pattern is
centered around zero velocity and is consistent with the absolute frequency
uncertainty estimated in the model (2 m s^-1). This means that the
model spectra are useful for providing an absolute frequency scale in I_2
broadband spectra, for example when illuminating an iodine cell with a flatfield
lamp or starlight. We argue that the systematic frequency pattern can be
corrected from our comparison between model and FTS observations, and that the
final frequency scale is accurate within an uncertainty of
1 m s^-1 across the wavelength range of 5200–6400 Å. The high
consistency between model and observations opens the opportunity to use the
high flexibility of model spectra for analysis involving I_2 absorption
lines.
The high accuracy demonstrated in I_2 and LFC spectrometry shows the
potential of using an FTS for calibrating astronomical spectrographs. The
possibility of obtaining a referenced spectrum from any light source with an
FTS allows the spectra of any calibration source to be obtained with an absolute
frequency scale known with an uncertainty of better than 10 cm s^-1. We
argue that simultaneous FTS monitoring of calibration sources alleviates the need
for an absolute calibrator to cover the entire frequency range. The FTS can
extend the frequency accuracy from a small wavelength portion over its entire
range. This also allows great flexibility in the design of calibration
sources, which will help improve the performance of calibration strategies
for the next generation of high-precision Doppler experiments in astronomy.
We regret to report that Horst Knöckel passed away recently, in July
2024. He was for many years the main contributor to the spectroscopic work
and the development of the iodine model, on which our present work is based. He
would be delighted to see these fruits of his work. We acknowledge the help
of J. Dabrunz with the extraction of line lists from the
program, and we thank the anonymous referee for a
very helpful report. We thank T. Schmidt, F. Kerber, L. Pasquini,
P. Huke, and members of the ELT Working Group Line Calibration for
discussions about results and applications. M. Debus was funded through the
Bundesministerium für Bildung und Forschung (ELT-ANDES, 05A2023).
§ PHASE CORRECTION
In Fourier Transform Spectrometry, the spectrum is computed through Fourier
transform of the measured interferogram. In theory, the interferogram is
symmetric around zero mirror displacement, and Fourier transform of the real
and symmetric interferogram leads to a real spectrum with no imaginary part,
S(k). In practice, however, the interferogram contains noise, and its zero
point cannot be exactly determined and is a function of wavelength, all of
which leads to a spectrum with complex components,
C(k) = R(k) + i I(k).
This phase error can be corrected under the assumption that the
spectrum is real (the imaginary part is zero). The dependence between C(k)
and S(k) can be represented by multiplication with a phase that depends on
wavenumber, k,
C(k) = S(k) exp(i Φ(k)).
With C(k) computed from the measured interferogram, one way to reconstruct
S(k) is multiplicative phase correction, an algorithm that follows the
strategy developed by <cit.>. With
Φ(k) = arctan(I(k)/R(k)),
we can write
S(k) = C(k) exp(-i Φ(k))
= R(k) cos(Φ(k)) + I(k) sin(Φ(k)),
which is the phase-corrected, real spectrum. We refer to <cit.>
and <cit.> for more detailed descriptions of phase
correction. Wrong phase correction can lead to a significant deformation of
spectral features and, in particular, to apparent Doppler shifts
proportional to the phase error.
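For concreteness, a schematic numpy implementation of the equations above might read as follows; it assumes the interferogram has its zero-path-difference (ZPD) sample at the array center, and it omits the apodization of the phase interferogram and the smoothing of Φ(k) that a production implementation would apply:

    import numpy as np

    def phase_correct(ifg, ifg_sym):
        """Multiplicative (Mertz-style) phase-correction sketch.
        ifg     : full interferogram, ZPD at the array center
        ifg_sym : short double-sided portion around ZPD (even length)
        Returns the phase-corrected real spectrum S(k)."""
        n, m = len(ifg), len(ifg_sym)
        C = np.fft.fft(np.fft.ifftshift(ifg))      # C(k) = R(k) + i I(k)
        lo = np.zeros(n)                           # zero-padded symmetric part
        lo[:m // 2] = ifg_sym[m // 2:]             # positive lags at start
        lo[-(m // 2):] = ifg_sym[:m // 2]          # negative lags wrap to end
        phi = np.angle(np.fft.fft(lo))             # Phi(k) from low-res spectrum
        return C.real * np.cos(phi) + C.imag * np.sin(phi)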
For one of our spectra containing I_2 and the LFC signal (as in
Fig. <ref>), we show the phase in
Fig. <ref>. The phase is derived from the data points with
highest intensity. For small portions of the spectrum, we select the 20 %
of the points that show the highest intensity (black points in top panel of
Fig. <ref>), and fit a spline curve through their phase, which is
shown as red curve in the bottom panel of Fig. <ref>.
For phase correction, we used the 47 cm long symmetric part of the FTS. We
estimated that a systematic displacement of 1 m s^-1 in spectral
features could be caused by a phase error of approximately 0.01 rad
<cit.>. A pattern like the one shown in
Fig. <ref> could be caused by a similar pattern in phase with
such an amplitude. The data we used for phase determination are distributed
symmetrically around the smoothed curve with a 1-σ width of
approximately 0.01 rad, and the statistical uncertainty of the phase per
10 Å bin is ∼ 0.001 rad or 10 cm s^-1. From this we can
conclude that a phase pattern in frequency of 1 m s^-1 amplitude would
be detectable in our data. As a consistency check, we tested our RV
determination using the power spectrum instead of the phase-corrected
spectrum. We found the same ΔRV-pattern in this analysis, which
confirms that the pattern is stable against errors in the phase
correction. In conclusion, we believe that our analysis is robust against
systematic errors on the scale of about 10 cm s^-1.
Theoretical modelling of the exceptional GRB 221009A afterglow

Luca Foffano ([email protected])
INAF-IAPS Roma, via del Fosso del Cavaliere 100, I-00133 Roma, Italy

Marco Tavani
INAF-IAPS Roma, via del Fosso del Cavaliere 100, I-00133 Roma, Italy
Dip. di Fisica, Università di Roma Tor Vergata, via della Ricerca Scientifica 1, I-00133 Roma, Italy

Giovanni Piano
INAF-IAPS Roma, via del Fosso del Cavaliere 100, I-00133 Roma, Italy

July 24, 2024; August 22, 2024; August 31, 2024
=========================================================================================
§ ABSTRACT
The extraordinary gamma-ray burst GRB 221009A provides a great opportunity to investigate the enigmatic origin and evolution of GRBs. However, the complexity of the observations associated with this GRB poses significant challenges for developing a theoretical model within a coherent framework.
In this paper, we present a theoretical interpretation of the GRB 221009A afterglow within the relativistic fireball scenario, aiming to describe the broad-band dataset with a consistent model evolution. We find that the adiabatic fireball evolution in the slow-cooling regime provides a viable scenario in good agreement with observations.
Crucial to our analysis is the set of simultaneous GeV and TeV gamma-ray data obtained by AGILE and LHAASO during the early afterglow phases.
Having successfully modelled the high-energy spectral and lightcurve properties of the afterglow up to 10^4 s as inverse Compton emission, we extend our model to later times, when optical and X-ray data are also available. This approach results in a coherent physical framework that successfully describes all observed properties of the afterglow up to very late times, approximately 10^6 s. Our model requires time-variable microphysical parameters, with a moderately increasing efficiency ε_e of a few percent for transferring the shock energy to radiating particles, and a decreasing efficiency for magnetic field generation ε_B in the range 10^-5 to 10^-7. Fitting the detailed multi-frequency spectral data across the afterglow provides a unique test of our model.
§ INTRODUCTION
GRB 221009A was the brightest gamma-ray burst (GRB) ever detected. Its origin is connected with the core collapse of a massive star
<cit.> at redshift z = 0.15095 ± 0.00005 <cit.>. With an unprecedented brightness and duration, GRB 221009A offers an exceptional opportunity to investigate the physical mechanisms driving such powerful explosions.
On October 9^th 2022, the transient event Swift J1913.1+1946 <cit.> was then identified as GRB 221009A <cit.> and associated with the reference Fermi-GBM trigger time T_0,GBM=13:16:59.99 UT <cit.>.
Several instruments captured the event during its prompt emission, including Konus-Wind <cit.> and Fermi-GBM <cit.>. The afterglow was systematically observed and monitored in the following days <cit.>.
The AGILE satellite <cit.> detected GRB 221009A during its most important phases, providing valuable information in the MeV and GeV energy range with the MCAL and GRID instruments <cit.>. TeV gamma rays were also reported during the initial phases of GRB 221009A by the LHAASO observatory <cit.>. The firm simultaneous detection at MeV, GeV and TeV gamma rays - together with data at other energies - provides crucial information for the modeling of this powerful event.
In this paper we focus on the GRB 221009A afterglow, omitting a detailed study of the early-phase prompt emission which is beyond the scope of our investigation.
In Section <ref> we briefly summarize the multi-wavelength datasets adopted in this analysis and focus on the simultaneous data collected by AGILE and LHAASO.
In Section <ref>, we describe the details of the theoretical model. Then, Section <ref> and Section <ref> are devoted to present the comparison between the model and the multi-band spectral and flux intensity data. Finally, in Section <ref> we briefly discuss the physical results of our analysis.
§ MULTI-WAVELENGTH DATA
Our goal is to properly consider all relevant information regarding the flux and spectral evolution of the multi-band datasets of the GRB 221009A afterglow. In <Ref> we show the lightcurves
from different instruments and energy bands: TeV gamma-ray data from the LHAASO observatory <cit.>, GeV gamma-ray data by AGILE-GRID (described in the next section), X-ray data provided by Swift-XRT <cit.>, and optical data[In this analysis, we do not report the Swift-UVOT data, as their flux is in contrast with the collimation-corrected optical data. We attribute this effect to a strong galactic absorption, typical for this energy band.] obtained with Pan-STARRS and other imaging facilities by <cit.>.
In the late-time spectra we include the hard X-ray data provided by the BAT instrument onboard the Swift telescope <cit.>.
Concerning the TeV LHAASO spectral data[We do not show the spectra up to 10 TeV obtained by LHAASO in <cit.>.
However, they substantially support our scenario.] shown in Figures <ref> and <ref>, we show the observed data and the data de-absorbed for the extragalactic background light (EBL).
Since a minor X-ray precursor of GRB 221009A was detected at the nominal trigger time T_0, GBM (which turned out to occur at a much earlier time than that of the main GRB episode), it is convenient to define the time T^* = T_0, GBM + 226 s and adopt in our analysis the renormalized time
t' = t - T^* <cit.>.
From now on, we use t', if not differently indicated.
In <Ref> we define the different afterglow phases and the relevant time intervals
T_-1, T_0, T_1, T_2, T_3, T_4 as defined in <Ref>.
In our calculations we assume cosmological parameters describing a flat Universe
with Ω_M = 0.3, Ω_Λ =
0.7 and H_0 = 70 km s^-1 Mpc^-1.
§.§ AGILE data
We performed a specific analysis of AGILE-GRID GeV data strictly simultaneous with LHAASO TeV data. The procedure is analogous to that presented in <cit.>, and was performed by dividing the AGILE observations into time intervals consistent with those reported in <cit.>.
The data analysis takes into account only the effective GRID exposure given the fact that the instrument was exposed to the GRB discontinuously due to the AGILE telescope's spinning.
The GRID spectra - shown in Figures <ref>, <ref>, and <ref> - were obtained between 50 MeV up to the maximum energies allowed by photon statistics.
The lightcurve - shown in Figures <ref> and <ref> - was generated in the energy range 50 MeV - 3 GeV. In each time-bin, the energy flux was computed by assuming the specific power-law photon index of the interval, whenever the photon statistics allowed a proper spectral analysis (otherwise adopting a photon index of 2). All details are reported in <Ref>.
§ THE RELATIVISTIC FIREBALL MODEL
The relativistic fireball model provides a theoretical framework to study the afterglow emission of a blast wave expanding in an external environment <cit.>. This interaction develops a forward shock propagating outwardly and a reverse shock propagating into the shell.
The forward shock front expands spherically with an initial bulk Lorentz factor Γ_0, following a hydrodynamic evolution Γ(t) with analytic solutions in the fully adiabatic or radiative scenarios of the expansion <cit.>. The relation between time and radial distance r in the observer's frame is given by r = 4 Γ^2 c t <cit.>.
The shock front propagates into the surrounding inter-stellar medium (ISM) with a density profile in the rest frame n(r) = n_0 r^-s, with s = 0 for a homogeneous environment, and s = 2
for a surrounding medium determined by a massive star wind density profile.
In the fully adiabatic hydrodynamic evolution, these two scenarios cause the Lorentz factor to change over the radial distance (and time) as Γ(r) ∝ r^-3/2 in the homogeneous scenario and as Γ(r) ∝ r^-1/2 in the wind-like case.
As the blast wave decelerates, fractions of the shock energy are transferred on a short timescale to the magnetic field and to the random kinetic electron energy through the quantities ε_B and ε_e, respectively. Given their importance in our analysis, it is worth briefly recalling the physical definitions of ε_B and ε_e, which are ultimately determined by the energy density U_sh in the forward shock. The initial kinetic energy of the blast wave E_k is carried mostly by the protons.
The shock energy density in the bulk frame can be represented as U_sh = 4Γ^2 n_p1 m_p c^2, where n_p1 is the rest-frame upstream proton number density, m_p the proton mass, and we have adopted a shock compression ratio of 4 Γ. The magnetic field B in the comoving forward shock frame is obtained from the relation U_B = ε_B U_sh, with U_B = B^2/8π the magnetic field energy density.
We therefore have,
4Γ^2 n_p1 m_p c^2 ε_B = B^2/8 π ,
and then B = Γ c √(32 π n_p1 m_p ε_B).
Electrons and positrons absorb a fraction ε_e of the comoving energy density U_sh, obtaining the electron energy density U_e = U_sh ε_e.
They are accelerated and a power-law energy distribution dN (γ) / dγ = κ γ^-p is established on a timescale shorter than the dynamical timescale, with p the power-law index and κ the normalization factor. The electrons' Lorentz factor γ ranges from a minimum value γ_min to a maximum value γ_max. Their energy density becomes U_e = m_e c^2 ∫_γ_min^γ_max γ (dN/dγ) dγ.
Assuming that γ_max≫γ_min, we get:
Γ n_p2 m_p c^2 ε_e = n_e2 m_e c^2 (p-1)/(p-2)γ_min ,
where n_p2 = 4 Γ n_p1 and n_e2 are the downstream proton and electron number densities in the rest frame.
It is commonly assumed that n_p2 = n_e2 and that all the complexity of the particle acceleration process gets absorbed into the quantity ε_e, leading to the relation γ_min = [(p - 2)/(p - 1)] (m_p/m_e) ε_e Γ <cit.>.
We anticipate that, unlike in most GRB models, in the case of GRB 221009A both quantities ε_B and ε_e are required to vary with time as a consequence of the fundamental physical processes determining the particle energy evolution of a very complex and long event.
As the fireball expands, the accelerated electrons and positrons radiate their energy by synchrotron and inverse Compton emission through the Synchrotron Self-Compton process <cit.>.
The relation between γ_min and the cooling Lorentz factor γ_c = 6 π m_e c/(σ_T Γ B^2 t) is crucial, and defines two distinct physical regimes. When γ_min > γ_c, particles are in a fast-cooling regime, efficiently losing their energy through synchrotron cooling within a dynamical time. Conversely, when γ_min < γ_c, particles are in a slow-cooling regime, and only particles with γ > γ_c cool efficiently.
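The regime can be checked directly from the relations above; a minimal sketch in CGS units, where the values of Γ, n, p, and t are illustrative assumptions (chosen within the range adopted for the early afterglow) and the inverse Compton correction to γ_c introduced below is ignored:

    import numpy as np

    M_P, M_E = 1.6726e-24, 9.1094e-28       # proton/electron mass [g]
    C, SIGMA_T = 2.9979e10, 6.6524e-25      # light speed, Thomson cross section

    def cooling_regime(Gamma, n, eps_B, eps_e, p, t):
        """Comoving B [G], gamma_min and gamma_c of the forward shock at
        observer time t [s], for upstream density n [cm^-3]."""
        B = Gamma * C * np.sqrt(32.0 * np.pi * n * M_P * eps_B)
        g_min = (p - 2.0) / (p - 1.0) * (M_P / M_E) * eps_e * Gamma
        g_c = 6.0 * np.pi * M_E * C / (SIGMA_T * Gamma * B**2 * t)
        return B, g_min, g_c

    B, g_min, g_c = cooling_regime(Gamma=300.0, n=1.0, eps_B=6e-6,
                                   eps_e=2e-2, p=2.5, t=100.0)
    print("slow cooling" if g_min < g_c else "fast cooling")   # -> slow cooling

With these illustrative values, γ_min ≪ γ_c, consistent with the slow-cooling evolution adopted in the following.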
In our model, we account for internal γγ absorption and cosmological effects. We also include corrections for Klein-Nishina scattering <cit.>, though these are mostly negligible for the modelling of this event. Additionally, we consider the absorption effects due to interactions with the EBL, adopting the model by <cit.>.
We include the cooling effect due to the inverse Compton process, which shortens the electrons' cooling time. The previously defined cooling Lorentz factor - now named γ_c,syn - is then modified as γ_c = γ_c,syn / (1+Y), where Y is the Compton parameter computed following <cit.>.
§.§ Study cases
In our study, we have investigated several alternative scenarios. We considered both a radiative and an adiabatic evolution, the latter both within a fast- and a slow-cooling regime.
Applications of these scenarios to the precise modelling of the multi-band spectral and intensity data of GRB 221009A turned out to be a quite challenging task, with model-dependent outcomes that often resulted in contradictions with the data.
In this paper we restrict our analysis to a successful scenario that explains both the GeV-TeV spectral and intensity evolution of the early afterglow as well as the X-ray and optical evolution of the late afterglow. We find that a fully adiabatic evolution of the fireball in a homogeneous medium with non-constant values of ε_e and ε_B moderately changing with time is remarkably successful in explaining the overall afterglow within a single evolution scenario.
§ SPECTRAL ANALYSIS
Matching with good accuracy the results of our modelling for the very early and early afterglow phases with the multi-band spectra and intensity evolution was our first goal. This task required a global fitting of the available datasets, mainly driven by the unique set of simultaneous spectral data in the GeV-TeV ranges as provided by AGILE and LHAASO.
Figures <ref> and <ref> show the entire set of available spectral data and the best theoretical modelling. <Ref> provides the parameters of the modelling for the relevant time intervals. The optimized set of parameters for our model, including their time evolution, was obtained by a global analysis of the very early and early phases lasting up to about t' = 10^4 s. We also extended our analysis to late times, as discussed below.
§.§ The very early afterglow phase
In the first afterglow phase, the earliest spectral data available for this GRB are given by the two LHAASO spectra within t' = [5:14] s and [14:22] s (no AGILE GeV data are available for these intervals because of exposure and saturation).
In Figures <ref> and <ref>, we show the spectral energy distributions (SEDs) of the model, which are in agreement with both the unabsorbed and the EBL-absorbed LHAASO data. Considering the model parameters adopted for these two intervals reported in <Ref>, it is interesting to note that a successful spectral modelling requires ε_B ≪ε_e and γ_min≪γ_c since the beginning of the afterglow.
Furthermore, in <Ref> we add the AGILE MCAL data between t' = [-15:-3] s, just before the main burst after which the MCAL instrument was saturated. Although they are not simultaneous with the GeV-TeV data, they confirm that the prompt MeV emission was significantly more intense than the afterglow synchrotron emission predicted by the model. Indeed, we interpret the MeV emission as an additional component related with the prompt phase of the GRB, and not to the afterglow. Spectra obtained a few seconds later by the Konus-Wind instrument in <cit.> between t' = [-1:7] s support this interpretation. As indicated by Fermi-GBM <cit.> and Konus-Wind data the hard X-ray/MeV emission progressively decreased with time allowing the underlying afterglow emission to emerge in this energy range after several hundreds of seconds.
§.§ The early afterglow phase: simultaneous GeV-TeV data
GeV and TeV gamma-ray spectral data during the early afterglow (T_1, T_2, and T_3 intervals) between t' = [20: 10^4] s turn out to be crucial for our analysis.
Figures <ref>-<ref> show such datasets and our optimized spectral modelling during this phase.
In our interpretation, the quantities ε_e and ε_B are required to change in time for a precise fitting between model and spectral data. In intervals T_1-T_3, the quantity ε_e slightly increases from 2.0· 10^-2 to 3.8 · 10^-2, and the quantity ε_B diminishes from 6· 10^-6 to
3· 10^-7. This time dependence reflects the evolution of the physical conditions determining the particle acceleration and magnetic field efficiencies. Keeping the quantities ε_e and ε_B constant throughout the early afterglow phase leads to a significant discrepancy between model and data. The model with parameters given in <Ref> provides a viable framework: the agreement between data and model is satisfactory despite the stringent spectral constraints. As we will see, this approach is successful also in the subsequent phases of the afterglow.
The GeV-TeV component is interpreted as inverse Compton emission. Very interestingly, during these GeV-TeV simultaneous observations lasting from about 20 to more than 600 s, the position of the inverse Compton peak is almost constant in time. This represents an important constraint for the modeling, as we will discuss below.
In the T_2 interval in <Ref>, we also show simultaneous AGILE MCAL spectral data. As mentioned earlier, we confirm that they belong to the decaying prompt component, and that no information on the synchrotron afterglow component in this energy range can be obtained at this time.
§.§ The early afterglow phase: simultaneous X-ray and GeV data
New important spectral data become available at t' ≥ 3000 s during the T_4 interval - shown in <Ref> - in the X-ray (in t' = [3174:4274] s) and hard X-ray (in t' = [3674:4301] s) ranges.
These data are the first set of observations of the Swift telescope, and are simultaneous with the AGILE observations. For this reason, they represent an important opportunity to constrain - for the first time in the GRB 221009A afterglow - the crucial synchrotron peak.
The Swift-XRT instrument reported a hard photon index of 1.61 ± 0.02, while the Swift-BAT instrument reported a softer photon index of 2.13 ± 0.19. This indicates a spectral break between 10 and 100 keV, which then constrains the peak of the synchrotron emission and consequently the model parameters and their evolution.
Furthermore, the X-ray and hard X-ray emissions can provide useful information on the relation between the overall synchrotron component vs the inverse Compton emission.
During the T_4 interval, no strictly simultaneous TeV data are available. However, the very last TeV point in the LHAASO lightcurve corresponds to the beginning of this time interval and substantially confirms our modelling at these times.
§ LIGHTCURVE MODELLING
<Ref> presents the results of our model with the computation of the GRB 221009A afterglow lightcurves in different energy intervals spanning the optical, X-ray, GeV, and TeV bands. The observed lightcurves, described in Section <ref>, are also shown.
Overall, a remarkable agreement between our fireball model and the data is verified over a very long time interval.
At early times t' < 10^4 s, both the GeV and TeV gamma-ray data evolution are well-matched by our model.
Notably, no jet breaks can be identified either in our model of the TeV emission (which continues with the same time evolution up to t' ∼ 10^4 s and then decays) or in the AGILE GeV data (see Discussion). The early X-ray data are also well described by our model, which satisfactorily connects with the second set of observations by Swift.
The calculation of the spectral and lightcurve properties of the late afterglow is dictated by the adiabatic hydrodynamic evolution of the fireball. This point is particularly relevant because, once the overall parameters of <Ref> and the time dependence of ε_e and ε_B determined during the early phases are fixed, our model predicts the spectral and lightcurve evolution of the late afterglow with no free parameters.
In the late phase at t' > 10^4 s, GeV-TeV spectral data are not available. Only optical and X-ray data can be considered, which in our scenario reflect the evolution of the synchrotron component. The X-ray flux evolution is well matched by our model also in this phase, predicting the observed temporal flux index up to t' ∼ 10^5 s.
Interestingly, after that a slight continuous curvature appears in the calculated lightcurve due to the intervening spectral break of the synchrotron peak in the same energy band, which also supports the stability of the hard photon index reported by <cit.>. This interpretation excludes the presence of a flux steepening due to a jet break, and it is in agreement with the absence of a similar curvature in the late optical lightcurve.
Furthermore, it is interesting to notice that - in this late phase at about 10^4 < t' < 10^5 s - our model predicts a TeV flux within the reach of current and future gamma-ray observatories, which makes these long GRB events potentially detectable even about 1 day after the initial triggers.
In the very late phase t' > 10^5 s, in our model the exponential spectral break of the synchrotron emission at γ_max continues
its transit through the X-ray band and predicts the softening of the integral flux intensity evolution. After t' = 10^6 s, the model starts to depart from the observed X-ray data in the latest phase of the GRB emission. The X-ray lightcurve at such late times may also be influenced by other processes, e.g., a late time energy injection increasing γ_max or a shallow spectral break rather than the exponential break applied here.
Indeed, this spectral effect on the lightcurve is not seen in the optical band, which is accurately predicted at t' > 10^5 s, in agreement with the data. However, when taking into consideration earlier data at t' < 10^5 s <cit.>, our model tends to overproduce the optical emission. This situation is not uncommon in long GRB fireball modelling <cit.>. The early optical emission may be influenced by a reverse shock providing enhanced optical flux in the first phase, which is then softened and later modified by the transit of the characteristic frequencies <cit.> or by the contribution of an emerging supernova.
In <Ref> we show a comparison between model and data of the temporal index β assuming that the flux lightcurve behaves as ∝ t^-β. The model shows a satisfactory agreement with the observed data.
§ DISCUSSION
In this work, our theoretical model has been computed by keeping constant the largest number of parameters during the GRB afterglow evolution in order to deduce the physical constraints.
The ISM density profile has been kept constant and homogeneous to n_0 = 0.8 cm^-3 throughout all the phases of the GRB.
This assumption is supported also by other works and previous analyses of long GRBs <cit.>, but it may represent a simplification of the real configuration with an average density of the ISM during the early phases.
We also assume the presence of a cut-off at γ_max in the distribution of accelerated particles, which suppresses the maximum energy of the synchrotron emission, affecting both the SEDs and the lightcurve at specific times.
This physical feature can be directly investigated with the AGILE data presented in <cit.>.
We verified that at two specific time intervals t'= [47, 157] s and t'= [458, 608] s, the AGILE GRID data moderately constrain the value of γ_max≲ 4 · 10^7, which we adopt in our modelling as a constant parameter.
After these intervals, AGILE data do not constrain γ_max, which may increase in time.
In our analysis, the effect of internal γγ absorption is not detectable in the very early SEDs. Conversely, as shown in <Ref>, it marginally affects the early spectra at ∼5 TeV energies <cit.>.
This minor effect may be related to a physical detachment between the regions emitting high- and low-energy photon fields, possibly due to the specific spatial evolution of the outwardly propagating shock wave.
§.§ Shock efficiencies ε_e and ε_B
A crucial physical process in GRB afterglows is the conversion of the shock energy into non-thermal particle acceleration and magnetic field generation, characterized in this model by the two efficiencies ε_e and ε_B. These two parameters simplify the mechanisms at the core of the afterglow emission, which can be very complex. The case of GRB 221009A is ideal to study this phenomenon given the unusual burst duration and detailed spectral data available. Interestingly, we find that ε_e and ε_B cannot be constant in order to achieve a global fit of the experimental data.
In our model, we adopt the following temporal power-law evolution for the entire GRB 221009A afterglow:
ε_e ∝ t^0.19 ± 0.02 and ε_B ∝ t^-0.84 ± 0.04 ,
that constitutes one of the major results of our analysis.
Our data fitting indicates that ε_e slightly increases from 1.7% to 5% between a few seconds and t' ≃ 5 · 10^3 s. Significantly higher values of ε_e would shift the overall broad-band emission to high energies, providing a disagreement with the data.
The quantity ε_B evolves considerably over the same time intervals, decreasing from about 10^-5 to 10^-7, which is a range of values in agreement with the literature <cit.>.
It is also interesting to notice that ε_B ≪ε_e throughout the GRB afterglow evolution, which is also found in many GRBs with gamma-ray afterglows <cit.>.
These time-dependent shock efficiencies successfully describe the spectral and flux intensity data of GRB 221009A up to late times, confirming an approach that has been explored in the past also for other long GRBs <cit.>. However, their power-law time evolution may change at very late times with an increasing influence of other hydrodynamic and geometrical effects.
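As an illustration of how constraining these evolutions are, the short Python sketch below (our own, not the fitting code used in the analysis) evaluates the fitted power laws; the normalization anchor at t' = 20 s, taken from the interval T_1 values quoted earlier, is our assumption, so the printed numbers are indicative only:

t0, eps_e0, eps_B0 = 20.0, 2.0e-2, 6.0e-6  # anchor at interval T_1 (our assumption)

def eps_e(t):
    # epsilon_e ~ t^0.19 (fitted slope)
    return eps_e0 * (t / t0) ** 0.19

def eps_B(t):
    # epsilon_B ~ t^-0.84 (fitted slope)
    return eps_B0 * (t / t0) ** (-0.84)

for t in (5.0, 20.0, 600.0, 5000.0):
    print(f"t' = {t:7.0f} s   eps_e = {eps_e(t):.2e}   eps_B = {eps_B(t):.2e}")

With this anchor the sketch returns eps_e ≈ 3.8·10^-2 and eps_B ≈ 3·10^-7 at t' = 600 s, reproducing the T_3 values quoted above, while eps_B spans roughly 10^-5 to 10^-7 between a few seconds and t' ≃ 5·10^3 s.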
§.§ Cooling and evolution of the spectral peaks
Our best modelling of GRB 221009A is based on the slow-cooling regime with γ_min < γ_c. We find that the predictions of fast cooling contradict the afterglow data from the earliest times.
Additionally, it is interesting to note that during the early phases and throughout the afterglow, the synchrotron and inverse Compton peaks remain quite constant in time. We deduce this important feature from the GeV-TeV spectral data in intervals T_1-T_3, and subsequently from the X-ray and GeV data in interval T_4.
Focusing on the synchrotron cooling frequency ν_c, we notice that the prediction for constant ε_e and ε_B implies ν_c ∼ t^-1/2, which is in contradiction with observations up to t' ∼ 10^4 s. It is interesting to note that making ν_c weakly dependent on time is equivalent in our model to requiring that the combination
ν_c^-1∝Γ B^3 t^2 ∝Γ^4 ε_B^3/2 t^2
remains nearly constant.
Given the known dependence Γ(t) ∝ t^-3/8 in the adiabatic slow-cooling scenario, either a more dissipative hydrodynamics Γ(t) or a time evolution of ε_B is required in order to preserve the quasi-constancy of the synchrotron and inverse-Compton peak frequencies at early times.
In our model, we have adopted a variable ε_B according to Eq.(<ref>).
§.§ Energetics and jet breaks
Our initial isotropic-equivalent energy of the blast wave E_iso,0∼ 7· 10^55 erg is quite large,
even though similar values have been reported by other authors <cit.>.
The beaming-corrected isotropic-equivalent energy of the jet E_k is given by
E_k≃θ^2/2 E_iso,0,
where the opening angle θ is often estimated by identifying the presence of
achromatic jet breaks in the afterglow lightcurve <cit.>.
A first possible determination of a jet break has been proposed by <cit.>, reporting a steepening of the TeV gamma-ray lightcurve around t' ≃ 670 s.
The deduced opening angle, θ∼ 0.6^∘, would be rather low compared to
other GRBs <cit.>.
In this case, the beaming-corrected shock energy of GRB 221009A, E_k∼ 4· 10^51 erg, would be in agreement with the statistical distribution of long GRBs.
However, the GeV data presented in this paper do not show any significant indications of a jet break up to t' ∼ 10^4 s. Given the strong connection between GeV and TeV data, we deduce that a high-energy jet break at early times is unlikely.
Our model is also consistent with the data and does not predict an early jet break. As seen in Section <ref>, once the overall parameters in <Ref> and the time dependence of ε_e and ε_B are fixed, the spectral and light curve evolution are determined solely by the adiabatic hydrodynamic evolution of the fireball, even at very late times, within a consistent and unified scenario.
An alternative interpretation reported by <cit.> suggests that the jet break may occur in the late afterglow phase at about 10^5 s (∼ 1 day), when a steepening of the X-ray and optical fluxes occurs.
This implies a relatively large opening angle, θ > 15^∘, which would be larger than the common values in the literature <cit.>. Consequently, the corrected isotropic-equivalent energy would be reduced[This interpretation is complicated by an actually non-simultaneity of the jet break between optical and X-ray data, as indicated in <cit.>.]
to E_k∼ 2· 10^54 erg.
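As a quick arithmetic check (ours, not part of the original analysis), the beaming correction above can be evaluated for both proposed opening angles:

import math

E_iso0 = 7e55  # erg, initial isotropic-equivalent energy of the blast wave

for theta_deg in (0.6, 15.0):
    theta = math.radians(theta_deg)
    E_k = 0.5 * theta**2 * E_iso0  # E_k ~ (theta^2 / 2) E_iso,0
    print(f"theta = {theta_deg:4.1f} deg  ->  E_k ~ {E_k:.1e} erg")

The two lines print E_k ∼ 3.8·10^51 erg and ∼2.4·10^54 erg, consistent with the values quoted in the text for the early- and late-break interpretations, respectively.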
An alternative reading by <cit.> interprets the X-ray and optical bending at ∼1 day as a geometrical effect due to the shallow energy profile of a structured jet. Such a scenario would reduce the model energy requirements, describing the late X-ray and optical afterglow after ∼0.8 days. The presence of a shallow structured jet may also modify the temporal dependence of the critical frequency <cit.>. However, we do not discuss these effects in the current model, as doing so would require further exploration of parameter correlations.
Given the nature of our precisely modelled data during the early phases of the GRB 221009A afterglow, we conclude that a canonical early jet break is supported neither by the data nor by this theoretical interpretation. A jet break might occur in the later phases or might not be canonically identified, as observed in other recent major events such as GRB 130427A <cit.>, GRB 190829A <cit.>, and GRB 190114C <cit.>.
§ CONCLUSIONS
The theoretical modelling of GRB 221009A requires an extraordinary approach. We modelled the early and late phases of the GRB 221009A afterglow within the context of the relativistic blast wave plus the synchrotron self-Compton scenario. The high-quality observational data, spanning from optical to X-rays and up to GeV-TeV gamma-ray energies, provide crucial constraints for the spectral and flux modeling of this very long GRB.
It turns out that a simultaneous fitting of all the spectral and flux data is not trivial and is not achieved by any standard GRB model.
A physical model has to be confronted with a number of problematic features of the GRB 221009A afterglow, including: (a) the very precise spectral information first in the GeV-TeV range and later in the optical and X-ray bands that establishes the quasi-constancy of ν_c (rather than the standard behavior implying ν_c ∼ t^-1/2); (b) the overproduction of the GeV component in case of constant or relatively large ε_B; (c) the coherence between the X-ray, GeV and TeV lightcurves since the very early phases up to the late phases, indicating a highly constrained physical system evolving in a global way; (d) the bending of the X-ray lightcurve near t' ∼ 10^5 s without invoking a jet break; (e) overall, the challenge of providing a consistent model explaining a large number of features at very different energies and timescales.
We provide such a model, investigating a comprehensive theoretical interpretation within the framework of a relativistic expanding fireball.
Here we emphasize its interesting properties: (1) large values of the initial isotropic energy E_iso,0 and of the initial bulk Lorentz factor Γ_0; (2) a homogeneous density profile with n(r) ≡ n_0 = 0.8 cm^-3; (3) the electron power-law index p being constant; (4) a regime of adiabatic slow cooling throughout the entire afterglow; (5) a shock energy being progressively transferred to accelerated electrons with increasing efficiency (reflecting a very fundamental property of the physics of afterglow particle acceleration in this GRB); (6) a relatively small value of ε_B varying from 10^-5 to 10^-7; (7) the relevance of a maximum energy of the electron distribution γ_max that is constrained by GeV-TeV data during the early phases of the afterglow.
We also notice that the overall MeV-GeV-TeV datasets show the transition from a prompt-dominated phase to an afterglow-dominated phase up to t' ∼ 600 s, with a rare clarity
compared to other GRBs. Until this time, the activity of the inner engine contributing to the MeV emission overwhelms, in the X-ray energy band, the radiation emitted by the optically-thin region of the afterglow. On the other hand, in the very early afterglow phase the physical high-energy component, probably attenuated at first by internal γγ absorption on the prompt photon fields, is then released and produces gamma rays as the opacity decreases.
This is supported by the smoothness of the very early GeV-TeV lightcurve compared to simultaneous X-ray and MeV lightcurves, suggesting a different emission origin for these energy bands, the former representing an afterglow emission and the latter the turbulent activity of the inner engine leading to the prompt emission.
A key outcome of our analysis of the exceptional GRB 221009A is that, while the shock becomes progressively more efficient at energizing the non-thermal population of radiating particles, the magnetic field energy density significantly decreases over time. This outcome in energy transfer and magnetic field generation may also be influenced by hydrodynamical effects acting on Γ(r), beyond those considered in the model.
The time evolution of the microphysical parameters, supported by a precise fitting between model and multi-frequency spectral and intensity data, is an important indication for future theoretical investigations of GRB 221009A and of other long GRBs.
Acknowledgments
AGILE is a mission of the Italian Space Agency (ASI), with scientific and programmatic participation of INAF (Istituto Nazionale di Astrofisica) and INFN (Istituto Nazionale di Fisica Nucleare).
This work was partially supported by the grant Addendum n.7 - Accordo ASI-INAF n. I/028/12/0 for the AGILE project. We are grateful to Marco Romani for his contributions on the subject of this paper that he presented in his Master's thesis. We thank an anonymous referee for stimulating comments on the manuscript.
§ DETAILS ON AGILE DATA
§ CHARACTERISTIC FREQUENCIES OF THE MODEL
|
http://arxiv.org/abs/2409.03737v1 | 20240905175057 | Reprogrammable sequencing for physically intelligent under-actuated robots | [
"Leon M. Kamp",
"Mohamed Zanaty",
"Ahmad Zareei",
"Benjamin Gorissen",
"Robert J. Wood",
"Katia Bertoldi"
] | cs.RO | [
"cs.RO",
"cond-mat.other"
] |
1J.A.Paulson School of Engineering and Applied Sciences, Harvard University, USA.
2Department of Mechanical Engineering, KULeuven and Flanders Make, Belgium.
§ ABSTRACT
Programming physical intelligence into mechanisms holds great promise for machines that can accomplish tasks such as navigation of unstructured environments while utilizing a minimal amount of computational resources and electronic components. In this study, we introduce a novel design approach for physically intelligent under-actuated mechanisms capable of autonomously adjusting their motion in response to environmental interactions. Specifically, multistability is harnessed to sequence the motion of different degrees of freedom in a programmed order. A key aspect of this approach is that these sequences can be passively reprogrammed through mechanical stimuli that arise from interactions with the environment. To showcase our approach, we construct a four degree of freedom robot capable of autonomously navigating mazes and moving away from obstacles. Remarkably, this robot operates without relying on traditional computational architectures and utilizes only a single linear actuator.
Reprogrammable sequencing for physically intelligent under-actuated robots
Leon M. Kamp1, Mohamed Zanaty1, Ahmad Zareei1, Benjamin Gorissen2, Robert J. Wood1, Katia Bertoldi1
5th September 2024
========================================================================================================
§ INTRODUCTION
Autonomous interactions with unstructured environments pose significant challenges for robots, often necessitating complex perception and control systems, and a multitude of sensors and actuators. However, investigations in recent years have demonstrated that incorporating "physical intelligence" directly into the body of a robot can result in autonomous responses to environmental cues while utilizing fewer sensors, actuators, and controllers <cit.>. It has been shown that under-actuated mechanisms provide a promising framework for realizing physically intelligent systems that can use additional degrees of freedom to adapt to changing boundary conditions. For example, under-actuated mechanisms with specific regions of tuned compliance have resulted in cockroach-inspired robots capable of robust locomotion through uneven terrain <cit.> and robotic grippers that can efficiently grasp objects of different sizes, shapes, and stiffness levels using a single actuator <cit.>. Furthermore, programmable compliance in under-actuated mechanisms has been shown to simplify control in limbless robots <cit.>. However, this passive compliance does not provide the control necessary for coordinated actuation between joints. For example, although studies have demonstrated that under-actuated mechanisms with selective distributed compliance can yield self-stabilizing and energy-efficient walking gaits, these mechanisms still depend on two actuators to synchronize the swing and lift degrees of freedom <cit.>. A few examples have been reported of robots capable of various gaits with just one actuator <cit.>, but they still depend on traditional computational frameworks for sensing and control.
Recently, there have been advances in harnessing multistability to transition between states with distinct programmed behaviors <cit.>. This opened new avenues for locomotion without traditional controllers.
For instance, the snap through instability of a curved strip has been utilized to create self-rolling robots capable of autonomously navigating mazes <cit.>, origami-inspired multiplexed switches have been implemented to realize an untethered crawler that avoids obstacles <cit.>, and bistable mechanical valves have played a crucial role in the development of pneumatic circuits that control the locomotion of soft-legged robots <cit.>. Extending beyond locomotion, arrays of multiple bistable units have facilitated the encoding of mechanical logic and metamaterials with reprogrammable properties <cit.>. Furthermore, by manipulating the energy landscapes of individual degrees of freedom, predetermined transitions between states have been demonstrated <cit.>. The theoretical framework describing these transitions has also been established <cit.>, laying the groundwork for designing multistable corrugated sheets capable of navigating intricate transition pathways <cit.>. Crucially, it has also been shown that these pathways can be finely adjusted by modulating the systems boundary conditions <cit.>. Nonetheless, while this presents an exciting opportunity for creating robotic systems responsive to environmental inputs, concrete functional applications of this capability have yet to be demonstrated.
In this work, we propose a novel design strategy for physically intelligent under-actuated mechanisms with reprogrammable behaviors and harness it to realize robots capable of adapting their gait in response to mechanical interactions with the environment. More specifically, we exploit geometric nonlinearities combined with elasticity to create reprogrammable mechanisms with multi-welled energy landscapes that yield a variety of minimum energy pathways with a single actuator input. Furthermore, we demonstrate the tunability of the energy landscape using mechanical inputs to autonomously reprogram the motion sequence of the mechanism. As a practical demonstration of this framework, we create a four-degree-of-freedom robot capable of navigating mazes and avoiding obstacles—all without the need for computational intelligence and utilizing only a single actuator.
§ RESULTS
Characterization of the unit cell.
Our unit cell with one degree of freedom consists of a parallelogram four-bar mechanism comprising two rectangular blocks connected by a pair of identical levers of length r and a spring (rubber band). As shown in Fig. <ref>A, the two ends of the rubber band are anchored at points defined by the vectors 𝐩 and 𝐪, which originate from the midpoints between the joints of the levers. The range of rotation of the levers is constrained to the interval [-θ_c, θ_c] by contact between the blocks. We define the configurations where θ=-θ_c and θ_c as state 0 and state 1, respectively (with θ representing the angle between the levers and the horizontal direction). Modeling the rubber band as a linear spring with stiffness k and rest length ℓ_0, the energy landscape of the unit cell is given by
ℰ(θ,𝐩,𝐪) = k/2[ℓ(θ,𝐩,𝐪) - ℓ_0]^2,
with
ℓ(θ,𝐩,𝐪)=√((rcosθ + q_x -p_x)^2 + (r sinθ + q_y - p_y)^2),
where (p_x, p_y) and (q_x, q_y) denote the x and y components of the 𝐩 and 𝐪 vectors.
To explore how the location of the anchor points of the rubber band affects the mechanical response, we consider six unit cell designs with varying 𝐪, but constant θ_c=π/4, r=√(2) cm, p_x= -1 cm, p_y= 0 cm, k = 28.5 N/m and ℓ_0= 10 mm (Fig. <ref>C). All units are placed in
state 0 and moved to state 1 by applying an upward displacement, u, of magnitude u_max = 2r sinθ_c to their outer (right) block. In Fig. <ref>D and <ref>E we report the evolution of the vertical reaction force on the outer block, F, and the difference in energy between the current configuration and state 0, Δℰ = ℰ(θ) - ℰ(-θ_c), as a function of θ. Three key features emerge from these plots. Firstly, for all units the reaction force monotonically decreases as a function of θ, leading to a concave-down energy landscape. Secondly, the location of the anchoring points of the rubber band strongly affects the energy cost for switching from state 0 to state 1. This influence opens up possibilities for creating reprogrammable sequences within arrays of coupled units.
Thirdly, all of the units, except for 𝐪 = [0,-15] mm (purple), are bistable. These cases display two energy minima at θ=±θ_c separated by an energy
barrier. Note that, since ℰ is characterized by a local maximum at
θ^max=arctan(p_y-q_y/p_x-q_x),
any units with |θ^max|< θ_c will display bistability and be characterized by two local energy minima located at state 0 and state 1. Conversely, when 𝐩 and 𝐪 are selected so that |θ^max|> θ_c, we expect the units to have only one stable configuration (either at θ=-θ_c or at θ=θ_c).
To validate these predictions, we built a prototype comprising laser-cut acrylic blocks and levers connected by pin joints with ball bearings. In our experiments we clamp the left block and pull the right block from state 0 to state 1, while recording the reaction force using an Instron 5969 equipped with a 500 N load cell. In Fig. <ref>D and E we compare the experimentally measured forces and elastic energy (obtained by numerically integrating the measured reaction force) to numerical predictions for the six unit cell designs, with agreement sufficient to validate our simple analytical model.
Serial coupling of two unit cells. Next, with the goal of encoding deformation sequences, we turn our attention to a mechanism comprised of two unit cells connected in series: an inner unit on the left and an outer unit on the right (Fig. <ref>A). The state of this system is characterized by the pair of unit states (α^inα^out), with α^in, α^out∈{0,1}, of the inner and outer unit, respectively. Starting at state (00), we apply an upward vertical displacement of magnitude 2 u_max to the outermost block, causing a transition to state (11), and then return to state (00) by applying a downward displacement of identical magnitude. The sequence of transitions connecting states (00) and (11) can be deliberately controlled by manipulating the energy landscape of the two units. By carefully selecting the anchoring points of the rubber bands, defined by the vectors (𝐩^in, 𝐪^in) and (𝐩^out, 𝐪^out), we can engineer directed pathways that cyclically traverse all four possible states under a linear input (blue and red arrows in Fig. <ref>B). Additionally, we can create undirected pathways that access the states in the same order when applying upward and downward displacements (green and orange arrows in Fig. <ref>B). Note that for the considered mechanism, it is impossible to switch two states simultaneously (e.g., transitioning from state (01) to state (10)).
To determine the pathway followed by the system, we first define its
elastic energy, which is given by the sum of the elastic energy of the individual units
ℰ_tot=ℰ(θ^in, 𝐩^in, 𝐪^in)+ℰ(θ^out, 𝐩^out, 𝐪^out),
where θ^in and θ^out represent the angles between the horizontal direction and the levers of the inner and outer unit, respectively. When the displacement of the outer unit is controlled, the response of the system is characterized by one independent variable and the constraint
u=r(sinθ^in+sinθ^out+2 sinθ_c), so that u=0 in state (00).
To identify the path followed by the structure, we incrementally increase u starting from the initial configuration (defined by θ^in=θ^out=-θ_c) and locally minimize the elastic energy, ℰ_tot, using a quasi-Newton method (see Supporting Information for details).
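A minimal implementation of this path-following scheme (our sketch, reusing the energy() helper and the parameters defined in the previous snippet; SciPy's L-BFGS-B plays the role of the quasi-Newton minimizer) reads:

from scipy.optimize import minimize

def E_tot(x, u, p_in, q_in, p_out, q_out):
    th_in = x[0]
    # resolve the constraint above for theta_out
    s = u / r - 2 * np.sin(theta_c) - np.sin(th_in)
    s = np.clip(s, -np.sin(theta_c), np.sin(theta_c))  # keep theta_out admissible
    th_out = np.arcsin(s)
    return energy(th_in, p_in, q_in) + energy(th_out, p_out, q_out)

def trace_path(p_in, q_in, p_out, q_out, steps=400):
    u_max, th_in, path = 2 * r * np.sin(theta_c), -theta_c, []
    for u in np.linspace(0.0, 2 * u_max, steps):       # load from (00) to (11)
        res = minimize(E_tot, x0=[th_in], args=(u, p_in, q_in, p_out, q_out),
                       bounds=[(-theta_c, theta_c)], method="L-BFGS-B")
        th_in = float(res.x[0])                        # warm start for next step
        path.append((u, th_in))
    return path

Warm-starting x0 from the previous increment implements the local (rather than global) minimization described above; the clipping of s is a simplification that keeps θ^out admissible at the ends of the stroke.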
In Fig. <ref>C we focus on a mechanism comprised of two unit cells identical to those considered in Fig. <ref>. We choose 𝐩^in=-𝐪^in=𝐩^out=[-10, 0] mm, and systematically investigate the effect of 𝐪^out on the sequence of transitions. We find that large values of q_y^out tend to promote undirected transition sequences (orange and green shaded areas in Fig. <ref>C), whereas large values of q_x^out tend to lead to directed transition sequences (red and blue shaded areas in Fig. <ref>C). In Figs. <ref>D-<ref>G we examine four configurations, with 𝐪^out=[0,0], [20,0], [10,-15], [20,15] mm, that display the four supported sequences of transitions (corresponding to the diamond markers in Fig. <ref>C). For each of them we report the energy landscape as a function of θ^in and θ^out, alongside the measured vertical reaction force at the outer unit as a function of the applied displacement. For all configurations, we observe that the minimum energy path results in sequential motions, where the units transition between state 0 and state 1 one after the other, rather than simultaneously. This is due to the concave-down nature of the energy landscape of the individual units, which places the energy minima predominantly along the boundary. For this reason, the order of these sequential transitions can be determined from the reaction forces of the two individual units at state 0, F_0=F(-θ_c), and at state 1, F_1=F(θ_c). More specifically, undirected pathways that traverse state (01) are realized when F_0^in>F_0^out and F_1^in>F_1^out, while those transitioning through state (10) require F_0^in<F_0^out and F_1^in<F_1^out. Conversely, clockwise directed sequences necessitate that F_0^in<F_0^out and F_1^in>F_1^out, while counterclockwise sequences require F_0^in>F_0^out and F_1^in<F_1^out. We also note that for all the configurations considered here that support undirected sequences, the system remains exclusively along the boundaries of the energy landscape. In contrast, a snapping instability is triggered for all configurations supporting directed sequences when approaching states (11) and (00) (see Supporting Information for details).
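These four inequalities translate directly into a small classifier (our restatement of the conditions above):

def sequence_type(F0_in, F0_out, F1_in, F1_out):
    # classify the transition sequence from the endpoint reaction forces
    if F0_in > F0_out and F1_in > F1_out:
        return "undirected via (01)"
    if F0_in < F0_out and F1_in < F1_out:
        return "undirected via (10)"
    if F0_in < F0_out and F1_in > F1_out:
        return "directed, clockwise"
    return "directed, counterclockwise"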
Finally, it is noteworthy that the combined energy landscape of all the two-unit mechanisms examined here is multi-welled, even in situations where the outer units are monostable (represented by lightly shaded regions in Figure <ref>C).
Physically intelligent under-actuated robot.
Next, we demonstrate how to encode various transition sequences in an under-actuated three-legged robot to realize multiple modes of locomotion. This robot controls four degrees of freedom with a single actuator and can be physically reprogrammed to change gait. As shown in Fig. <ref>A, the robot is comprised of a pair of the two-unit mechanisms described in Fig. <ref>, two V-shaped leg mechanisms, and a body with a stepper motor, battery, controller, and a large T-shaped foot for stability. The innermost block of each two-unit mechanism is anchored to the rigid body, while the stepper motor drives the outermost block back and forth, simultaneously cycling the two mechanisms between their (00) and (11) states. We connect each V-shaped leg mechanism to both the middle and outermost blocks of the corresponding two-unit mechanism. As a result, shifting the outer unit from state 0 to state 1 increases the distance between the connection points of the leg, raising the foot. Conversely, a transition from state 1 to state 0 lowers the foot. Moving the inner unit from state 0 to state 1 propels the entire mechanism forward, while transitioning from state 1 to state 0 pushes the mechanism backward (Fig. <ref>C). Therefore, the sequence of transitions (00) → (10) → (11) → (01) →(00) leads to a forward step, whereas (00) → (01) → (11) → (10) →(00) results in a backward step (Fig. <ref>C).
We carefully select the anchor points of the rubber bands to ensure that the motion of the leg follows the desired sequence of transitions. It is important to recognize that the robot's response is also influenced by external factors, unlike a standalone mechanism, where the energy landscape is solely determined by the springs (i.e., their stiffness, initial length, and anchoring points). Specifically, we need to account for the effects of friction on the inner unit and gravity on the outer unit (see Supporting Information for details). In our robot, we observe that the contribution of these forces is significant enough to affect the transition map of Fig. <ref>C. For instance, while a standalone mechanism with anchor points defined by 𝐩^in=[-10, 0] mm, 𝐪^in=[10, 0] mm, and 𝐩^out=[-10, 0] mm achieves a sequence leading to a forward step for 𝐪^out=[15, 0] mm (as indicated by the blue region in Fig. <ref>C), for the robot this is only feasible when 𝐪^out is adjusted to [20, 0] mm (see Video S3 and Supporting Information for details). Conversely, the sequence leading to a backward step is less affected by friction, as contact with the ground is only made in the second part of the sequence (i.e., (10) → (00)). Therefore, a backward step can be realized by selecting 𝐪^out=[5, 0] mm (see Video
S3 and Fig. S13B).
In Fig. <ref>D, we show the trajectory followed by the robot for different choices of 𝐪^out. As expected, we see that the robot moves forward for 𝐪^out=[20, 0] mm (left), and moves backward for 𝐪^out=[5, 0] mm (center). Further, a turning motion is achieved by programming one leg to execute a forward step while the other performs a backward step.
The difference between a forward, backward, and turning motion is also captured in the state of the mechanism. As shown in Fig. <ref>C, the leg with 𝐪^out=[20, 0] mm is in state (01) after completing 7 3/4 input cycles, whereas the one with 𝐪^out=[5, 0] mm is in state (10).
Moreover, in Figs. <ref>E-F, we present the evolution of the displacement, d, and rotation, ϕ, of the robot for the three considered configurations of the rubber bands. We observe that for the robot programmed to move backward, d/u_max = 0.96, indicating that the displacement provided by the linear actuator is almost entirely converted into movement of the robot. However, for the forward motion, d/u_max = 0.71. We attribute this reduction to the snapping of the mechanism as it approaches the (11) state, exerting a force that pushes the robot in the opposite direction. Lastly, we find that, for turning motion, d is negligible and ϕ≈π/12 rad per cycle.
Gait adaptation in response
to mechanical interactions with the environment.
In Fig. <ref> we show that a small linear displacement of an anchor point can completely reverse the gait of a leg. We harness this observation to realize a robot capable of adapting its gait in response
to mechanical interactions with the environment. For this, we attach a module with two antennas at the front of the robot (Fig. <ref>A). Each antenna consists of two levers that connect to the inner rubber band of the two-unit mechanism on the opposite side of the robot (Fig. <ref>B). This makes it possible to change 𝐩^in when an antenna comes into contact with an obstacle. More specifically, the movement of the left (right) lever alters 𝐩^in of the right (left) mechanism. To avoid an obstacle, we tune the rubber bands so that this change in 𝐩^in is enough to make the robot turn away from the obstacle. To allow the levers to move with a low contact force and therefore increase their sensitivity, we use rubber bands with rest length ℓ_0 = 22 mm and stiffness k= 47 N/m. Further, we choose 𝐪^in = [10,0] mm, 𝐩^out = [-10,0] mm, and 𝐪^out = [20,0] mm and find that the robot moves forward in the absence of contact if the antennas are tuned to hold the anchoring point of the inner rubber bands at a position defined by 𝐩^in = [-7,0] mm (Fig. <ref>C). When the right antenna comes in contact with an obstacle (here, a wooden cylinder with a diameter of 50 mm), it pushes 𝐩^in of the left mechanism to ≈[-11,0] mm. This initiates a backwards left-turning motion until the antennas lose contact with the object. Subsequently, 𝐩^in springs back to [-7,0] mm, and the robot resumes its forward motion. The robot's ability to move away from mechanically sensed obstacles is observable for a flat wall and a cylindrical column across a wide range of approach angles and distances from the center of the robot (see Video S4). As a demonstration of the robustness of the observed behavior, in Fig. <ref>D we depict the trajectory of the robot navigating through an environment obstructed by three walls. The robot successfully traverses this environment and reaches the exit without electronic sensing, using a single actuator (Video S4).
Conclusions.
To summarize, we have shown that a multistable energy landscape enables us to create forward and backward locomotion gaits with a single quasi-static linear input. The tunable nature of the energy landscape at each degree of freedom makes it possible to realize a robot capable of adapting its gait
in response to mechanical interactions with the environment without the need for electronic feedback and control. Though our focus in this study has been on mechanisms with two degrees of freedom, our approach can readily extend to systems with a greater number of degrees of freedom, thereby expanding the range of possible states and transitions. This consequently allows for more complex control strategies and mechanical computing <cit.>. Furthermore, while our study has centered on a platform consisting of rigid blocks connected by levers and elastic springs, multistable energy landscapes can also be realized using beams and shells, which allow for monolithic fabrication <cit.>. This opens up avenues for potential integration into robotic systems with size, weight or material constraints for on-board control, such as soft robots and microrobots <cit.>. Finally, given the potential for triggering snapping instabilities in multistable mechanisms, there is an opportunity for leveraging this phenomenon for rapid movements such as jumping over obstacles <cit.>.
Altogether, our platform provides a foundation for designing physically intelligent autonomous robots that operate with minimal reliance on electronic controllers, sensors, and actuators.
§ MATERIALS AND METHODS
Details of the design, materials, fabrication, testing methods and mathematical model of the unit cell and the two-unit mechanism are summarized in Section S1 of Supporting Information. The design, fabrication and analysis of the robot is discussed in Section S2 of Supporting Information.
§ ACKNOWLEDGEMENTS
The authors gratefully acknowledge support from the National Science Foundation through the Harvard MRSEC (DMR-2011754) and the ARO MURI program (W911NF-22-1-0219). The authors thank Connor McCann, Michelle Yuen, Colter Decker, Adel Djellouli, Anne Meeussen, Davood Farhadi and Giovanni Bordiga for helpful discussions.
§ DATA AVAILABILITY STATEMENT
The source code developed for the analytical model and post-processing of experimental data is available on GitHub at https://github.com/kampleon/Sequence_robot/. All the processed data are attached as Supporting Information. Raw data are available on the GitHub repository.
§ COMPETING INTERESTS
The
authors declare that they have no competing interests.
[Sitti(2021)] M. Sitti, Physical intelligence as a new paradigm, Extreme Mechanics Letters 46, 101340 (2021). https://doi.org/10.1016/J.EML.2021.101340
[Drotman et al.(2021a)] D. Drotman, S. Jadhav, D. Sharp, C. Chan, and M. T. Tolley, Electronics-free pneumatic circuits for controlling soft-legged robots, Science Robotics 6, eaay2627 (2021a).
[Clark et al.(2001)] J. E. Clark, J. G. Cham, S. A. Bailey, E. M. Froehlich, P. K. Nahata, R. J. Full, and M. R. Cutkosky, Biomimetic design and fabrication of a hexapedal running robot, Proceedings - IEEE International Conference on Robotics and Automation 4, 3643 (2001). https://doi.org/10.1109/ROBOT.2001.933183
[Dollar and Howe(2010)] A. M. Dollar and R. D. Howe, The highly adaptive SDM hand: Design and performance evaluation, International Journal of Robotics Research 29, 585 (2010). https://doi.org/10.1177/0278364909360852
[Odhner et al.(2014)] L. U. Odhner, L. P. Jentoft, M. R. Claffee, N. Corson, Y. Tenzer, R. R. Ma, M. Buehler, R. Kohout, R. D. Howe, and A. M. Dollar, A compliant, underactuated hand for robust manipulation, International Journal of Robotics Research 33 (2014). https://doi.org/10.1177/0278364913514466
[Catalano et al.(2014)] M. G. Catalano, G. Grioli, E. Farnioli, A. Serio, C. Piazza, and A. Bicchi, Adaptive synergies for the design and control of the Pisa/IIT SoftHand, International Journal of Robotics Research 33 (2014). https://doi.org/10.1177/0278364913518998
[Wang et al.(2023)] T. Wang, C. Pierce, V. Kojouharov, B. Chong, K. Diaz, H. Lu, and D. I. Goldman, Mechanical intelligence simplifies control in terrestrial limbless locomotion, Science Robotics 8 (2023). https://www.science.org/doi/10.1126/scirobotics.adi2243
[Badri-Spröwitz et al.(2022)] A. Badri-Spröwitz, A. Aghamaleki Sarvestani, M. Sitti, and M. A. Daley, BirdBot achieves energy-efficient gait with minimal control using avian-inspired leg clutching, Science Robotics 7, eabg4055 (2022).
[Iida and Pfeifer(2004)] F. Iida and R. Pfeifer, Cheap rapid locomotion of a quadruped robot: Self-stabilization of bounding gait, Proceedings of the 8th International Conference on Intelligent Autonomous Systems (IAS-8) (2004).
[Zarrouk and Fearing(2014)] D. Zarrouk and R. S. Fearing, 1STAR, a one-actuator steerable robot (2014).
[Hariri et al.(2019)] H. H. Hariri, G. S. Soh, S. Foong, and K. L. Wood, A highly manoeuvrable and untethered under-actuated legged piezoelectric miniature robot (2019).
[Noji et al.(2022)] S. Noji, S. Nansai, N. Kamamichi, and H. Itoh, Modeling and control of a lizard-inspired single-actuated robot, IEEE Robotics and Automation Letters 7 (2022). https://doi.org/10.1109/LRA.2022.3171919
[Feshbach et al.(2023)] D. Feshbach, X. Wu, S. Vasireddy, L. Beardell, B. To, Y. Baryshnikov, and C. Sung, CurveQuad: A centimeter-scale origami quadruped that leverages curved creases to self-fold and crawl with one motor (2023).
[Cao et al.(2021)] Y. Cao, M. Derakhshani, Y. Fang, G. Huang, and C. Cao, Bistable structures for advanced functional systems (2021). https://doi.org/10.1002/adfm.202106231
[Xu et al.(2023)] R. Xu, C. Chen, J. Sun, Y. He, X. Li, M. H. Lu, and Y. Chen, The design, manufacture and application of multistable mechanical metamaterials - a state-of-the-art review, International Journal of Extreme Manufacturing 5 (2023). https://doi.org/10.1088/2631-7990/acf96a
[Osorio et al.(2022)] J. C. Osorio, H. Morgan, and A. F. Arrieta, Programmable multistable soft grippers (2022).
[Jin et al.(2020)] L. Jin, R. Khajehtourian, J. Mueller, A. Rafsanjani, V. Tournat, K. Bertoldi, and D. M. Kochmann, Guided transition waves in multistable mechanical metamaterials, Proceedings of the National Academy of Sciences of the United States of America 117 (2020). https://doi.org/10.1073/pnas.1913228117
[Shan et al.(2015)] S. Shan, S. H. Kang, J. R. Raney, P. Wang, L. Fang, F. Candido, J. A. Lewis, and K. Bertoldi, Multistable architected materials for trapping elastic strain energy, Advanced Materials 27, 4296 (2015).
[Hanna et al.(2014)] B. H. Hanna, J. M. Lund, R. J. Lang, S. P. Magleby, and L. L. Howell, Waterbomb base: A symmetric single-vertex bistable origami mechanism, Smart Materials and Structures 23 (2014). https://doi.org/10.1088/0964-1726/23/9/094009
[Melancon et al.(2021)] D. Melancon, B. Gorissen, C. J. García-Mora, C. Hoberman, and K. Bertoldi, Multistable inflatable origami structures at the meter-scale, Nature, in press (2021).
[Zhao et al.(2022)] Y. Zhao, Y. Chi, Y. Hong, Y. Li, S. Yang, and J. Yin, Twisting for soft intelligent autonomous robot in unstructured environments, Proceedings of the National Academy of Sciences of the United States of America 119, e2200265119 (2022). https://doi.org/10.1073/PNAS.2200265119
[Yan et al.(2023)] W. Yan, S. Li, M. Deguchi, Z. Zheng, D. Rus, and A. Mehta, Origami-based integration of robots that sense, decide, and respond, Nature Communications 14, 1 (2023). https://doi.org/10.1038/s41467-023-37158-9
[Drotman et al.(2021b)] D. Drotman, S. Jadhav, D. Sharp, C. Chan, and M. T. Tolley, Electronics-free pneumatic circuits for controlling soft-legged robots, Science Robotics 6, 2627 (2021b). https://doi.org/10.1126/SCIROBOTICS.AAY2627
[Kuppens et al.(2021)] P. R. Kuppens, M. A. Bessa, J. L. Herder, and J. B. Hopkins, Monolithic binary stiffness building blocks for mechanical digital machines, Extreme Mechanics Letters 42 (2021). https://doi.org/10.1016/j.eml.2020.101120
[Waheed et al.(2020)] U. Waheed, C. W. Myant, and S. N. Dobson, Boolean AND/OR mechanical logic using multi-plane mechanical metamaterials, Extreme Mechanics Letters 40 (2020). https://doi.org/10.1016/j.eml.2020.100865
[Preston et al.(2019)] D. J. Preston, H. J. Jiang, V. Sanchez, P. Rothemund, J. Rawson, M. P. Nemitz, W.-K. Lee, Z. Suo, C. J. Walsh, and G. M. Whitesides, A soft ring oscillator, Science Robotics 4, eaaw5496 (2019).
[Chen et al.(2021)] T. Chen, M. Pauly, and P. M. Reis, A reprogrammable mechanical metamaterial with stable memory, Nature 589 (2021). https://doi.org/10.1038/S41586-020-03123-5
[Hyatt and Harne(2023)] L. P. Hyatt and R. L. Harne, Programming metastable transition sequences in digital mechanical materials, Extreme Mechanics Letters 59 (2023). https://doi.org/10.1016/j.eml.2023.101975
[Melancon et al.(2022)] D. Melancon, A. E. Forte, L. M. Kamp, B. Gorissen, and K. Bertoldi, Inflatable origami: Multimodal deformation via multistability, Advanced Functional Materials 32 (2022). https://doi.org/10.1002/ADFM.202270196
[Van Raemdonck et al.(2023)] B. Van Raemdonck, E. Milana, M. De Volder, D. Reynaerts, and B. Gorissen, Nonlinear inflatable actuators for distributed control in soft robots, Advanced Materials 35, 2301487 (2023).
[Sun et al.(2024)] Z. Sun, T. Jiang, Z. Wang, P. Jiang, Y. Yang, H. Li, T. Ma, and J. Luo, Soft robotic finger with energy-coupled quadrastability, Soft Robotics 11 (2024). https://doi.org/10.1089/soro.2022.0242
[van Hecke(2021)] M. van Hecke, Profusion of transition pathways for interacting hysterons, Physical Review E 104, 054608 (2021). https://doi.org/10.1103/PHYSREVE.104.054608
[Bense and van Hecke(2021)] H. Bense and M. van Hecke, Complex pathways and memory in compressed corrugated sheets, PNAS 118, e2111436118 (2021). https://doi.org/10.1073/pnas.2111436118
[Yasuda et al.(2021)] H. Yasuda, P. R. Buskohl, A. Gillman, T. D. Murphey, S. Stepney, R. A. Vaia, and J. R. Raney, Mechanical computing (2021). https://doi.org/10.1038/s41586-021-03623-y
[Gou et al.(2021)] Y. Gou, G. Chen, and L. L. Howell, A design approach to fully compliant multistable mechanisms employing a single bistable mechanism, Mechanics Based Design of Structures and Machines 49 (2021). https://doi.org/10.1080/15397734.2019.1707685
[ten Wolde and Farhadi(2024)] M. A. ten Wolde and D. Farhadi, A single-input state-switching building block harnessing internal instabilities, Mechanism and Machine Theory 196, 105626 (2024). https://doi.org/10.1016/j.mechmachtheory.2024.105626
[Gorissen et al.(2020)] B. Gorissen, D. Melancon, N. Vasios, M. Torbati, and K. Bertoldi, Inflatable soft jumper inspired by shell snapping, Science Robotics 5 (2020). https://doi.org/10.1126/SCIROBOTICS.ABB1967
[Dolev et al.(2021)] A. Dolev, M. Kaynak, and M. S. Sakar, On-board mechanical control systems for untethered microrobots, Advanced Intelligent Systems 3 (2021). https://doi.org/10.1002/aisy.202000233
[Pal et al.(2021)] A. Pal, V. Restrepo, D. Goswami, and R. V. Martinez, Exploiting mechanical instabilities in soft robotics: Control, sensing, and actuation (2021). https://doi.org/10.1002/adma.202006939
[Bandari and Schmidt(2021)] V. K. Bandari and O. G. Schmidt, System-engineered miniaturized robots: From structure to intelligence, Advanced Intelligent Systems 3 (2021). https://doi.org/10.1002/aisy.202000284
[Carlson et al.(2020)] J. Carlson, J. Friedman, C. Kim, and C. Sung, REBOund: Untethered origami jumping robot with controllable jump height (2020).
|
http://arxiv.org/abs/2409.02733v1 | 20240904140520 | Characterization of Circular-arc Graphs: III. Chordal Graphs | [
"Yixin Cao",
"Tomasz Krawczyk"
] | math.CO | [
"math.CO",
"cs.DM"
] |
§ ABSTRACT
We identify all minimal chordal graphs that are not circular-arc graphs, thereby resolving one of “the main open problems” concerning the structures of circular-arc graphs as posed by Durán, Grippo, and Safe in 2011.
The problem had been attempted even earlier, and previous efforts have yielded partial results, particularly for claw-free graphs and graphs with an independence number of at most four.
The answers turn out to have very simple structures: all the nontrivial ones belong to a single family.
Our findings are based on a structural study of McConnell's flipping, which transforms circular-arc graphs into interval graphs with certain representation patterns.
§ INTRODUCTION
A graph is a circular-arc graph if its vertices can be assigned to arcs on a circle such that two vertices are adjacent if and only if their corresponding arcs intersect. Such a set of arcs is called a circular-arc model for this graph (Figure <ref>).
If we replace the circle with the real line and arcs with intervals, we end up with interval graphs.
All interval graphs are circular-arc graphs.
Both graph classes are by definition hereditary, i.e., closed under taking induced subgraphs.
While both classes have been intensively studied, there is a huge gap between our understanding of them.
One fundamental combinatorial problem on a hereditary graph class is its characterization by forbidden induced subgraphs, i.e., minimal graphs that are not in the class.
For example, the forbidden induced subgraphs of interval graphs are holes (induced cycles of length at least four) and those in Figure <ref> <cit.>. The same problem on circular-arc graphs, however, has been open for sixty years <cit.>.
A graph is an interval graph if and only if it does not contain any hole or any graph in Figure <ref> as an induced subgraph.
It is already very complicated to characterize chordal circular-arc graphs by forbidden induced subgraphs.
Chordal graphs, graphs in which all induced cycles are triangles, are another superclass of interval graphs.
Thus, all interval graphs are chordal circular-arc graphs.
We rely on the reader to check that the long claw (Figure <ref>) and † graphs (Figure <ref>) on seven or more vertices are not circular-arc graphs.
Thus, they are chordal forbidden induced subgraphs of circular-arc graphs. Other forbidden induced subgraphs must contain a whipping top, a net (the † graph on six vertices), or a graph (Figure <ref>), which are all circular-arc graphs. However, except for adding an isolated vertex, there is no obvious way to augment them to derive forbidden induced subgraphs, not to mention enumerating forbidden induced subgraphs exhaustively.
For example, let us consider chordal forbidden induced subgraphs of circular-arc graphs on ten or fewer vertices.
There are 20 such graphs. Nine of them can be obtained in the aforementioned way,
while only four of the remaining 11 have been identified in literature <cit.>. See the appendix for the list.
Bonomo et al. <cit.> characterized chordal circular-arc graphs within claw-free graphs; only four forbidden induced subgraphs arise in that setting. Through generalizing Lekkerkerker and Boland's <cit.> structural characterization of interval graphs, Francis et al. <cit.> defined a forbidden structure of circular-arc graphs.
This observation enabled them to characterize chordal circular-arc graphs with independence number at most four.
As we will see, most chordal forbidden induced subgraphs of circular-arc graphs contain an induced claw, and their independence numbers can be arbitrarily large.
These unsuccessful attempts motivated Durán et al. <cit.> to list the forbidden induced subgraph characterization of circular-arc graphs within the class of chordal graphs as one of “the main open problems.”
Bang-Jensen and Hell <cit.> characterized proper circular-arc graphs (i.e., graphs admitting circular-arc models in which no arc properly contains another) that are chordal.
In a previous paper <cit.> of this series, we derived a full characterization of minimal split graphs that are not circular-arc graphs. Recall that a graph is a split graph if its vertex set can be partitioned into a clique and an independent set, and hence all split graphs are chordal.
The ⊗ graphs.
The main discovery of the present paper is the family of ⊗ graphs.
For k ≥ 2, the k-sun, denoted as S_k, is the graph obtained from a cycle of length 2 k by adding all edges among the even-numbered vertices to make them a clique.
The complement of S_k, denoted as S̄_k,
is a split graph with a unique split partition, and each vertex has precisely two non-neighbors in the other part.
See Figure <ref> for the smallest examples of S̄_k.
Note that S_4 and S̄_4 are isomorphic, while S̄_3 and S_3 are the net and the sun, respectively.
For k ≥ 1, we define the gadget D_k as a subgraph of S̄_k+1 obtained by removing one vertex of degree 2k - 1 (a solid node in Figure <ref>).
An example is illustrated in Figure <ref>, where the removed vertex was adjacent to w_1, …, w_k - 1.
The gadget D_k consists of 2 k + 1 vertices.
The vertex set {v_1, v_2, …, v_k} is a clique, hence called the clique vertices of this gadget.
The remaining vertices, {w_0, w_1, …, w_k}, form an independent set, and for i = 0, 1, …, k,
N(w_i) = {v_1, v_2, …, v_k}∖{v_i, v_i+1}.
The vertices w_0 and w_k are called the ends of this gadget; their degrees in the gadget are k - 1 (while the degrees of other vertices are k - 2 and 2 k - 2; note that k - 1 = 2 k - 2 only when k = 1).
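Since the gadget is defined purely combinatorially, it is easy to realize in code. The following Python sketch (names such as gadget are ours, chosen for illustration) builds the adjacency sets of D_k straight from the definition.

```python
def gadget(k):
    """Adjacency sets of the gadget D_k: clique vertices ('v', 1..k) and
    independent vertices ('w', 0..k), where w_i misses exactly v_i and v_{i+1}."""
    adj = {('v', i): set() for i in range(1, k + 1)}
    adj.update({('w', i): set() for i in range(k + 1)})
    for i in range(1, k + 1):              # the clique on v_1, ..., v_k
        for j in range(i + 1, k + 1):
            adj[('v', i)].add(('v', j))
            adj[('v', j)].add(('v', i))
    for i in range(k + 1):                 # N(w_i) = {v_1,...,v_k} minus {v_i, v_{i+1}}
        for j in range(1, k + 1):
            if j not in (i, i + 1):
                adj[('w', i)].add(('v', j))
                adj[('v', j)].add(('w', i))
    return adj

g = gadget(3)
assert len(g[('w', 0)]) == len(g[('w', 3)]) == 2   # the ends have degree k - 1
```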
We are now ready to describe the main family of chordal forbidden induced subgraphs of circular-arc graphs. Let p be a positive integer and ⟨ a_0, a_1, …, a_2 p - 1⟩ a sequence of 2 p positive integers. For i = 0, …, p-1, we introduce a gadget D_a_2 i and a path P_a_2i + 1.
For each gadget and each path, we arbitrarily assign the ends as the left end and the right end. (Note that the two ends are identical for a trivial path.)
We put the gadgets and paths in order circularly, and connect the right end of one gadget/path with the left end of next path/gadget.
We also introduce a special vertex c, and this concludes the vertex set.
For each gadget, we add edges between its clique vertices and all other vertices not in this gadget.
The resulting graph is denoted as ⊗(a_0, a_1, …, a_2 p - 1).
See Figure <ref> for an illustration and below are two simple examples.
* Graph ⊗(1, a) with a ≥ 2 is the † graph of order a + 4 (Figure <ref>).
Here c is the degree-one vertex at the top, the gadget D_1 comprises the other two degree-one vertices and the vertex of degree a+1, while the path comprises the remaining vertices (at the bottom).
* Graph ⊗(2, a) is the graph of order a + 5 (Figure <ref>) augmented with an isolated vertex. Here
c is the degree-two vertex at the top, the gadget D_2 comprises the other two degree-two vertices and the two vertices of degree a+3, while the path comprises the remaining vertices (at the bottom).
Illustrations of ⊗(a, b) and ⊗(1, a, 1, b) for small a and b can be found in the appendix.
Note that the order of the graph ⊗(a_0, a_1, …, a_2 p - 1) is
1 + ∑_{i=0}^{p-1} (2 a_{2i} + 1 + a_{2i+1}) = p + 1 + ∑_{i=0}^{2p-1} a_i + ∑_{i=0}^{p-1} a_{2i}.
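The circular assembly can be made equally concrete. The sketch below (reusing the hypothetical gadget function from the previous snippet) builds ⊗(a_0, …, a_{2p-1}) and checks the order formula; it is only an illustration of the construction, not code from the paper.

```python
def otimes(seq):
    """Build ⊗(a_0, ..., a_{2p-1}): gadgets D_{a_{2i}} alternate with paths
    P_{a_{2i+1}} around a cycle, plus the special vertex 'c'."""
    adj = {'c': set()}
    ends, cliques = [], []                    # (left, right) ends of each piece
    for t, a in enumerate(seq):
        if t % 2 == 0:                        # a gadget, relabelled by position t
            for x, nbrs in gadget(a).items():
                adj[(t, x)] = {(t, y) for y in nbrs}
            ends.append(((t, ('w', 0)), (t, ('w', a))))
            cliques.append((t, [(t, ('v', i)) for i in range(1, a + 1)]))
        else:                                 # a (possibly trivial) path on a vertices
            nodes = [(t, i) for i in range(a)]
            for u in nodes:
                adj[u] = set()
            for u, v in zip(nodes, nodes[1:]):
                adj[u].add(v); adj[v].add(u)
            ends.append((nodes[0], nodes[-1]))
    for t in range(len(ends)):                # right end meets the next left end
        u, v = ends[t][1], ends[(t + 1) % len(ends)][0]
        adj[u].add(v); adj[v].add(u)
    for t, K in cliques:                      # clique vertices are complete to
        for v in K:                           # everything outside their gadget
            for u in adj:
                if u != v and not (isinstance(u, tuple) and u[0] == t):
                    adj[u].add(v); adj[v].add(u)
    return adj

seq = (1, 2, 3, 4)                            # order = p + 1 + sum a_i + sum a_{2i}
assert len(otimes(seq)) == len(seq) // 2 + 1 + sum(seq) + sum(seq[::2])
```

For instance, otimes((1, 2)) yields a graph on six vertices, matching the net description of ⊗(1, 2) above.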
Graphs ⊗(1, 1) and ⊗(1, 2) are circular-arc graphs. They are the only circular-arc graphs that arise from this construction.
Let p be a positive integer, and let G be the graph ⊗(a_0, a_1, …, a_2 p - 1), where (a_0, a_1, …, a_2 p - 1) is a sequence of positive integers different from (1, 1) and (1, 2).
* G is a chordal graph;
* G is not a circular-arc graph; and
* for any vertex x∈ V(G), the graph G - x is a Helly circular-arc graph.
All ⊗(a_0, a_1, …, a_2 p - 1) graphs that are not ⊗(1, 1) or ⊗(1, 2) are called ⊗ graphs.
It is worth noting that the ⊗ graphs defined by two different sequences might be the same. For example, the following sequences define the same ⊗ graph:
(1, 2, 3, 4), (3, 4, 1, 2), (3, 2, 1, 4), and (1, 4, 3, 2).
Our results.
The algorithm of McConnell <cit.> recognizes circular-arc graphs by transforming them into interval graphs.
Let G be a circular-arc graph and 𝒜 a fixed arc model of G.
If we flip all arcs—replace arc [a, b] with arc [b, a]—containing a certain point in 𝒜, we end with an interval model ℐ.
In Figure <ref>b, for example, if we flip arcs 2, 4, and 6, all containing the clockwise endpoint of the arc 4, we end with a † graph of six vertices.
A crucial observation of McConnell is that the resulting interval graph is decided by the set of vertices whose arcs are flipped and not by the original circular-arc models. For a suitable clique K,
all circular-arc models of G with certain properties lead to the same interval graph <cit.>; see also <cit.>.
It thus makes sense to denote it as G^K.
He presented an algorithm to find a suitable set K and constructed the graph directly from G, without a circular-arc model.
As we have seen, the construction is very simple when G is chordal <cit.>.
In particular, the closed neighborhood of every simplicial vertex can be used as the clique K <cit.>.
If G is a circular-arc graph, then for any simplicial vertex s, the graph G^N[s] is an interval graph. This explains why the graph ⊗(a_0, a_1, …, a_2 p - 1) is not a circular-arc graph when ℓ = ∑^2 p - 1_i=0 a_i≥ 4.
The graph (⊗(a_0, a_1, …, a_2 p - 1))^N[c] contains a hole of length ℓ; see appendix for more details.
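The flipping operation itself is elementary and can be sketched directly; in the toy representation below (our own), an arc is a pair of clockwise positions in [0, 1). After the flip, no arc covers the chosen point (except possibly at its endpoints), so the circle can be cut there and unrolled into an interval model.

```python
def flip_at(arcs, p):
    """Replace every arc [a, b] that contains the point p by its
    complement [b, a]; other arcs are left untouched."""
    def contains(a, b, q):
        return a <= q <= b if a <= b else (q >= a or q <= b)
    return [(b, a) if contains(a, b, p) else (a, b) for a, b in arcs]
```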
However, G^K being an interval graph does not imply that G is a circular-arc graph.
There are restrictions on the intersection patterns of the intervals.
We <cit.> have fully characterized this correlation when G is C_4-free: it is a circular-arc graph if and only if G^K is an interval graph containing none of certain configurations (the list is reproduced in Section <ref>).
We use forbidden configurations to refer to minimal non-interval graphs and the configurations.
The task is then to “reverse” the construction to get all possible graphs G such that G^K contains a forbidden configuration.
One complexity is that each forbidden configuration corresponds to a large number of graphs, and the argument has to be based on case analyses.
To make it worse, there are seven families of infinite forbidden configurations: holes, † graphs, graphs, and four infinite families of interval forbidden configurations, which can be viewed as variations of † graphs and graphs.
We show that if G^N[s] contains a hole, then G must contain an ⊗ graph with ℓ≥ 4 or a graph S_k^+, k ≥ 4, i.e., the graph obtained from S̄_k by adding a vertex and making it adjacent to all the vertices of degree 2k - 3 (solid nodes in Figure <ref>).[In a sense, the graphs S_k^+ can be viewed as ⊗(k). We may twist the definition of ⊗ graphs to include them. We use the current one for the purpose of simplicity.]
The analyses of other small forbidden configurations lead to eight small forbidden induced subgraphs.
Interestingly, none of the other infinite families of forbidden configurations leads to any new forbidden induced subgraph.
Indeed, for each of them, we can find an induced subgraph G' of G and a simplicial vertex s' of G' such that (G')^N[s'] contains a hole or a small forbidden configuration.
The main trick is to decrease the number of cases to be considered, especially finding the subgraph G' and vertex s' for infinite forbidden configurations.
The main result of the present paper is as follows.
For a graph G, we use G^⋆ to denote the graph obtained from G by adding an isolated vertex.
A chordal graph is a circular-arc graph if and only if it does not contain an induced copy of long claw, whipping top^⋆, a graph in Figure <ref>, S_k^+, k ≥ 3, or an ⊗ graph.
We also characterize chordal graphs that are not Helly circular-arc graphs.
We use the Venn diagram in Figure <ref> to illustrate the relationship of these classes.
Joeris et al. <cit.> showed that region 1 comprises all the complements of k-suns, and we <cit.> have previously settled regions 2 and 3.
A split graph is a circular-arc graph if and only if it does not contain an induced copy of any graph in Figure <ref>–<ref>, S_k^+, k ≥ 3, or ⊗(a, b) with 1 ≤ b ≤ 2 ≤ a.
Theorem <ref> summarizes all graphs in regions 2–5.
Note that all ⊗ graphs are in Regions 2 and 4 by Proposition <ref>, and a chordal forbidden induced subgraph of circular-arc graphs is in region 3 or 5 if and only if it contains an induced copy of some graph in region 1.
Interestingly, region 5 comprises a single graph.
Region 4 comprises long claw, whipping top^⋆, the graph in Figure <ref>, ⊗(a, b), b ≥ 3, and ⊗(a_0, a_1, …, a_2 p - 1) with p ≥ 2.
Region 5 comprises only the graph in Figure <ref>.
Regions 1, 2, and 4 together are the minimal chordal graphs that are not Helly circular-arc graphs.
A chordal graph is a Helly circular-arc graph if and only if it does not contain an induced copy of long claw, whipping top^⋆, a graph in Figures <ref>, <ref>, <ref>, S̄_k, k ≥ 3, or an ⊗ graph.
Let us put our work into context. By imposing restrictions on the intersection pattern between arcs, more than a dozen subclasses of circular-arc graphs have been defined and studied in the literature <cit.>.
Several of them have been characterized by forbidden induced subgraphs.
They are mostly in the lower levels of the class hierarchy.
The class of chordal circular-arc graphs is the second subclass that contains all interval graphs, and the only previous one was normal Helly circular-arc graphs <cit.>. The characterizations of both subclasses are achieved through connecting the forbidden induced subgraphs and minimal non-interval graphs.
§ PRELIMINARIES
All graphs discussed in this paper are finite and simple. The vertex set and edge set of a graph G are denoted by, respectively, V(G) and E(G).
For a subset U⊆ V(G), we denote by G[U] the subgraph of G induced by U, and by G - U the subgraph G[V(G)∖ U], which is shortened to G - v when U = {v}.
The neighborhood of a vertex v, denoted by N_G(v), comprises vertices adjacent to v, i.e., N_G(v) = { u | uv ∈ E(G) }, and the closed neighborhood of v is N_G[v] = N_G(v) ∪{ v }.
We may drop the subscript if the graph is clear from the context.
The complement graph G̅ of a graph G is defined on the same vertex set V(G), where a pair of distinct vertices u and v is adjacent in G̅ if and only if u v ∉E(G).
A clique is a set of pairwise adjacent vertices, and an independent set is a set of vertices that are pairwise nonadjacent.
A graph is a split graph if its vertex set can be partitioned into a clique and an independent set.
We say that a vertex v is simplicial if N[v] is a clique; such a clique is necessarily maximal.
A hole is an induced cycle of length at least four.
Let G be a chordal graph.
For each simplicial vertex s of G, we can use the clique N[s] to define the auxiliary graph G^N[s].
We use G^s as a shorthand for G^N[s].
The vertex set of G^s is V(G), and the edge set is defined as follows.[The definition of G^s is different from <cit.>, where two vertices u, v∈ N_G[s] might be adjacent in G^s because u v is an edge of a C_4 in G. They are equivalent on C_4-free graphs.]
* The edges among vertices in V(G)∖ N_G[s] are the same as in G.
* A pair of vertices u, v∈ N_G[s] are adjacent in G^s if there exists a vertex adjacent to neither of them, i.e., N_G(u)∪ N_G(v) ≠ V(G).
* A pair of vertices u∈ N_G[s] and v∈ V(G)∖ N_G[s] are adjacent in G^s if N_G[v]⊈N_G[u].
Two quick remarks on the conditions are in order.
First, note that N(u)∪ N(v) = V(G) if and only if N[u]∪ N[v] = V(G) and uv∈ E(G).
Second, N_G[v]⊈N_G[u] if and only if either they are not adjacent, or there exists a vertex adjacent to v but not u in G.
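The three rules are entirely mechanical, and the following sketch (our own naming; adjacency sets stand in for G) computes G^s accordingly. It follows the definition given here, not the variant in the footnote.

```python
from itertools import combinations

def auxiliary_graph(adj, s):
    """The auxiliary graph G^s for a chordal graph G and a simplicial vertex s;
    `adj` maps every vertex of G to the set of its neighbors in G."""
    V = set(adj)
    Ns = adj[s] | {s}                      # the clique N_G[s]
    aux = {v: set() for v in V}
    for u, v in combinations(V, 2):
        if u not in Ns and v not in Ns:    # rule 1: edges outside N_G[s] are kept
            adjacent = v in adj[u]
        elif u in Ns and v in Ns:          # rule 2: some vertex sees neither end
            adjacent = adj[u] | adj[v] != V
        else:                              # rule 3: N_G[v] not inside N_G[u]
            a, b = (u, v) if u in Ns else (v, u)
            adjacent = not (adj[b] | {b}) <= (adj[a] | {a})
        if adjacent:
            aux[u].add(v)
            aux[v].add(u)
    return aux
```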
Having two or more graphs on the same vertex set demands caution when we talk about adjacencies. The only exception is the adjacency between a pair of vertices in V(G)∖ N_G[s]. Since such a pair has the same adjacency in G and G^s, we omit specifying the graph to avoid unnecessary clumsiness.
We now relate the main result of <cit.>.
Each graph in Figure <ref> carries annotations. Some vertices are earmarked from N_G(s) or from V(G)∖ N_G[s], and some edges between N_G(s) and V(G)∖ N_G[s] are annotated.
Note a normal graph can be viewed as an annotated graph with no annotations.
The graph G^s contains an annotated copy of an annotated graph F if there exists an isomorphism φ between F and an induced subgraph of G^s with the following properties.
* If v∈ N_G(s) then φ(v)∈ N_G(s).
* If v∈ V(G)∖ N_G[s] then φ(v)∈ V(G)∖ N_G[s].
* If an edge v u is annotated, then φ(v) φ(u) is an edge of G.
By a forbidden configuration we mean a minimal non-interval graph or a graph in Figure <ref>. Note that the vertex s is not involved in any forbidden configuration of G^s. Since it is universal in G^s, it cannot be in any minimal non-interval subgraph.
Most forbidden configurations in Figure <ref> do have universal vertices, but each universal vertex in them is the end of an annotated edge, which cannot happen for s.
The following are equivalent on a chordal graph G.
* The graph G is a circular-arc graph.
* For every simplicial vertex s, the graph G^s does not contain any annotated copy of forbidden configurations.
* There exists a simplicial vertex s such that G^s does not contain any annotated copy of forbidden configurations.
§ THE PROOF
Throughout this section G is a chordal graph with no universal vertices and
we fix an arbitrary simplicial vertex s of G.
We rely on the reader to check the small graphs listed in Theorem <ref>, namely, long claw, whipping top^⋆, and those in Figure <ref>,
are minimal forbidden induced subgraphs.
It is also easy to check graphs S_k^+, k ≥ 3, thanks to the strong symmetries <cit.>.[A quick argument is as follows.
In S_k^+, every maximal clique is the closed neighborhood of some simplicial vertex.
Thus, if S_k^+ is a circular-arc graph, then it has to be a Helly circular-arc graph, but it is already known that S̄_k is not <cit.>. The minimality follows from a similar observation.]
It is not that straightforward to show that all the ⊗ graphs are minimal forbidden induced subgraphs of circular-arc graphs.
We defer the proof of Proposition <ref> to the appendix.
The rest of this section is focused on the sufficiency of Theorem <ref>, i.e., the completeness of this list.
By Theorem <ref>, if G is not a circular-arc graph, then G^s contains an annotated copy of a forbidden configuration.
By translating forbidden configurations in G^s back to graphs in G, we show that G must contain an induced copy of some graph listed in Theorem <ref>.
The trivial case is on forbidden configurations disjoint from N_G(s), which are necessarily non-interval graphs.
If G^s - N_G[s] is not an interval graph, then G contains an induced long claw, net^⋆, whipping top^⋆, ⊗(1, a), a ≥ 3, or ⊗(2, a), a ≥ 1.
By construction, G - N_G[s] and G^s - N_G[s] are isomorphic. Thus, G contains a minimal non-interval graph; let U be its vertex set.
We are done if G[U] is a long claw or a † graph on seven or more vertices; i.e., ⊗(1, a), a ≥ 3. Otherwise, since G is chordal, G[U] is a net, whipping top, or a graph by Theorem <ref>. Thus, G[U∪{s}] is isomorphic to net^⋆, whipping top^⋆, or ⊗(2, a), respectively.
For other forbidden configurations, we need vertices not in them.
For example, the subgraph of G induced by the four vertices in Figure <ref> is also a claw, hence a circular-arc graph. As a matter of fact, every graph on four vertices is a circular-arc graph. There are other vertices that help the formation of the forbidden configurations, though they are invisible in the forbidden configurations.
An edge of G^s is collateral if it is an edge of G and at least one of its ends is in N_G[s].
Note that no edge among V(G)∖ N_G[s] is collateral.
If one end v of an edge is in N_G[s], then the edge is not collateral if and only if the other end is adjacent to neither s nor v in G.
Let v_1 v_2 be a collateral edge.
By construction there must be a vertex adjacent to neither of them when v_1, v_2∈ N_G[s], or a vertex in N_G[v_2]∖ N_G[v_1] when only v_1 is in N_G[s].
In other words, there is always a vertex w∈ V(G)∖ N_G[s] such that w v_i∈ E(G), i = 1, 2, if and only if v_i∉N_G[s].
The vertex w can be viewed as a “witness” of this edge.
We can generalize the concept of witnesses to any clique K of G^s.
Let K be a clique of G^s.
A vertex w∈ V(G)∖ N_G[s] is a witness of K (in G^s) if N_G[w]∩ K and N_G[s]∩ K partition K.
When K comprises the two ends of an edge of G^s, we say that w is a witness of the edge (in G^s).
The clique or edge is then witnessed by w (in G^s).
When the graph G^s is clear from context, we usually omit its reference.
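In code the witness condition is a short set computation; the predicate below (illustrative names again) checks it literally against the adjacency sets of G.

```python
def is_witness(adj, s, K, w):
    """Does w, a vertex outside N_G[s], witness the clique K of G^s?
    Requires N_G[w] ∩ K and N_G[s] ∩ K to partition K."""
    Ns = adj[s] | {s}
    if w in Ns:
        return False
    part_w, part_s = (adj[w] | {w}) & K, Ns & K
    return part_w.isdisjoint(part_s) and part_w | part_s == K
```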
Let w be a witness of a clique K.
By definition, no edge between w and K is collateral.
We remark that the witness of a clique can be from this clique.
Indeed, if w∉K, then w is a witness of K∪{w}, which is a clique of G^s by construction.
Another remark is that every collateral edge needs a witness, but a witnessed edge may or may not be collateral.
Table <ref> lists some simple examples (note that the meaning of lines is different from that in the forbidden configurations in Figure <ref>).
As said, when dealing with a forbidden configuration F, we need to take witnesses of the collateral edges within F into consideration.
A witness of one particular collateral edge might be adjacent to additional vertices in F, and witnesses of distinct collateral edges could either be adjacent or not.
These relationships pose significant challenges in our analysis.
To mitigate these complexities, we will introduce a series of observations.
The first of these is quite intuitive: note that two simplicial vertices are adjacent if and only if they have the same closed neighborhood.
Let K be a clique of G^s such that K⊆ N_G(s).
* If K is witnessed, then it has a witness that is simplicial in G.
* If K is not witnessed, then G^s contains an induced sun.
(i) We fix a clique tree of G, and root it at N_G[s]. Let w be a witness of K. We take an arbitrary maximal clique K' of G that contains w. No maximal clique in the subtree rooted at K' contains any vertex from K. We can find a leaf node and take a vertex x that appears only in this node. The vertex x is simplicial in G.
(ii) Let K' be a minimal subset of K that is unwitnessed.
Note that |K'| > 2: by assumption, G does not contain a universal vertex; a clique of two vertices consists of the endpoints of an edge and hence has a witness by construction.
We take three vertices v_1, v_2, and v_3 from K'.
By the selection of K', for each i = 1, 2, 3, the clique K'∖{v_i} has a witness x_i.
We may assume x_i, i = 1, 2, 3, is simplicial in G, and hence they are pairwise nonadjacent.
Since x_i is not a witness of K', it cannot be adjacent to v_i in G^s (we are using the fact that x_i is simplicial in G).
Then G^s[{v_1, v_2, v_3, x_1, x_2, x_3}] is a sun.
Each collateral edge has one or more witnesses, and we designate one as the (designated) witness of this edge. For a collateral edge between two vertices in N_G(s), we always designate a simplicial vertex of G (Lemma <ref>).
As shown in Figure <ref>, neither assertion of Lemma <ref> holds for cliques of G^s intersecting V(G)∖ N_G[s].
The next three propositions are on witnesses of edges and cliques involving vertices in V(G)∖ N_G[s]. The first is about vertices in N_G[s] and their non-neighbors in G.
Let v be a vertex in N_G[s]. For each vertex x∈ V(G)∖ N_G[v], it holds N_G^s[x]⊆ N_G^s[v].
Note that x∉N_G[s] because N_G[s]⊆ N_G[v] by the definition of simplicial vertices.
Suppose for contradiction that there exists a vertex y∈ N_G^s[x]∖ N_G^s[v].
Note that y∈ N_G(s)∩ N_G(x); otherwise, v y must be an edge of G^s, witnessed by x.
By construction, there must be a witness w of the edge x y.
Note that w∈ N_G(v), as otherwise it witnesses the edge v y.
But then x w v y is a hole of G, a contradiction.
We consider next the subgraphs of G corresponding to connected induced subgraphs of G^s of order three.
It is trivial when none of the three vertices is from N_G(s).
Lemma <ref> has characterized the case when they are all from N_G(s) and form a clique in G^s.
The next statement is about a triangle of G^s with two vertices from N_G(s); see the second and third groups in Table <ref>.
Let {v_0, v_1, v_2} be a clique of G^s with v_1, v_2∈ N_G(s) and v_0∈ V(G)∖ N_G[s].
If the clique {v_0, v_1, v_2} is not witnessed, then it is a clique of G.
Since v_0 is not a witness of {v_0, v_1, v_2}, it is adjacent to at least one of v_1 and v_2 in G.
Assume without loss of generality that
v_0∈ N_G(v_1). By construction, there is a witness w_2 of v_0 v_1; hence w_2∈ N_G(v_0)∖ N_G[v_1]. Since w_2 does not witness {v_0, v_1, v_2}, it is adjacent to v_2 in G.
Then v_0 and v_2 must be adjacent in G because it is chordal; otherwise, w_2 v_0 v_1 v_2 is a hole.
The last observation is on two collateral edges with different witnesses sharing a vertex.
Let v_1 v_0 v_2 be a path of G^s such that both edges v_0 v_1 and v_0 v_2 are collateral and |{v_1, v_2}∩ N_G(s)| ≠ 1.
* If v_0 v_1 and v_0 v_2 have a common witness, then v_1 v_2∈ E(G); and
* otherwise, there is no edge between a witness of v_0 v_1 and a witness of v_0 v_2.
(i) Let w be a common witness of v_0 v_1 and v_0 v_2.
If v_1, v_2∈ N_G(s), then they are adjacent in G because N_G(s) is a clique. Now suppose v_1, v_2∉N_G(s); then v_0∈ N_G(s).
Since v_1 v_0 v_2 w is not a hole of G, vertices v_1 and v_2 must be adjacent.
(ii) For i = 1, 2, let w_i be a witness of v_0 v_3-i.
By assumption, w_i is not a witness of v_0 v_i.
In particular, w_1 ≠ w_2.
Suppose for contradiction that w_1 and w_2 are adjacent.
If v_1, v_2∈ N_G(s), then v_1 v_2 w_2 w_1 is a hole of G.
In the rest, v_1, v_2∉N_G(s).
Since w_i, i = 1, 2, does not witness the edge v_0 v_i, it is neither identical nor adjacent to v_i.
Depending on whether v_1 and v_2 are adjacent, either v_1 v_2 w_1 w_2 or v_1 v_0 v_2 w_1 w_2 is a hole of G, contradicting that G is chordal.
In the analysis of a forbidden configuration of G^s,
a simple but important trick is to focus on a “minimal” set of vertices that are required for the formation of the forbidden configuration under concern. As long as we keep the vertex set of the forbidden configuration, a witness for each collateral edge in it, and the simplicial vertex s, we still have the same forbidden configuration. Note that removing a vertex may remove collateral edges, of which the vertex is the only witness, but it will never introduce new edges to the auxiliary graph.
The operation of keeping only a minimal set of vertices is especially handy if the resulting subgraph of G is a split graph, because we <cit.> have characterized all forbidden induced subgraphs that are split graphs.
Another natural idea is to assume that none of the previously discussed forbidden configurations is present.
Formally, we introduce the following assumptions.
Induction For any vertex set A⊆ V(G) and any simplicial vertex x of the subgraph G[A], the graph (G[A])^x does not contain an annotated copy of any forbidden configuration discussed previously.
Minimality For any vertex set A with s∈ A⊊ V(G), the graph (G[A])^s does not contain any annotated copy of the forbidden configuration under discussion.
These assumptions greatly simplify our discussions.
For example, they have the following implications.
Under the assumptions Induction and Minimality, the following hold when we discuss a forbidden configuration F.
* For every simplicial vertex x of G, the subgraph G - N[x] is an interval graph.
*
Every annotated copy of F in G^s contains all the vertices in N_G(s).
*
Let G^s[U] be an annotated copy of F, and x a simplicial vertex of G^s[U].
If x is not annotated, then there cannot be a witness w of N_G^s[x]∩ U such that x ≠ w and N_G^s[x]∩ U = N_G^s[w]∩ U.
P<ref> follows from Induction. If G - N[x] is not an interval graph, then G^x contains an induced non-interval graph disjoint from N_G[x], which has been discussed in Proposition <ref>.
P<ref> follows from Minimality.
Suppose for contradiction that there is a vertex set U such that N_G(s)⊈U and G^s[U] is an annotated copy of F.
Then the subgraph of (G - (N_G(s)∖ U))^s induced by U is still an annotated copy of the forbidden configuration F.
P<ref> follows from Minimality.
If such a vertex w exists, then the graph (G - x)^s still contains an annotated copy of F, with x replaced with w. Note that N_G^s[w]∩ U is a clique, and no edge in G^s[U] outside this clique is witnessed by x.
We start with holes and the sun.
If G^s is not chordal, then G contains an induced copy of S_ℓ^+ with ℓ≥ 4 or ⊗(a_0, a_1, …, a_2 p - 1) with ∑ a_i≥ 4.
Let U be the vertex set of a hole of G^s.
We number vertices in U such that the hole is v_0⋯ v_|U| - 1.
In this proof, indices of vertices in U are modulo |U|.
Note that for each vertex v_i∈ U∩ N_G(s), both edges v_i - 1 v_i and v_iv_i + 1 are collateral by Proposition <ref>.
For i = 0, …, |U| - 1,
if v_i v_i+1 is a collateral edge, let w_i be its witness.
Let W denote the set of witnesses.
By Minimality,
V(G) = U∪ W∪{s}.
First, we argue that the witness w_i of a collateral edge v_i v_i+1 does not have other neighbors on this hole in G^s.
That is,
N_G^s(w_i) ∩ U = {v_i, v_i+1}.
We may assume v_i∈ N_G(s), and the other is symmetric.
By Proposition <ref>, N_G^s(w_i) ∩ U⊆{v_i-1, v_i, v_i+1}.
If v_i-1∈ N_G^s(w_i), then replacing v_i with w_i in U leads to another hole, violating P<ref>.
Second, we argue that
W is an independent set (in both G and G^s).
Suppose for contradiction that there are distinct i, j∈{0, …, |U| - 1} such that w_i w_j∈ E(G).
If both v_i and v_i+1 are from N_G(s), then {v_i, v_i+1}⊆ N_G^s(w_j) by Proposition <ref>. Since {v_i, v_i+1} ≠ {v_j, v_j+1}, we have a contradiction to (<ref>).
It is similar when both v_j, v_j+1∈ N_G(s).
Thus, only one of {v_i, v_i+1} and only one of {v_j, v_j+1} are in N_G(s).
We may assume without loss of generality that v_j is from N_G(s) and v_j+1 is not.
Note that v_j ∈ N_G^s(w_i) by Proposition <ref> and v_i = v_j - 1 by (<ref>).
This contradicts Proposition <ref>(ii), applied to the path v_j - 1 v_j v_j + 1.
If U⊆ N_G(s), then U∪{s} is a clique, and G is a split graph by (<ref>) and (<ref>). Thus, G contains an induced S_|U|^+ <cit.>.
In the rest, U contains vertices from both N_G(s) (by P<ref>) and V(G)∖ N_G[s].
Note that the number of edges on the hole between N_G(s) and V(G)∖ N_G[s] is even.
Let it be 2 p for some positive integer p.
Removing all these 2 p edges from the hole leaves a sequence of (possibly trivial) paths.
For i = 0, 1, …, 2 p-1, let V_i denote the vertex set of the ith path.
Note that V_i is either a subset of N_G(s) or a subset of V(G)∖ N_G[s], and if V_i⊆ N_G(s), then V_{i+1 mod 2p}⊆ V(G)∖ N_G[s], and vice versa.
We may assume without loss of generality that V_0⊆ N_G(s). Hence,
N_G(s) = V_0∪ V_2∪⋯∪ V_2p - 2.
For i = 0, …, p - 1, all the edges on the hole incident to V_2 i are collateral (Proposition <ref>); let W_2i denote these |V_2i|+1 witnesses.
Note that W = W_0∪ W_2∪⋯∪ W_2 p - 2.
Two of the witnesses are adjacent to V_{2i-1 mod 2p} and V_2i+1, and let them be w_2i^1 and w_2i^2, respectively.
The subgraph G[V_2i∪ W_2i] is the gadget D_|V_2i|, the ends of which are w_2i^1 and w_2i^2.
On the other hand, G[V_2i+1] is the same simple path as G^s[V_2i+1].
The vertices in V_2 i are complete to vertices not in this gadget, while the two ends of G[V_2i+1] are connected to w_2i^2 and w_{2i+2 mod 2p}^1, respectively.
Thus, G is isomorphic to ⊗(|V_0|, |V_1|, …, |V_2p - 1|), with c = s.
If G^s contains an induced sun, then G contains an induced copy of a graph specified in Theorem <ref>.
Let U denote the vertex set of the sun, and we number its vertices as follows.
The degree-four vertices are v_0, v_1, and v_2, and for i = 0, 1, 2, the only vertex in U∖ N_G^s[v_i] is x_i.
We claim that
{v_0, v_1, v_2}∩ N_G(s) ≠ ∅.
Suppose for contradiction that {v_0, v_1, v_2} is disjoint from N_G(s).
By P<ref>, {x_0, x_1, x_2}∩ N_G(s) ≠ ∅.
Assume without loss of generality that x_0∈ N_G(s).
We note that {v_1, v_2, x_0} is not witnessed; if w is a witness of {v_1, v_2, x_0}, then N_G^s(w) ∩ U = {v_1, v_2, x_0} by Proposition <ref>, violating P<ref>.
For i = 1, 2, let w_i be the witness of the edge x_0 v_i; note that N_G^s(w_i)∩ U = {x_0, v_i} by Proposition <ref>.
By Proposition <ref>(ii), w_1 and w_2 are not adjacent.
But then (G - {x_1, x_2})^s contains a sun, violating Minimality.
By Proposition <ref>, the set {v_0, v_1, v_2} is always a clique in G.
Thus, an edge among {v_0, v_1, v_2} is collateral if and only if at least one of its ends is in N_G[s].
For distinct i, j∈{0, 1, 2}, let w_ij be the witness of the edge v_i x_j if it is collateral, and let W denote the set of witnesses.
Note that U and W are disjoint by Proposition <ref>.
We claim that
V(G) = U∪ W∪{s}.
Let H= (G[U∪ W∪{s}])^s.
By construction, H[U] is a subgraph of G^s[U], in which all the edges incident to x_i, i∈{0, 1, 2}, are present.
By Induction, H[U] is chordal, which means that {v_0, v_1, v_2} is a clique.
Let i and j be two distinct indices in {0, 1, 2}, and let k = 3-i-j. We note that
w_i j v_j∈ E(G) if and only if v_j∈ N_G(s).
It follows from Proposition <ref> when neither v_j nor x_j is in N_G(s), or Proposition <ref> otherwise. As a result, if v_i v_j is collateral, then its witness must be from {x_k, w_i k, w_j k}.
Note that if x_k is a witness of v_i v_j, then neither v_i x_k nor v_j x_k is collateral, and hence x_k is the only witness of v_i v_j.
In the rest, we show that this is always the case.
Suppose that v_i v_j is a collateral edge and let w be its witness.
We claim that
w = x_k.
We start with showing that
if w ≠ x_k, then {w_i k, w_j k} = {w}, v_k∈ N_G(s) and w v_k∈ E(G^s).
Note that when x_k is not a witness of v_iv_j, at least one of v_i x_k and v_j x_k is collateral.
The other may or may not be collateral, and hence one of w_i k and w_j k is undefined.
We may assume without loss of generality that v_i∈ N_G(s).
By (<ref>) and (<ref>), w∈{w_i k, w_j k}.
In either case, w is a witness of {v_i, v_j, x_k}.
By Minimality, w is the witness of both edges v_i x_k and v_j x_k, if collateral.
By P<ref>,
{v_i, v_j, x_k}⊊ N_G^s(w).
By Proposition <ref>, N_G^s(w)∩ U⊆ U∖{x_i}.
If v_k∉N_G^s(w), then v_j ∈ N_G^s(w) and w v_j v_k x_j is a hole of G^s, violating Induction.
Thus, v_k∈ N_G^s(w) and v_k∈ N_G(s) by (<ref>).
This concludes (<ref>).
By (<ref>), w v_k is collateral and needs a witness w'.
Now that all the edges among v_0, v_1, and v_2 are collateral, each vertex in W is a witness of some of them.
Since w'∈ N_G^s(v_k)∩ N_G^s(w), it cannot be x_i (recall that v_i∈ N_G(s)).
If w' is a witness of v_j v_k, then w w' v_i v_k is a hole of G by (<ref>) (with indices switched).
If w' = x_j, then (G - v_i)^s contains an induced sun, with v_i replaced by w, violating Minimality.
If none of above is true, w' has to be a witness of v_i v_k and is different from x_j.
Then w w' v_j v_k is a hole of G by (<ref>) (note that v_j∈ N_G(s) and it is symmetric to the previous one).
Thus, (<ref>) must be true, and x_k is the only witness of v_i v_j.
We will see that G is always a split graph.
If two or more in {v_0, v_1, v_2} are from N_G(s), then {x_0, x_1, x_2}⊆ V(G)∖ N_G[s], and V(G) = U∪{s}. Thus, G is a split graph, with the clique {v_0, v_1, v_2}.
Hence, assume without loss of generality that v_0∈ N_G(s), while v_1 and v_2 are not.
If x_0∉N_G(s), then V(G) = U∪{s}, and G is again a split graph, with the clique {v_0, v_1, v_2}.
In the rest, x_0∈ N_G(s). By Proposition <ref>, both v_1 and v_2 are adjacent to x_0 in G.
By P<ref> and Proposition <ref>, the clique {v_1, v_2, x_0} cannot be witnessed.
Note that w_1 0 and w_2 0 are not adjacent by Proposition <ref>.
Hence, V(G) = U∪{s, w_1 0, w_2 0}, and G is again a split graph, with the clique {v_0, v_1, v_2, x_0}.
After the sun, the assumptions enable us to impose a further constraint on simplicial vertices of forbidden configurations.
As one can easily observe from Figure <ref>,
a simplicial vertex in a minimal non-interval graph has degree at most two.
In each forbidden configuration in Figure <ref>, a simplicial vertex has at most three neighbors, and all simplicial vertices with three neighbors have been annotated to be from V(G)∖ N_G[s].
Please be reminded that a simplicial vertex of a forbidden configuration is not necessarily simplicial in G^s or G.
Under the assumptions Induction and Minimality, the following hold when we discuss a forbidden configuration F.
*
There is an annotated copy of F in G^s in which all simplicial vertices are from V(G)∖ N_G(s).
Let U be the vertex set of an annotated copy of F. We have nothing to do if no simplicial vertex of G^s[U] is from N_G(s).
Suppose that a vertex v∈ N_G[s] is simplicial in G^s[U].
A quick check of the forbidden configurations convinces us that |N_G^s(v)∩ U|≤ 2 and no vertex in N_G^s(v)∩ U is simplicial in G^s[U]. Thus, all the edges in G^s[U] incident to v are collateral.
Let K = N_G^s[v]∩ U.
We note that
K is not witnessed.
Otherwise, there is a witness w of K (by Lemma <ref>), and N_G^s(w)∩ U = K by Proposition <ref>.
But then replacing v with w leads to an annotated copy of F with one fewer vertex from N_G(s), violating P<ref>.
By (<ref>), N_G^s(v)∩ U consists of two vertices, and at least one of them is not in N_G[s].
Let N_G^s(v)∩ U = {x_1, x_2}. A quick check of the forbidden configurations convinces us that there always exists a vertex x_3 that is adjacent to x_1 and x_2 but not v in G^s.
For i = 1, 2, we can find a witness w_i of the edge v x_i. We argue that
G^s[{v, x_1, x_2, x_3, w_1, w_2}] is a sun,
violating Induction.
By Proposition <ref>, N_G^s(w_i)∩ U ⊆{v, x_1, x_2} for i = 1, 2.
They cannot be equal; otherwise, we can replace v with w_i, violating P<ref> (note that v is in N_G(s) and cannot witness any edge).
If w_1 and w_2 are adjacent, then x_1 x_2 w_2 w_1 is a hole in G^s. This concludes (<ref>) and the proof.
We now deal with the remaining forbidden configurations one by one.
Thanks to P<ref>, we assume that no simplicial vertex for the forbidden configurations is from N_G[s].
We may separate one forbidden configuration into different incarnations.
In particular, we deal with the smallest ones of the infinite families (†, , and Figures <ref>–<ref>) separately.
For example,
the smallest of Figure <ref> are
Figures <ref> and <ref>.
Moreover, Figure <ref> is only a special incarnation of Figure <ref>, and the general one will be Figure <ref>.
Sometimes we group multiple forbidden configurations together, then Minimality applies to all of them.
Throughout we use U to denote the vertex set of an annotated copy of the forbidden configuration under discussion.
For i = 1, 2, 3, let w_i be the witness of the edge v u_i.
By Proposition <ref>, w_1, w_2, and w_3 are distinct, and there is no edge among them.
By Minimality, V(G) = U∪{s, w_1, w_2, w_3}.
The graph G-s is a long claw, where the vertex v has degree three.
Since G^s does not contain an annotated copy of Figure <ref>,
the edge u_1 v_2 is not collateral, and at most one of u_2 v_1 and u_3v_1 is collateral.
If neither u_2 v_1 nor u_3v_1 is collateral, then for i = 2, 3, we can find a witness w_i of the clique {u_i, v_1, v_2} (Proposition <ref>).
By Proposition <ref>, w_2 and w_3 are not adjacent.
Also note that neither of them is adjacent to u_1; otherwise, u_1 v_1 v_2 u_i w_i is a hole of G.
Then G[U∪{w_2, w_3}] is a long claw, centered at v_2.
In the rest, assume without loss of generality that u_2 v_1 is collateral and u_3 v_1 is not.
For i = 1, 3, we can find a witness w_i of the clique {u_i, v_1, v_2} by Proposition <ref>.
Since G is chordal, w_1 and w_3 are distinct and nonadjacent.
If the clique {u_2, v_1, v_2} has a witness w_2, then
w_1 and w_2 are not adjacent by Proposition <ref> (considering the path u_1 v_1 u_2).
In this case, G[U∪{w_1, w_2}] is isomorphic to Figure <ref>, where v_1 v_2 u_2 is the triangle, and u_1 has degree two.
Now that the clique {u_2, v_1, v_2} is not witnessed, for i = 1, 2, let w'_i be the witness of the edge v_i u_2.
Since w'_i is not a witness of {u_2, v_1, v_2}, it must be adjacent to v_3 - i in G.
By Proposition <ref>, w'_1 and w'_2 are not adjacent.
By Proposition <ref>, neither w_1 and w'_1 nor w_3 and w'_2 can be adjacent.
If w_3 and w'_1 are not adjacent, then G[{s, v_1, v_2, u_2, w'_1, w'_2, w_3}] is isomorphic to sun^⋆, where w_3 is the isolated vertex.
It is similar if w_1 and w'_2 are not adjacent.
Otherwise, G[{s, v_1, u_2, w_1, w_3, w'_1, w'_2}] is isomorphic to Figure <ref>, where v_1 u_2 w'_2 is the triangle and w'_1 has degree two.
Since G^s does not contain an annotated copy of Figure <ref>, for i,j∈{1, 2, 3}, the edge u_i v_j is collateral if and only if i = j.
Thus, G[U∪{s}] is an S_3^+.
Figures <ref> and <ref> are the smallest incarnations of Figure <ref>, whose general case will be discussed as Figure <ref>.
Figure <ref> is the first example of Figure <ref>, and the other is Figure <ref>.
We discuss the forbidden configurations in Figures <ref> and <ref> as a group, because they reduce to each other.
We separate Figure <ref> into two cases, depending on whether one of edges v x_1 and v x_2 is collateral.
We start with Figure <ref> in which neither v x_1 nor v x_2 is collateral. Note that for i = 1, 2, the vertex x_i is a witness of the edge v u_i.
We argue that the edge v x_0 is not collateral.
Suppose for contradiction that x_0 v∈ E(G), and let w be the witness of v x_0.
By Proposition <ref> (considering the path x_0v u_1), w is not adjacent to u_1 (note that w is a witness of v u_1 if w and u_1 are adjacent).
By Proposition <ref>(ii), x_1 is not adjacent to w.
By symmetry, u_2, x_2∉N_G(w).
Thus, N_G^s(w)∩ U = {v, x_0}, violating P<ref>.
By Minimality, V(G) = U∪{s}.
Then G is isomorphic to net^⋆, where x_0 is isolated.
We then consider Figure <ref>.
Note that
x_0 v is not collateral and x_1 v is collateral.
First, if x_0 v is collateral, then G^s - x_1 is an annotated copy of Figure <ref>.
Second, if x_1∉N_G(v), then it is a witness of both edges u_1 v and u_2 v, violating Proposition <ref>(i).
If the clique {v, u_1, x_1} is not witnessed, we can find a witness w_1 of the edge v u_1 and a witness w_2 of the edge v x_1.
Note that neither w_1 x_1 nor w_2 u_1 is an edge of G, and w_1 and w_2 are not adjacent by Proposition <ref>(ii).
Then G^s[{x_0, v, w_1, u_1, x_1, w_2}] is an annotated copy of Figure <ref>; note that w_1, w_2∉N_G(v).
It is symmetric if {v, x_1, u_2} is not witnessed.
Now that {v, u_1, x_1} and {v, u_2, x_1} are both witnessed, let w_1 and w_2 be witnesses of them, respectively.
By Minimality, V(G) = U∪{s, w_1, w_2}.
By Proposition <ref> (considering the path u_1 v u_2), none of u_1 w_2, w_1 u_2, and w_1 w_2 can be an edge of G.
Thus, G is isomorphic to whipping top^⋆, where w_1 u_1 v u_2 w_2 is the path of length five, and the degrees of s, x_0, and x_1 are 1, 0, and 5, respectively.
Finally, we consider Figure <ref> in which at least one of v x_1 and v x_2 is collateral.
If v x_i, i = 1, 2, is collateral,
then G^s[U] - x_3 - i is an annotated copy of Figure <ref>.
First, note that neither v x_1 nor v x_4 is collateral.
If v x_1 or v x_4 is collateral, then the subgraph G^s[U] - x_3 or G^s[U] - x_2, respectively, is an annotated copy of Figure <ref>.
In the graph (G[U∪{s}])^s, both edges v x_1 and v x_4 and edges on the path x_1 x_2 u x_3 x_4 are present. Since (G[U∪{s}])^s is chordal, all the edges v u, v x_2, and v x_3 must be present.
By Minimality, V(G) = U∪{s}.
The witness of v u must be from N_G(u)∩ U∖ N_G[s] = {x_2, x_3}.
Assume without loss of generality that x_2 witnesses v u.
Then
{s, u}⊆ N_G(v) ⊆{s,u, x_3}.
Note that x_1 x_2 u x_3 x_4 is a path in G.
Then G is a long claw when x_3∉N_G(v), or isomorphic to Figure <ref> otherwise.
We show that this cannot happen: the existence of an annotated copy of Figure <ref> violates some assumption.
Since G^s does not contain an annotated copy of Figure <ref>,
the edge x_0 v is not collateral.
Suppose first that both {v, u_1, x_1} and {v, u_2, x_1} are witnessed. For i = 1, 2, let w_i be a witness of {v, u_i, x_1}. By Minimality, V(G) = U∪{s, w_1, w_2}.
Then N_G(x_0) = {x_1}, and
N_G[x_1] ∖{x_0}⊆ N_G[v] = {s, v, x_1, u_1, u_2}.
In the graph G^x_0, the path w_1 u_1 v u_2 w_2 is induced.
Since neither w_1 nor w_2 is in N_G(x_1), the vertex x_1 is adjacent to u_1, u_2, w_1, and w_2 in G^x_0.
Thus, either v u_1 x_1 u_2 is a hole in G^x_0, or G^x_0 - x_0 is an annotated copy of Figure <ref>, both violating Induction.
In the rest, at least one of the triangles in G^s[U] is not witnessed.
Suppose without loss of generality that {v, u_1, x_1} is not witnessed.
By Proposition <ref>, the edge u_1 x_1 is also collateral.
We find witnesses w_1 and w_2 of the edges v u_1 and u_1 x_1, respectively.
We also find a witness w_3 of the edge v u_2.
Let w_4 = w_3 if w_3 witnesses the clique {v, u_2, x_1}, or a witness of edge u_2 x_1 otherwise (the edge u_2 x_1 must be collateral when the clique is not witnessed by Proposition <ref>).
By Minimality,
V(G) = U∪{s, w_1, w_2, w_3, w_4}.
By Proposition <ref>, w_1 cannot be adjacent to w_2 (considering the path v u_1 x_1) or w_3 (considering the path u_1 v u_2).
For the same reason, w_3 and w_4 are not adjacent when they are different.
By Proposition <ref>, N_G^s(w_2)∩ U ⊆{v, u_1, u_2, x_1}.
If they are equal, then P<ref> is violated (we can replace x_1 with w_2), and P<ref> is violated if N_G^s(w_2)∩ U = {v, u_1, x_1}.
Since v∈ N_G^s(w_2) when u_2∈ N_G^s(w_2) (because G^s is chordal),
N_G^s(w_2)∩ U = {u_1, x_1}.
By Proposition <ref> (considering the path u_1 v u_2), w_1 and u_2 are not adjacent.
Since G^s[U∪{w_1, w_2}∖{x_0}] is not an induced sun, w_1 x_1 is a collateral edge.
Then w_1 and w_4 cannot be adjacent (considering the path w_1 x_1 u_2 when w_4 ≠ w_3).
But then there cannot be a witness of w_1 x_1, a contradiction.
Figure <ref> is the net graph, i.e., the smallest incarnation of † graphs, and Figure <ref> is a smallest incarnation of Figure <ref>, with the edge v x_2 collateral.
Their general ones will be Figure <ref> and Figure <ref>, respectively.
|
http://arxiv.org/abs/2409.02404v1 | 20240904030613 | Learning Privacy-Preserving Student Networks via Discriminative-Generative Distillation | [
"Shiming Ge",
"Bochao Liu",
"Pengju Wang",
"Yong Li",
"Dan Zeng"
] | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CR"
] |
Learning Privacy-Preserving Student Networks via Discriminative-Generative Distillation
Shiming Ge, Senior Member, IEEE,
Bochao Liu,
Pengju Wang,
Yong Li,
and Dan Zeng, Senior Member, IEEE
Shiming Ge, Bochao Liu, Pengju Wang and Yong Li are with the Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100095, China, and with School of Cyber
Security at University of Chinese Academy of Sciences, Beijing 100049, China. Email: {geshiming, liubochao, wangpengju, liyong}@iie.ac.cn.
Dan Zeng is with the School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China. E-mail: [email protected].
Y. Li is the corresponding author. (e-mail: [email protected])
============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
While deep models have proved successful in learning rich knowledge from massive well-annotated data, they may pose a privacy leakage risk in practical deployment. It is necessary to find an effective trade-off between high utility and strong privacy. In this work, we propose a discriminative-generative distillation approach to learn privacy-preserving deep models. Our key idea is taking models as a bridge to distill knowledge from private data and then transfer it to learn a student network via two streams. First, the discriminative stream trains a baseline classifier on private data and an ensemble of teachers on multiple disjoint private subsets, respectively. Then, the generative stream takes the classifier as a fixed discriminator and trains a generator in a data-free manner. After that, the generator is used to generate massive synthetic data, which are further applied to train a variational autoencoder (VAE). Among these synthetic data, a few are fed into the teacher ensemble to query labels via differentially private aggregation, while most are embedded into the trained VAE for reconstructing synthetic data. Finally, semi-supervised student learning is performed to simultaneously handle two tasks: knowledge transfer from the teachers with distillation on a few privately labeled synthetic data, and knowledge enhancement with tangent-normal adversarial regularization on many triples of reconstructed synthetic data. In this way, our approach can control query cost over private data and mitigate accuracy degradation in a unified manner, leading to a privacy-preserving student model. Extensive experiments and analysis clearly show the effectiveness of the proposed approach.
Differentially private learning, teacher-student learning, knowledge distillation.
§ INTRODUCTION
Deep learning <cit.> has delivered impressive performance in image recognition <cit.> due to the powerful capacity of deep networks in learning rich knowledge from large-scale annotated data. However, the deployment of deep models may suffer from the risk of data privacy leakage. Recent works <cit.> have shown that the private information in the training data can be easily recovered with a few accesses to the released model. Meanwhile, many real-world applications <cit.> require providing high-performance models while protecting data privacy. It is therefore necessary to explore a feasible solution to a key challenge in model deployment: how to effectively learn a privacy-preserving deep model without a remarkable loss of inference accuracy?
Compared to traditional learning solutions that directly access private data and lead to privacy leakage in the released model (in red in Fig. <ref>), privacy-preserving learning solutions usually add privacy protection strategies or prevent the released model (in green in Fig. <ref>) from directly accessing private data during training. Towards this end, many existing approaches have been proposed, which are mainly based on differential privacy <cit.>. According to the privacy-preserving strategy, they can be roughly grouped into two categories: the implicit category and the explicit category.
The “implicit” approaches leverage differentially private learning to train models from private data by enforcing differential privacy on the weights or gradients during training. The pioneering approach introduces differential privacy into stochastic gradient descent to train deep networks <cit.>. Later, Chen <cit.> proposed a stochastic variant of the classic backtracking line search algorithm to reduce privacy loss. Papernot <cit.> proposed to replace the unbounded ReLU activation function with a bounded tempered sigmoid function to retain more gradient information. Some recent works proposed gradient operations for private learning by denoising <cit.>, clipping <cit.>, perturbation <cit.>, or compression <cit.>. Typically, these approaches have promising privacy protection, but often suffer from a big drop in inference accuracy over their non-private counterparts. By contrast, the “explicit” approaches pretrain models on private data and then use auxiliary public or synthetic data to learn the released models with knowledge transfer by enforcing differential privacy on the outputs of pretrained models <cit.>. Papernot <cit.> proposed private aggregation of teacher ensembles (PATE) to learn a student model by privately transferring the teacher knowledge with public data, while Hamm <cit.> proposed a new risk weighted by class probabilities estimated from the ensemble to reduce the sensitivity of majority voting. Zhu <cit.> proposed a practically data-efficient scheme based on private release of k-nearest neighbor queries, which can avoid the decline of accuracy caused by partitioning the training set. These approaches can generally improve the model performance when massive unlabeled public data with the same distribution as the private data are available. However, such public data are difficult to obtain, and the models trained in this way are still at risk of malicious attacks. Recent works <cit.> trained a private generator and used the generated synthetic data to replace the auxiliary public data. Generally speaking, the most important process in the explicit category is transferring sufficient knowledge from private data to auxiliary data with minimal privacy leakage. Thus, the key issue that needs to be carefully addressed is applying reliable models to extract knowledge from private data and exploring effective auxiliary data to transfer knowledge.
Inspired by this fact, we propose a teacher-student learning approach to train privacy-preserving student networks via discriminative-generative distillation, which applies discriminative and generative models to distill private knowledge and then explores generated synthetic data to perform knowledge transfer (Fig. <ref>). The objective is to enable effective learning that achieves a promising trade-off between high model utility and strong privacy protection. As shown in Fig. <ref>, the student is trained by using two streams. First, the discriminative stream trains a baseline classifier on all private data and an ensemble of separate teachers on disjoint subsets, while the generative stream takes the baseline classifier as a fixed discriminator and trains a generator in a data-free manner. Massive synthetic data are then generated with the generator and used to train a variational autoencoder (VAE). After that, a few of the synthetic data are fed into the teacher ensemble to query labels with Laplacian aggregation, while most of the synthetic data are fed into the VAE to obtain massive data triples by perturbing the latent codes. Finally, semi-supervised learning is performed by simultaneously handling two tasks: knowledge transfer via supervised data classification, and knowledge enhancement via self-supervised model regularization.
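To make the label-query step concrete, here is a minimal numpy sketch of the Laplacian aggregation in the discriminative stream, assuming each teacher returns a hard label for the queried synthetic sample; the function name and the privacy parameter gamma are illustrative, and the noise scale 1/gamma follows the usual noisy-max recipe.

```python
import numpy as np

def noisy_label(teacher_labels, num_classes, gamma, rng):
    """Differentially private aggregation of one query: count the teachers'
    votes per class, add Laplace noise of scale 1/gamma to every count,
    and release only the noisy argmax."""
    votes = np.bincount(teacher_labels, minlength=num_classes).astype(float)
    votes += rng.laplace(scale=1.0 / gamma, size=num_classes)
    return int(np.argmax(votes))

rng = np.random.default_rng(0)
label = noisy_label(np.array([2, 2, 1, 2, 0, 2, 2, 1, 2, 2]), 3, gamma=0.5, rng=rng)
```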
In summary, our approach can effectively learn privacy-preserving student networks through three key components. First, data-free generator learning is incorporated to generate massive synthetic data. These synthetic data are difficult to identify from appearance but have a distribution similar to that of the private data in the discriminative space. Therefore, the student learning does not involve any private data, and the synthetic data do not expose the information of the private data even if they are recovered. Second, differential privacy is incorporated to provide a strong privacy guarantee theoretically. In the Laplacian aggregation of the teacher ensemble, the student's access to its teachers is limited by reducing label queries, so that the student's exposure to the teachers' knowledge can be meaningfully quantified and bounded. Third, tangent-normal adversarial regularization is adopted to improve the capacity and robustness of the student. In semi-supervised student learning, synthetic data are embedded into the pretrained VAE space and reconstructed from latent codes by adding perturbation along both tangent and normal directions of the distribution manifold. Then, the tangent regularization enforces the local smoothness of the student along the underlying manifold and improves model accuracy, while the normal regularization imposes robustness on the student against noise. In this way, the two regularization terms complement each other, jointly facilitating knowledge transfer from the teacher ensemble to the student. Our approach provides a unified framework to learn privacy-preserving student networks: the data-free generator learning and Laplacian aggregation protect the private data, while adversarial regularization via VAE reconstruction of the synthetic data better learns the data manifold. Combining them protects private data while reducing the impact of noisy labels and the instability of the synthetic data.
Our major contributions are threefold: 1) we propose a discriminative-generative distillation approach to train privacy-preserving student networks that achieves an effective trade-off between high utility and strong privacy; 2) we propose to combine data-free generator learning and VAE-based model regularization, which facilitates knowledge transfer in a semi-supervised manner; and 3) we conduct extensive experiments and analysis to demonstrate the effectiveness of our approach.
§ RELATED WORKS
The approach we proposed in this paper aims to learn privacy-preserving student networks by distilling knowledge from private data and transferring it to synthetic data. Therefore, we briefly review related works from three aspects, including differentially private learning, learning with synthetic data and teacher-student learning.
§.§ Differentially Private Learning
Differentially private learning <cit.> aims to address tasks like healthcare <cit.> where the data are private and the learning process meets differential privacy requirements. Differential privacy provides a guarantee that two adjacent databases produce statistically indistinguishable results under a reasonable privacy budget.
Previous works <cit.> considered using differential privacy in machine learning settings. Shokri <cit.> introduced a privacy-preserving distributed stochastic gradient descent (SGD) algorithm which applies to non-convex models. Its privacy bound is decided by the number of model parameters that are related to the representation ability of the model, leading to an inefficient trade-off between privacy and model capacity. Abadi <cit.> provided a stricter bound on the privacy loss induced by a noisy SGD by introducing the moments accountant. Papernot <cit.> proposed a general framework named private aggregation of teacher ensembles (PATE) for private training. PATE uses semi-supervised learning to transfer the knowledge of the teacher ensemble to the student by using a differentially private aggregation. It uses the assumption that the student has access to additional unlabeled data. To reduce erroneous aggregation results, Xiang <cit.> proposed a private consensus protocol by returning only the highest voting results above a threshold in the aggregation of teacher ensembles, leading to accuracy improvement under the same privacy level. Gao <cit.> improved PATE to securely and efficiently harness distributed knowledge by using lightweight cryptography, which can achieve strong protection for individual labels. Miyato <cit.> proposed virtual adversarial training to avoid the requirement of label information, which reduces queries to the private model and protects data privacy. Jagannathan <cit.> combined the Laplacian mechanism with decision trees and proposed a random forest algorithm to protect privacy. The idea of differentially private learning can suggest the usage of data for training models under a certain privacy budget.
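For reference, the core of the noisy SGD recipe mentioned above can be sketched as follows; this is a schematic numpy version under our own naming, whereas real implementations clip per-example gradients of a network and track the cumulative privacy loss with the moments accountant.

```python
import numpy as np

def dpsgd_update(params, per_example_grads, clip, sigma, lr, rng):
    """One DP-SGD step: clip each per-example gradient to L2 norm `clip`
    (the sensitivity), average, add Gaussian noise proportional to
    sigma * clip, and descend."""
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(scale=sigma * clip / len(clipped), size=mean.shape)
    return params - lr * (mean + noise)
```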
§.§ Learning with Synthetic Data
With the development of generative adversarial networks (GANs) <cit.>, recent works began to use synthetic data in training deep networks. Zhang et al. <cit.> found that the performance of classifiers trained in a semi-supervised manner using synthetic data could not be guaranteed and proposed Bad GAN to preferentially select the generator, which greatly improves the feature matching of GANs. Dumoulin et al. <cit.> proposed to jointly learn a generation network and an inference network using synthetic data generated by the generation network, achieving very competitive performance. Salimans et al. <cit.> presented a variety of architectural features and training procedures, which improve the performance of both the classifier and the generator. Kumar et al. <cit.> proposed to estimate the tangent space to the data manifold using GANs and employ it to inject invariances into the classifier, which greatly improves the semantic similarity of the reconstructed samples with the input samples. Luo et al. <cit.> introduced smooth neighbors on teacher graphs, which improves the performance of the classifier through the implicit self-ensemble of models. Qi et al. <cit.> presented localized GAN to learn the manifold of real data, which can not only produce diverse image transformations but also deliver superior classification performance. The works <cit.> used differentially private stochastic gradient descent (DPSGD) to train GANs, which has been proven effective in generating high-dimensional sanitized data <cit.>. However, DPSGD relies on careful tuning of the clipping bound of the gradient norm, i.e., the sensitivity value. Specifically, the optimal clipping bound varies greatly with model architecture and training dynamics, making the implementation of DPSGD difficult. To solve this problem, Chen et al. <cit.> used Wasserstein GANs <cit.> for a precise estimation of the sensitivity value, avoiding the intensive search of hyper-parameters while reducing the clipping bias. Generally, these approaches aim to generate synthetic data to facilitate model learning, while the privacy issue introduced by generated data is less considered.
§.§ Teacher-Student Learning
Typically, teacher-student learning applies knowledge distillation <cit.> to learn a more compact student model by mimicking the behaviors of a complex teacher model. It is used for model compression while hardly degrading model performance. In vanilla knowledge distillation, by using the softmax output of the teacher network as soft labels instead of hard class labels, the student model can learn in a compact form how the teacher network behaves on given tasks. Since then, many works <cit.> have used and improved this training method. Romero et al. <cit.> proposed to add an additional linear projection layer. Tian et al. <cit.> proposed to combine contrastive learning with knowledge distillation. The teacher-student learning manner has been applied in many applications, such as low-resolution face recognition <cit.>, action recognition <cit.>, semantic segmentation <cit.>, data generation <cit.> and molecular generation <cit.>. For circumstances where training data for the teacher are unavailable due to practical concerns such as privacy, Chen et al. <cit.> proposed a data-free knowledge distillation framework. It regards the pretrained teacher network as a fixed discriminator and trains a generator to synthesize training samples for the student. To protect the privacy of the data, some works utilize structural improvements <cit.>, such as training a collection of teacher models <cit.>. Recently, the distillation idea has been used to control privacy loss <cit.>. The key issue in learning privacy-preserving models with distillation is to make knowledge transfer adequate and private.
§ THE APPROACH
§.§ Problem Formulation
Given a private dataset 𝒟, the objective is to learn a privacy-preserving student ϕ_s that does not reveal data privacy and has capacity approximating the baseline model ϕ_b trained directly on 𝒟. To achieve that, we introduce both discriminative and generative models to enforce knowledge transfer via discriminative-generative distillation with two streams. The discriminative stream partitions 𝒟 into n disjoint subsets 𝒟 = {𝒟_i}_i=1^n and learns an ensemble of multiple teachers ϕ_t={ϕ_t,i}_i=1^n where ϕ_t,i is trained on 𝒟_i. The generative stream takes ϕ_b as a fixed discriminator and learns a generator ϕ_g to generate massive synthetic data 𝒟̂. A VAE {ϕ_e,ϕ_d} is pretrained on the synthetic data, where ϕ_e and ϕ_d are the encoder and decoder respectively. The pretrained VAE is used to obtain data distribution information to facilitate model learning. To reduce the privacy budget, only a small subset of synthetic data 𝒟̂_s⊂𝒟̂ is used to query the teacher ensemble and get the noisy labels ℒ̂_s. The remaining unlabelled data 𝒟̂_̂û = 𝒟̂∖𝒟̂_̂ŝ with |𝒟̂_u| ≫ |𝒟̂_s| are employed to provide manifold regularization with the help of the VAE. Thus, the student learning can be formulated as minimizing an energy function 𝔼:
𝔼(𝕎_s;𝒟̂) = 𝔼_s(ϕ_s(𝕎_s;𝒟̂_s);ℒ̂_s) + 𝔼_u(ϕ_s(𝕎_s;ϕ_d(ϕ_e(𝒟̂_u))), ϕ_s(𝕎_s;ϕ_d(ℙ[ϕ_e(𝒟̂_u)]))),
where 𝕎_s denotes the parameters of the student, ℙ[·] is the perturbation operator, and 𝔼_s and 𝔼_u are the supervised and unsupervised energy terms, respectively.
We can see that the risk of privacy leakage can be effectively suppressed due to the isolation between the released student model and private data. The supervised energy term enforces knowledge transfer of the class-related characteristics, while the unsupervised energy term performs self-supervised regularization to enhance knowledge. Towards this end, we minimize the energy function above via three steps: 1) data-free generator learning to get synthetic data 𝒟̂ and train a VAE {ϕ_e,ϕ_d}, 2) teacher ensemble learning to obtain the labels ℒ̂_s by differentially private aggregation, and 3) semi-supervised student learning to get 𝕎_s.
§.§ Data-Free Generator Learning
Knowledge transfer from a model trained on private data, or from synthetic data generated by a GAN pretrained on private data, may lead to privacy leakage. Meanwhile, models trained on public data may suffer significant accuracy degradation due to distribution mismatch, since finding public data that match the distribution of private data is often very difficult <cit.>. Moreover, the resulting models are vulnerable to attacks since adversaries can also access the public data. Thus, we aim to learn a generator in a data-free manner to generate synthetic data, which assists knowledge transfer from private data without compromising privacy when learning the released models.
Unlike traditional GAN training where the discriminator is an online learned two-class classifier, our data-free generator learning first pretrains a baseline multi-class classifier ϕ_b (with parameters 𝕎_b) on private data 𝒟 that serves as the fixed discriminator, and then trains the generator ϕ_g without data. It has been suggested that the tasks of discrimination and classification can improve each other and that a multi-class classifier can learn the data distribution better than a two-class discriminator <cit.>. Thus, the key to using the multi-class classifier as discriminator is defining a loss to evaluate the generated data. Towards this end, we assess a synthetic example 𝐱=ϕ_g(𝕎_g;𝐳), generated by ϕ_g with parameters 𝕎_g from a random vector 𝐳, by the following loss:
ℒ(𝐱)= ℓ(ϕ_b(𝕎_b;𝐱),argmax_j(ϕ_b(𝕎_b;𝐱))_j) +
αϕ_b(𝕎_b;𝐱)logϕ_b(𝕎_b;𝐱) - β‖ϕ_b(𝕎_b^-;𝐱)‖_1,
where α and β are tuning parameters to balance the effect of the three terms, and we set them to 5 and 0.1 respectively. The first term ℓ(·) is the cross-entropy function that measures the one-hot classification loss, which enforces the generated data to have a similar distribution to the private data. The second term is the information entropy loss that measures the class balance of the generated data. The third term uses the l_1-norm ‖·‖_1 to measure the activation loss, entering with a negative sign since the features ϕ_b(𝕎_b^-;𝐱) extracted by the discriminator before the fully-connected layer tend to receive higher activation values if the input data are real rather than random vectors, where 𝕎_b^-⊂𝕎_b are the discriminator's backbone parameters. Then, using the fixed discriminator, the generator learning is performed iteratively via five steps (a PyTorch-style sketch of one iteration follows the list):
* randomly generate a batch of noise vectors: {𝐳_i}_i=1^m.
* generate synthetic samples {𝐱_i}_i=1^m for training: 𝐱_i=ϕ_g(𝕎_g;𝐳_i).
* apply the discriminator on the mini-batch: 𝐲_i=ϕ_b(𝕎_b;𝐱_i).
* calculate the loss ℒ above on the mini-batch: ∑_iℒ(𝐱_i).
* update weights 𝕎_g using back-propagation.
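To make the five steps concrete, the following is a minimal PyTorch-style sketch of one generator iteration, not the authors' code. It assumes the pretrained classifier is split into a frozen backbone phi_b_backbone (features before the fully-connected layer) and a head phi_b_head; the batch size m, latent dimension z_dim, and helper names are illustrative assumptions, and the activation term enters with a negative sign so that minimizing the loss encourages the high activations described above.

import torch
import torch.nn.functional as F

def generator_step(phi_g, phi_b_backbone, phi_b_head, opt_g,
                   m=128, z_dim=100, alpha=5.0, beta=0.1):
    # Both discriminator parts are assumed frozen (requires_grad=False).
    z = torch.randn(m, z_dim)                             # step 1: noise batch
    x = phi_g(z)                                          # step 2: synthetic samples
    feats = phi_b_backbone(x)                             # features before the FC layer, (m, d)
    logits = phi_b_head(feats)                            # step 3: discriminator outputs
    probs = F.softmax(logits, dim=1)
    loss_oh = F.cross_entropy(logits, logits.argmax(dim=1))           # one-hot term
    loss_ie = (probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean() # = -entropy (class balance)
    loss_act = -feats.abs().sum(dim=1).mean()             # encourage high l1 activations
    loss = loss_oh + alpha * loss_ie + beta * loss_act    # step 4: total loss
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()                                          # step 5: update W_g
    return loss.item()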
In this way, the synthetic data 𝒟̂ generated by the learned generator have a similar distribution to the private data without compromising privacy. Fig. <ref> shows some examples. The synthetic data are very helpful for student learning: they greatly improve accuracy compared to using public data and reduce the accuracy loss compared to using private data directly. With 𝒟̂, we train a VAE {ϕ_e, ϕ_d}, where the encoder ϕ_e with parameters 𝕎_e and decoder ϕ_d with parameters 𝕎_d are constructed with convolutional neural networks like <cit.>.
§.§ Teacher Ensemble Learning
Instead of using a single model as the teacher, which may lead to privacy leakage <cit.>, we learn an ensemble of teachers for knowledge transfer. Towards this end, we partition the private data 𝒟 into n disjoint subsets {𝒟_i}_i=1^n and then separately train a teacher ϕ_t,i with parameters 𝕎_t,i on each subset 𝒟_i, leading to the teacher ensemble ϕ_t={ϕ_t,i}_i=1^n.
In general, the number of teachers n has an impact on knowledge extraction from private data. When n is too large, each training subset gets less data and the teachers may be underfitted. When n is too small, the noise of differential privacy becomes more influential and leads to unusable aggregated labels. Thus, the teacher number n should be chosen carefully in practice.
The teacher ensemble serves label queries, where a synthetic example 𝐱∈𝒟̂_s is fed in and the labels predicted by the multiple teachers are privately aggregated:
l=argmax_k{𝒱_k({ϕ_t,i(𝕎_t,i;𝐱)}_i=1^n)+ Lap(2/ε_0)},
where 𝒱_k(·) counts the votes for the query being predicted as class k among all n teachers, the final predicted label l is noisy and used to supervise the student training, a low privacy budget ε_0 is used to adjust privacy protection, and Lap(2/ε_0) denotes the Laplacian distribution with location 0 and scale 2/ε_0.
For student training, each example from the query data 𝒟̂_s is fed into the teacher ensemble and the prediction is privately aggregated via the Laplacian aggregation above, leading to ℒ̂_s={l_i}_i=1^|𝒟̂_s|. Directly using the maximum vote count as the label may leak privacy, so we add random noise to the voting results to introduce ambiguity. Intuitively, this means that multiple teachers jointly determine the query result, making it difficult for an adversary to recover the training data. In addition, our approach can provide the same or stronger privacy guarantee than many state-of-the-arts <cit.> while reducing accuracy degradation through knowledge enhancement with an extra model regularization. It also means that our approach incurs a lower privacy cost when delivering student models with the same accuracy.
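A minimal NumPy sketch of this noisy aggregation for a single query is given below; the function name, the seed argument, and the use of a NumPy random generator are illustrative assumptions, while the vote counting, the Lap(2/ε_0) noise, and the final argmax follow the equation above.

import numpy as np

def noisy_aggregate(teacher_preds, num_classes, eps0, seed=None):
    # teacher_preds: length-n integer array of class indices, one per teacher.
    rng = np.random.default_rng(seed)
    votes = np.bincount(teacher_preds, minlength=num_classes).astype(float)
    votes += rng.laplace(loc=0.0, scale=2.0 / eps0, size=num_classes)  # Lap(2/eps0)
    return int(np.argmax(votes))  # noisy label used to supervise the student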
§.§ Semi-Supervised Student Learning
To reduce privacy leakage, we use only a small subset of synthetic data 𝒟̂_s for label queries. Thus, the teacher knowledge that transfers from private data to 𝒟̂_s is not only noisy due to Laplacian aggregation but also insufficient due to limited data. To enhance knowledge transfer, we learn the student in a semi-supervised fashion by adding another unsupervised pathway.
Each synthetic example 𝐱_j∈𝒟̂_u is embedded into the VAE space to get a mean vector μ_j and a standard deviation vector σ_j with {μ_j,σ_j}=ϕ_e(𝕎_e;𝐱_j), which form a normal distribution 𝒩(μ_j,σ_j). Then, the data is reconstructed from a sampled code 𝐞_j as well as its perturbed versions along the tangent and normal directions of the distribution manifold, leading to massive data triples 𝒯={(𝐱̂_j,𝐱̂_j^∥,𝐱̂_j^⊥)}_j=1^|𝒟̂_u| with:
𝐱̂_j=ϕ_d(𝕎_d;ℳ(𝐞_j)), 𝐱̂_j^*=ϕ_d(𝕎_d;ℳ(𝐞_j+𝐧_j^*)),
where the mapping operator ℳ(·) projects the code into the decoder input, and 𝐧_j^* is random perturbation noise along the tangent direction (*=∥) or normal direction (*=⊥). Then, the semi-supervised student learning is performed with {𝒟̂_s,ℒ̂_s} and 𝒯. The supervised energy in the objective above can be formulated as
𝔼_s = ∑_i=1^|𝒟̂_s|ℓ(ϕ_s(𝕎_s;𝐱_i),l_i), s.t. 𝐱_i ∈𝒟̂_s, l_i ∈ℒ̂_s,
and the unsupervised energy is formulated as
𝔼_u = ∑_j=1^|𝒟̂_u|‖ϕ_s(𝕎_s^-;𝐱̂_j)-ϕ_s(𝕎_s^-;𝐱̂_j^⊥)‖^2
+∑_j=1^|𝒟̂_u|‖ϕ_s(𝕎_s^-;𝐱̂_j)-ϕ_s(𝕎_s^-;𝐱̂_j^∥)‖^2
-∑_j=1^|𝒟̂_u|ϕ_s(𝕎_s;𝐱̂_j)logϕ_s(𝕎_s;𝐱̂_j),
where 𝕎_s^-⊂𝕎_s are the backbone parameters of the student used for extracting features. We can see that the unsupervised energy above includes normal regularization, tangent regularization and entropy regularization. The first two regularization terms enhance model robustness against perturbations along directions orthogonal and parallel to the underlying data manifold respectively, while entropy regularization (whose sign is chosen so that minimizing the energy minimizes the prediction entropy) ensures that the student outputs more determinate predictions. This tangent-normal adversarial regularization, which adds perturbation to the latent layer, makes the student vary smoothly along the tangent space and gives it strong robustness along the normal space <cit.>.
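The following PyTorch-style sketch illustrates the construction of a reconstructed triple and the unsupervised energy above. It assumes an encoder returning (mu, logvar), a decoder that absorbs the mapping operator ℳ(·), and a student split into a backbone and a head; the isotropic Gaussian noise stands in for the tangent and normal directions, whose exact computation (e.g., via the decoder Jacobian) is elided, so this captures the loss structure rather than the full method.

import torch
import torch.nn.functional as F

def make_triple(encoder, decoder, x, sigma=0.1):
    mu, logvar = encoder(x)                               # embed into VAE space
    e = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # sampled code e_j
    # Illustrative isotropic noise; the true tangent/normal directions of the
    # distribution manifold would require the decoder Jacobian.
    n_tan = sigma * torch.randn_like(e)
    n_nor = sigma * torch.randn_like(e)
    return decoder(e), decoder(e + n_tan), decoder(e + n_nor)

def unsup_energy(backbone, head, x_rec, x_tan, x_nor):
    f = backbone(x_rec)                                   # feature vectors, (B, d)
    loss_tan = (f - backbone(x_tan)).pow(2).sum(dim=1).mean()  # tangent term
    loss_nor = (f - backbone(x_nor)).pow(2).sum(dim=1).mean()  # normal term
    p = F.softmax(head(f), dim=1)
    # Entropy term, signed so that minimizing the energy yields confident
    # (low-entropy) predictions, as described in the text.
    loss_ent = -(p * p.clamp_min(1e-8).log()).sum(dim=1).mean()
    return loss_nor + loss_tan + loss_ent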
The total energy comes from two streams. The discriminative stream employs the supervised energy for knowledge transfer from the teacher ensemble with a few query examples, while the generative stream takes the unsupervised energy for knowledge enhancement via model regularization. The differentially private aggregation provides privacy protection, while the VAE embedding and reconstruction capture the characteristics of the data in the tangent and normal spaces. In particular, the generative stream applies self-supervised learning on massive unannotated data to compensate for knowledge that may be missed in the discriminative stream.
§.§ Discussion
Practical Deployment. To learn a privacy-preserving student, our approach trains it from synthetic data generated with a generator pretrained in a data-free manner. Typically, the learning could be deployed on a single server where the private data are partitioned into several subsets to train an ensemble of teachers. Moreover, the learning is also suitable for jointly training models from distributed clients via a trusted server as the coordinator. In this case, each client trains a teacher on its private local data and all teachers form the teacher ensemble, while the trusted server aggregates the local data via centered learning or the local knowledge via federated learning <cit.> to pretrain a baseline classifier that is used to train a generator in a data-free manner. Then, the server applies the generator to generate massive synthetic data that are used to pretrain a VAE. After that, the server splits the synthetic data into two parts: a small portion is distributed to local clients to query labels by noisy aggregation in the discriminative stream, and the majority are fed into the generative stream for VAE reconstruction to get massive synthetic data triples. Finally, the student is trained on the noisy labels and synthetic data triples in a semi-supervised manner within the trusted server. By allowing only the student to be accessible to adversaries, the trained student can be deployed in practical applications with the differential privacy guarantee introduced next.
Privacy Analysis. According to the learning process in two streams, the total privacy budget contains two parts. The discriminative privacy budget is computed as in PATE <cit.>, achieving ε_0-differential privacy via the Laplacian aggregation above and yielding (|𝒟̂_s|ε_0^2+ε_0√(-2|𝒟̂_s|logδ),δ)-differential privacy over |𝒟̂_s| queries for all δ∈ (0,1) <cit.>. The generative privacy budget is computed according to the latent code perturbation in VAE reconstruction. By taking the synthetic data in the generative stream as a sequence, we achieve ε_1-differential privacy by adding Laplacian noise with scale 2c/ε_1 to the normalized latent codes, where c is the dimension of the latent codes. It can be explained as follows. According to the Laplacian mechanism and the post-processing theorem <cit.>, we have: for any two different images 𝐱_j and 𝐱_j^' as well as any possible reconstructed output 𝐱̂_j, the VAE reconstruction mechanism 𝒜 satisfies Pr[𝒜(𝐱_j)=𝐱̂_j] ≤exp(ε_1) ·Pr[𝒜(𝐱_j^')=𝐱̂_j], where Pr[·] is the probability function. Then, we have the following theorem.
Theorem: The sequence of the VAE reconstruction mechanism 𝒜, denoted as 𝒜(𝒟̂_u), satisfies ε_1-differential privacy.
Proof: For any two adjacent datasets 𝒟̂_u and 𝒟̂_u^', where 𝐱_j∈𝒟̂_u and 𝐱_j^'∈𝒟̂_u^' are the only two different images, we have
Pr[𝒜(𝒟̂_u) ⊆𝒪] = Pr[𝒜(𝒟̂_u ∩𝒟̂_u^')⊆𝒪] ·Pr[𝒜(𝐱_j)=𝐱̂_j]
≤exp(ε_1) ·Pr[𝒜(𝒟̂_u ∩𝒟̂_u^')⊆𝒪] ·Pr[𝒜(𝐱_j^')=𝐱̂_j]
= exp(ε_1) ·Pr[𝒜(𝒟̂_u^') ⊆𝒪],
where 𝒪 denotes the subset of possible outputs. Eq. (<ref>) indicates that 𝒜(𝒟̂_u) satisfies ε_1-differential privacy according to the definition of differential privacy <cit.>. Further, according to the composition theorem <cit.>, our approach finally satisfies (|𝒟̂_s|ε_0^2+ε_0√(-2|𝒟̂_s|logδ)+ε_1,δ)-differential privacy and gives the differential privacy guarantee.
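The composed budget can be checked numerically. The small script below (illustrative, using natural logarithms) reproduces the discriminative budget of about 5.80 quoted later for 400 queries with ε_0=1/20 and δ=10^-5; adding the generative part ε_1=0.01 changes it only marginally.

import math

def dp_budget(num_queries, eps0, delta, eps1=0.0):
    # |D_s| * eps0^2 + eps0 * sqrt(-2 |D_s| log(delta)) + eps1
    disc = num_queries * eps0 ** 2 + eps0 * math.sqrt(-2 * num_queries * math.log(delta))
    return disc + eps1

print(f"{dp_budget(400, 1 / 20, 1e-5):.2f}")        # 5.80 (discriminative part)
print(f"{dp_budget(400, 1 / 20, 1e-5, 0.01):.2f}")  # 5.81 (with generative part)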
§ EXPERIMENTS
To verify the effectiveness of our proposed discriminative-generative distillation approach DGD, we conduct experiments on four datasets (MNIST <cit.>, Fashion-MNIST <cit.> (FMNIST), SVHN <cit.> and CIFAR-10 <cit.>) and perform comparisons with 13 state-of-the-art benchmarks, including 6 explicit approaches that train models with generative data (DP-GAN <cit.>, PATE-GAN <cit.>, GS-WGAN <cit.>, G-PATE <cit.>, DP-MERF <cit.> and DataLens <cit.>), and 7 implicit approaches that train models with differentially private learning (DPSGD <cit.>, zCDP <cit.>, GEDDP <cit.>, DP-BLSGD <cit.>, RDP <cit.>, TSADP <cit.> and GEP <cit.>). Here, all explicit approaches but DP-GAN apply teacher-student learning to distill models, while all implicit approaches perform differentially private learning without model distillation. To make the comparisons fair, our experiments use the same experimental settings as these benchmarks and take the results from their original papers. Note that the original PATE <cit.> conducted experiments with private data to simulate public data, thus we only compare to it in the component analysis experiment.
MNIST and FMNIST are both 10-class datasets containing 60K training examples and 10K testing examples. The examples are 28× 28 grayscale handwriting digit images or fashion images. SVHN is a real-world 32×32 color digit image dataset that contains 73257 training examples, 26032 testing examples and 531131 extra training examples. CIFAR10 consists of 60K 32×32 color images in 10 classes, including 50K for training and 10K for testing.
For each dataset, we take its training examples as private data and directly learn a baseline classifier as the discriminator as well as an ensemble of teachers, and then transfer the teacher knowledge to learn the student. We set the Laplacian noise scale to 2/ε_0=40. With the learned generator, we generate the same number of synthetic data as the private training data by randomly generating latent codes and feeding them into the generator, e.g., generating 60K synthetic images for MNIST. In VAE reconstruction, we set c=32 and ε_1=0.01. The models are evaluated on the testing examples with privacy cost, classification accuracy and accuracy drop with respect to the baseline.
We mainly use simple network structures, the same as in the benchmarks, for the teachers and student to conduct the experiments. On MNIST and FMNIST, the networks of the baseline and teachers have the same structure, which contains two 3×3 convolutional layers (with ReLU activation and max-pooling, and 64 and 128 channels, respectively) to extract features hierarchically, followed by a softmax output layer indicating 10 classes. Each convolutional layer has a stride of 1 with 1-padding and is randomly initialized with Xavier initialization. On SVHN, we add two extra fully-connected layers (with 384 and 192 neurons) with ReLU. We use the Adam optimization algorithm to learn all models and set the batch size to 128. To learn all teachers, the iteration rounds are 3000; the learning rate is first set to 0.05 and decays linearly with the iteration round to 0. For generator learning, the iteration rounds are 200; the learning rate is first set to 0.2 and decays by a factor of 10 every 80 rounds. To learn the VAE and student, the iteration rounds are 500; the learning rate is first set to 0.001 and decays linearly with the iteration round to 0. The teacher number on these three datasets is 250. We also study a complex structure for teachers and conduct an experiment on CIFAR10 with 100 teachers. Here, we fine-tune a vision transformer <cit.> on the CIFAR10 training set and modify the dimension of the last fully-connected layer to 10. The model is pretrained on ImageNet and gives a top-1 classification accuracy of 81.4%.
§.§ State-of-the-Art Comparison
We conduct comparisons with 6 explicit approaches under different privacy budgets on MNIST and FMNIST and with 7 implicit approaches on CIFAR10. The performance is evaluated with the test accuracy of the student and the accuracy drop with respect to its baseline under (ε,δ)-differential privacy. Here, ε is the privacy budget and δ is the failure probability. A lower privacy budget means a stronger privacy guarantee.
Comparisons with 6 Explicit Approaches. In the comparisons, we check the performance under different privacy budgets and report the results in Tab. <ref>. Our approach takes 1300 queries under ε=10.0 and 27 queries under ε=1.00. All approaches are under a low failure probability δ=10^-5. The test accuracy of the baseline model is 99.2% on MNIST and 91.0% on FMNIST.
From Tab. <ref>, under the same high privacy budget, we can see that our student achieves the highest test accuracy of 97.4% on MNIST and 88.2% on FMNIST, which remarkably reduces the accuracy drop by 1.80% and 2.80% respectively. This shows that our approach has the best privacy-preserving ability and minimal accuracy drop.
Under the same low privacy budget, all approaches suffer accuracy drops with respect to their counterparts under the high privacy budget. However, our student still delivers the highest test accuracy and the lowest accuracy drop on both datasets. These results imply that the discriminative stream plays an important role in knowledge transfer from private data. First, the discriminative stream provides class identity supervision; thus, we cannot rely on the generative stream alone. Second, it uses a certain number of queries to balance privacy protection and model accuracy.
Comparisons with 7 Implicit Approaches. In addition, we conduct an experimental comparison on CIFAR10 and report the results in Tab. <ref>, where our approach achieves the highest accuracy of 73.6% under the lowest privacy budget of 3.00. The main reason is that our approach adopts an extra generative stream to enhance knowledge transfer with massive synthetic data generated by a data-free learned generator. In this way, the missing knowledge can be recovered from the generative stream and the accuracy can be improved.
§.§ Component Analysis
Having achieved this promising performance, we further analyze the impact of each component in our approach, including label query, generator learning, teacher ensemble, noisy aggregation, and VAE-based regularization.
Label Query. To study the query effect on the trade-off between model accuracy and privacy protection, we compare student learning under 27, 750, 1000 and 3000 queries. We treat the labeling of a generated example by the teacher ensemble as a query. The query number determines the privacy budget and failure probability, and we use differential privacy with the moments accountant <cit.> as the metric. More queries cost a larger privacy budget, and a fixed query number leads to a constant privacy cost. The results are shown in Fig. <ref>. They are as expected: a higher privacy budget leads to a higher model accuracy. Besides, in our approach, the only private information that the delivered student can directly access is the noisy teachers' prediction outputs that pass through Laplacian aggregation. The results also reveal that our student learning by discriminative-generative distillation can be performed robustly and consistently under different label queries, and providing a certain number of examples (e.g., 750) can lead to an impressive accuracy of 95.2%. This is very helpful in many practical applications where only a few samples are available for sharing. Therefore, our approach can effectively learn privacy-preserving student models and control the accuracy drop.
Generator Learning. To study the effect of data generation, we conduct student learning on MNIST, FMNIST and CIFAR10 with the raw private data as well as synthetic data generated by four generative approaches, including ACGAN <cit.>, WGAN <cit.>, InfoGAN <cit.> and our data-free learned generator. The results are shown in Fig. <ref> and some generated examples can be seen in Fig. <ref>. It is easy to distinguish the images generated by ACGAN, WGAN and InfoGAN, implying that these generators learned with private data may expose data privacy in spite of achieving higher accuracy. By contrast, the synthetic images generated by the data-free learned generator can hardly be identified by humans. Thus, the data-free learned generator effectively protects privacy while delivering comparable accuracy, since it matches the distribution of private data in discriminative space.
Beyond the generator learning method, we further check the impact of synthetic data. Towards this end, using the generator trained with the MNIST baseline as the fixed discriminator, we generate 8 synthetic datasets of various sizes to train students and report their performance in Fig. <ref>. We find that the model accuracy increases when training on more synthetic data and levels off after the amount of synthetic data reaches 60K, which equals the number of private training examples. Therefore, we generate the same number of synthetic data as the private training data to provide a good trade-off between model performance and training efficiency.
Teacher Ensemble. We check the effect of the teacher number on the accuracy of the teacher ensemble. The top left of Fig. <ref> shows the results on three datasets for evaluating the effect on simple and complex classification tasks. We find that the accuracy increases with the teacher number within a certain range, indicating that the model performance can be boosted by increasing the teacher number within that range. This is very helpful in real-world applications like federated learning <cit.>, where the private model can be improved by adding more data-sharing parties. The performance starts to degrade when the teacher number exceeds a certain value: the amount of partitioned training data for each teacher becomes inadequate for learning, suggesting careful selection of the teacher number.
VAE-based Regularization. To check the effect of the VAE on student learning, we modify our approach for comparison to PATE <cit.> with Laplacian aggregation as well as its improved variant PATE+ <cit.> with Gaussian aggregation under the same experimental settings. Towards this end, we remove the generator and feed private training data (simulating public unlabeled data like <cit.>) into the VAE to learn students with Laplacian and Gaussian aggregation, leading to two modified approaches denoted as DGD and DGD+, respectively. We train various students on MNIST and SVHN under different privacy budgets and conduct the comparisons. In our experiments, a small portion of the training data serves as queries in noisy aggregation, and most of the remaining training data are fed into the VAE, where each example is reconstructed into a synthetic triple. The results are reported in Fig. <ref>, where using the VAE in our DGD and DGD+ consistently improves model accuracy over PATE and PATE+ without sacrificing the privacy guarantee, respectively. For example, DGD+ delivers an accuracy of 92.7% on SVHN, which is very close to the 92.8% achieved by the baseline, implying the effectiveness of VAE-based regularization, since it provides self-supervised knowledge enhancement to compensate for the accuracy drop. We also achieve higher accuracy with Gaussian aggregation than with Laplacian aggregation (e.g., PATE+ vs. PATE, and DGD+ vs. DGD) as stated in <cit.>, implying the significance of noisy aggregation, introduced next.
Noisy Aggregation. During student learning, the obtained query labels are disturbed by noise with a scale of 2/ε_0. Theoretically, the higher the noise scale, the lower the privacy cost and the better the privacy protection. However, an excessively high noise scale may cause label distortion, making it difficult for the student to learn useful knowledge and unsuitable for practical deployment. We study how the noise scale 2/ε_0 affects the privacy cost and report the results in the bottom left of Fig. <ref>. We observe that the privacy cost declines rapidly with the noise scale within a certain range, has a short rise in the middle, and then stays smooth. We suspect the main reason lies in the calculation of the privacy protection metric with the moments accountant. Thus, we can select a noise scale (e.g., 2/ε_0=30) to provide a good trade-off between privacy protection and model accuracy.
To further study the effect of noisy aggregation, we conduct experiments on MNIST and FMNIST and report the results in the top right of Fig. <ref>, where DGD+ improves DGD with Gaussian aggregation. It shows that all students' accuracy improves rapidly at first and then levels off as the privacy budget increases. Moreover, as expected, the accuracy of students trained with Gaussian aggregation is remarkably improved over Laplacian aggregation, suggesting that more advanced noisy aggregation mechanisms can be incorporated into our framework to facilitate performance.
§.§ Privacy-Preserving Analysis
We further study the privacy-preserving ability of the learned students. Towards this end, we conduct both theoretical analysis and practical analysis.
In theory, our approach trains students from VAE-reconstructed synthetic data in the generative stream and noisy labels in the discriminative stream, where the reconstructed synthetic data are achieved by inputting synthetic data into the VAE and reconstructing from noisy latent codes. As discussed in <ref>, the total privacy budget contains two parts and amounts to (|𝒟̂_s|ε_0^2+ε_0√(-2|𝒟̂_s|logδ)+ε_1,δ)-differential privacy over |𝒟̂_s| queries for all δ∈ (0,1). In our experiments, the generative privacy budget ε_1=0.01 is very small and can be ignored in the total privacy budget. The discriminative privacy budget dominates the total, e.g., reaching 5.80 under ε_0=1/20 and δ=10^-5 over 400 queries.
For the discriminative stream, we first use differential privacy with advanced composition <cit.> to track the privacy loss; see the red curve in the bottom right of Fig. <ref>. To track the privacy loss more tightly, we further use differential privacy with the moments accountant <cit.> and add an advanced limit <cit.> to get a lower privacy budget. The results are shown as the blue curve in the bottom right of Fig. <ref>.
We can see that the privacy guarantee is satisfied under all metrics, e.g., a privacy budget of 10.1 with advanced composition or 8.03 with the moments accountant under 1000 queries, leading to an effective trade-off between privacy-preserving model learning and accuracy drop control.
In practice, a learning process of our approach produces several major models, including: 1) the baseline model (serving as the fixed discriminator) and the ensemble of teachers, which are kept private, 2) the data-free learned generator, which can be delivered to provide valuable synthetic data for training further models in a more privacy-preserving manner than using private or other generative data (as shown in Fig. <ref> and discussed above), and 3) the student, which is delivered for privacy-preserving deployment.
We take the student learned on MNIST under (10.0,10^-5)-differential privacy as an example and study its privacy-preserving ability against three reconstruction attacks, including reconstruction with a data-free learned generator <cit.>, model inversion attack with confidence information and basic countermeasures <cit.>, and adversarial model inversion attack with background knowledge alignment <cit.>. Some reconstructed results of these three attacks are shown in the left of Fig. <ref>, from the first to the third row, respectively. We can see that the reconstructed images are very different from the original images, and humans can hardly tell which digits they are. Thus, the student can well protect data privacy whilst delivering a high accuracy of 97.4%. To further verify the privacy-preserving ability of our students, we investigate a special case by conducting an inversion attack <cit.> against binary classification models that are trained on a subset of MNIST containing only `0's and `1's with GS-WGAN, DataLens and our DGD. The results are shown in the right of Fig. <ref>. The reconstruction results of GS-WGAN and DataLens can be distinguished by humans, while our model provides better protection against reconstruction attacks. From these results, we can safely claim that our approach provides effective privacy-preserving model learning, and the learned models are particularly suitable for practical applications in privacy-conscious scenarios.
§ CONCLUSION
Deep models trained on private data may pose the risk of privacy leakage. To facilitate model deployment, we proposed a discriminative-generative distillation approach to learn privacy-preserving student networks. The approach takes discriminative and generative models as a bridge to distill knowledge from private data and transfer it to learn students in a semi-supervised manner. The supervised learning from the noisy aggregation of multiple teachers provides the privacy guarantee, while the unsupervised learning from massive synthetic data generated by a data-free learned generator reduces the accuracy drop. Extensive experiments and analysis were conducted to show the effectiveness of our approach. In the future, we will devise more advanced differential privacy mechanisms to improve the approach and explore it in more real-world applications like federated learning on medical images.
Acknowledgements. This work was partially supported by grants from the Beijing Natural Science Foundation (19L2040), National Key Research and Development Plan (2020AAA0140001), and National Natural Science Foundation of China (61772513).
Shiming Ge (M'13-SM'15) is a professor with the Institute of Information Engineering, Chinese Academy of Sciences. Prior to that, he was a senior researcher and project manager in Shanda Innovations, a researcher in Samsung Electronics and Nokia Research Center. He received the B.S. and Ph.D degrees both in Electronic Engineering from the University of Science and Technology of China (USTC) in 2003 and 2008, respectively. His research mainly focuses on computer vision, data analysis, machine learning and AI security, especially trustworthy learning solutions towards scalable applications. He is a senior member of IEEE, CSIG and CCF.

Bochao Liu received his B.S. degree in Electronical Information Science and Technology from the School of Information Science and Engineering in Shandong University, China. He is now a Ph.D Candidate at the Institute of Information Engineering at Chinese Academy of Sciences and the School of Cyber Security at the University of Chinese Academy of Sciences, Beijing. His major research interests are privacy-preserving machine learning.

Pengju Wang is an Assistant Professor with the Institute of Information Engineering, Chinese Academy of Sciences. He received the B.S. degree from the School of Information Science and Engineering at Shandong University and M.S. degree from the School of Electronic Engineering at Beijing University of Posts and Telecommunications. His research interests include AI security and federated learning.

Yong Li is an Associate Professor with the Institute of Information Engineering, Chinese Academy of Sciences. He received the B.S. degree from the School of Computer Sciences at the Beijing Jiaotong University and Ph.D degree from the Institute of Computing Technology, Chinese Academy of Sciences. His research interests include security data analysis and the design of private machine learning methods and systems.

Dan Zeng (SM'21) received her Ph.D. degree in circuits and systems, and her B.S. degree in electronic science and technology, both from University of Science and Technology of China, Hefei. She is a full professor and the Dean of the Department of Communication Engineering at Shanghai University, directing the Computer Vision and Pattern Recognition Lab. Her main research interests include computer vision, multimedia analysis, and machine learning. She is serving as the Associate Editor of the IEEE Transactions on Multimedia and the IEEE Transactions on Circuits and Systems for Video Technology, the TC Member of IEEE MSA and Associate TC member of IEEE MMSP.
|
http://arxiv.org/abs/2409.02266v1 | 20240903195249 | LSTMSE-Net: Long Short Term Speech Enhancement Network for Audio-visual Speech Enhancement | [
"Arnav Jain",
"Jasmer Singh Sanjotra",
"Harshvardhan Choudhary",
"Krish Agrawal",
"Rupal Shah",
"Rohan Jha",
"M. Sajid",
"Amir Hussain",
"M. Tanveer"
] | cs.SD | [
"cs.SD",
"cs.LG",
"cs.MM",
"eess.AS"
] |
LSTMSE-Net: Long Short Term Speech Enhancement Network for Audio-visual Speech Enhancement
Arnav Jain, Jasmer Singh Sanjotra, Harshvardhan Choudhary, Krish Agrawal, Rupal Shah, Rohan Jha, M. Sajid, Amir Hussain, M. Tanveer
September 3, 2024
=====================================================================================
§ ABSTRACT
In this paper, we propose long short term memory speech enhancement network (LSTMSE-Net), an audio-visual speech enhancement (AVSE) method. This innovative method leverages the complementary nature of visual and audio information to boost the quality of speech signals. Visual features are extracted with VisualFeatNet (VFN), and audio features are processed through an encoder and decoder. The system scales and concatenates visual and audio features, then processes them through a separator network for optimized speech enhancement. The architecture highlights advancements in leveraging multi-modal data and interpolation techniques for robust AVSE challenge systems. The performance of LSTMSE-Net surpasses that of the baseline model from the COG-MHEAR AVSE Challenge 2024 by a margin of 0.06 in scale-invariant signal-to-distortion ratio (SISDR), 0.03 in short-time objective intelligibility (STOI), and 1.32 in perceptual evaluation of speech quality (PESQ). The source code of the proposed LSTMSE-Net is available at <https://github.com/mtanveer1/AVSEC-3-Challenge>.
[1]These authors contributed equally to this work.
§ INTRODUCTION
Speech is key to how humans interact. Speech clarity and quality are critical for domains like video conferencing, telecommunications, voice assistants, hearing aids, etc. However, maintaining high-quality speech in adverse acoustic conditions—such as environments with background noise, reverberation, or poor audio quality—remains a significant challenge. Speech enhancement (SE) has become a pivotal area of study and development to solve these problems and enhance speech quality and intelligibility <cit.>. Deep learning approaches have been the driving force behind recent advances in SE. While deep learning-based SE techniques <cit.> have shown exceptional success by focusing mainly on audio signals, it is crucial to understand that adding visual information can greatly improve SE systems' performance in adverse sound conditions <cit.>. For comprehensive insights into speech signal processing tasks using ensemble deep learning methods, readers are referred to <cit.>.
Time-frequency (TF) domain methods and time-domain methods are two general categories into which audio-only SE methods can be divided, depending on the type of input. Classical TF domain techniques often rely on amplitude spectrum features; however, studies shows that their effectiveness may be constrained if phase information is not taken into account <cit.>. Some methods that make use of complex-valued features have been introduced to get around this restriction, including complex spectral mapping (CSM) <cit.> and complex ratio masking (CRM) <cit.>. Real-valued neural networks are used in the implementation of many CRM and CSM techniques, whereas neural networks using complex values are used in other cases to handle complex input. Notable examples of complex-valued neural networks for SE tasks include deep complex convolution recurrent network (DCCRN) <cit.> and deep complex U-NET (DCUNET) <cit.>. In this study, we employ time-domain-based methods as well as real-valued neural networks to show their effectiveness in SE tasks.
The primary idea behind audio-visual speech enhancement (AVSE) is to augment an audio-only SE system with visual input as supplemental data, with the goal of improving SE performance. The advantage of using visual input to enhance SE system performance has been demonstrated in a number of earlier studies <cit.>. Most preceding AVSE methods focused on processing audio in the TF domain <cit.>; however, some research has explored time-domain methods for audio-visual speech separation tasks <cit.>. Additionally, techniques such as self-supervised learning (SSL) embeddings are used to boost AVSE performance. Richard et al. <cit.> presented the SSL-AVSE technique, which combines auditory and visual cues. These combined audio-visual features are then analyzed by a Transformer-based SSL AV-HuBERT model to extract characteristics, which are then processed by a BLSTM-based SE model. However, these models are too large to be scalable or deployable in real-life scenarios. Therefore, we focused on developing a smaller, simpler and scalable model that maintains performance comparable to these larger models.
In this paper, we propose the long short-term memory speech enhancement network (LSTMSE-Net), which exemplifies a sophisticated approach to enhancing speech signals through the integration of audio and visual information. LSTMSE-Net employs a dual-pronged feature extraction strategy: visual features are extracted using a VisualFeatNet comprising a 3D convolutional frontend and a ResNet trunk <cit.>, while audio features are processed using an audio encoder and audio decoder. A key innovation of the system is the fusion of these features to form a comprehensive representation. Visual features are interpolated using bi-linear methods to align with the temporal dimension of the audio features. This fusion process, combined with advanced processing through a separator network featuring bi-directional LSTMs <cit.>, underscores the model's capability to effectively enhance speech quality through comprehensive multi-modal integration. The study thus explores new frontiers in AVSE research, aiming to improve intelligibility and fidelity in challenging audio environments.
The evaluation metrics for the model include perceptual evaluation of speech quality (PESQ), short-time objective intelligibility (STOI), and scale-invariant signal-to-distortion ratio (SISDR), with model parameters totalling around 5.1M, which is significantly fewer than the roughly 75M parameters of the baseline model from the COG-MHEAR Challenge 2024. The initial model weights are randomized, and the average inference time is approximately 0.3 seconds per video.
In summary, we have developed a strong AVSE model, LSTMSE-Net, by employing deep learning modules such as neural networks, LSTMs, and convolutional neural networks (CNNs). When trained on the challenge dataset provided by the COG-MHEAR challenge 2024, our model achieves better results across all evaluation metrics despite being substantially smaller than the baseline model provided by the challenge. Its smaller size also results in a shorter inference time compared to the baseline model, which takes approximately 0.95 seconds per video.
§ METHODOLOGY
§.§ Overview
This section delves into the intricacies of the proposed LSTMSE-Net architecture, which leverages a synergistic fusion of audio and visual features to enhance speech signals. We discuss and highlight its audio and visual feature extraction, integration, and noise separation mechanisms. This is achieved using the following primary components, discussed below: the audio encoder, visual feature network (VFN), noise separator, and audio decoder. The overall architecture of our LSTMSE-Net is depicted in Fig. <ref>(a).
§.§ Audio Encoder
An essential part of the AVSE system, the audio encoder module is in charge of gathering and evaluating audio features. The Conv1d architecture used in this module consists of a single convolutional layer with 256 output channels, a kernel size of 16, and a stride of 8. Robust audio features can be extracted using this setup. To add non-linearity, a rectified linear unit (ReLU) activation function is applied after the convolution step. Afterwards, the upsampled visual features and the encoded audio information are combined and passed into the noise separator.
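A minimal PyTorch sketch of such an encoder is shown below; the stated hyper-parameters (256 output channels, kernel size 16, stride 8, ReLU) come from the text, while the class name and the single-channel waveform input shape are illustrative assumptions.

import torch
import torch.nn as nn

class AudioEncoder(nn.Module):
    def __init__(self, out_channels=256, kernel_size=16, stride=8):
        super().__init__()
        self.conv = nn.Conv1d(1, out_channels, kernel_size, stride=stride)
        self.relu = nn.ReLU()

    def forward(self, wav):                     # wav: (batch, samples)
        x = wav.unsqueeze(1)                    # -> (batch, 1, samples)
        return self.relu(self.conv(x))          # -> (batch, 256, frames)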
§.§ Visual Feature Network
The VFN is a vital component of the AVSE system, tasked with extracting relevant visual features from input video frames. The VFN architecture comprises a frontend 3-dimensional (3D) convolutional layer, a ResNet trunk <cit.> and fully connected layers. The 3D convolution layer processes the raw video frames, extracting relevant anatomical and visual features. The ResNet trunk comprises a series of residual blocks designed to capture spatial and temporal features from the video input. A fully connected layer then reduces the dimensionality of the extracted features to 256, which lowers computational cost and prepares the visual features for integration with the audio features.
Bi-linear interpolation is used to upsample the encoded visual characteristics so they match the encoded audio features' temporal dimension. This is done to ensure proper synchronization of features from both modalities. Further, these are concatenated, as mentioned above, with the audio features to form a joint audio-visual feature representation. This is then passed through the separator to extract the relevant part of the audio signal.
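The upsampling-and-fusion step can be sketched as follows. Along a single temporal axis the paper's bi-linear interpolation reduces to linear interpolation, which is what torch.nn.functional.interpolate provides for 3-D tensors; the function name and tensor shapes are illustrative assumptions.

import torch
import torch.nn.functional as F

def fuse(audio_feats, visual_feats):
    # audio_feats: (B, 256, T_a); visual_feats: (B, 256, T_v) with T_v < T_a.
    v_up = F.interpolate(visual_feats, size=audio_feats.shape[-1],
                         mode='linear', align_corners=False)
    return torch.cat([audio_feats, v_up], dim=1)   # joint features: (B, 512, T_a)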
§.§ Feature Extractor and Noise Separator
§.§.§ Overview and Motivation
The Separator module is a crucial component of the LSTMSE-Net, tasked with effectively integrating and processing the combined audio and visual features to isolate and enhance the speech signal. This module leverages Long Short Term Memory (LSTM) networks to capture temporal dependencies and relationships between the audio and visual inputs. The use of LSTM in the AVSE system is further motivated by the following reasons.
Sequential data: Audio and visual features are sequential in nature, with each frame or time step building upon the previous one. LSTM is well-suited to handle such sequential data. Speech signals exhibit long-term dependencies, with phonetic and contextual information spanning multiple time steps. LSTM’s ability to learn long-term dependencies enables it to capture these relationships effectively.
Contextual information: LSTM’s internal memory mechanism allows it to retain contextual information, enabling the system to make informed decisions about speech enhancement.
§.§.§ Core functionality and Multimodal ability
The functionality of the Separator block is based on a multi-modal fusion design. Through the integration of audio and visual inputs, the Separator block optimizes speech enhancement by utilizing complementary information from both modalities. The VFN records visual cues, including lip movements, which offer important context for differentiating speech from background noise. The temporal alignment of the visual and audio features makes it easier to identify the portion of the audio corresponding to the target speaker.
The separator block is made up of several separate units, each of which makes use of intra- and inter-LSTM layers, linear layers, and group normalization. We now elaborate on the information flow in a single unit.
Group normalization layers are used to normalize the combined features following the first feature extraction and concatenation. These normalization steps stabilize the learning process and provide consistent feature scaling, guaranteeing that auditory and visual input are initially given equal priority. The model can recognize complex correlations and patterns between the auditory and visual inputs due to the intra- and inter-LSTM layers. The intra-LSTM layers concentrate on local feature extraction, whilst the inter-LSTM layers are intended for global context. Through residual connections, the original inputs are added back to the output of these LSTM layers, aiding gradient flow during training and helping to preserve relevant features. This residual design enables the Separator block to learn the additive and interactive effects of the audio-visual features, resulting in more robust speech enhancement. Fig. <ref>(b) shows a single unit of the Separator block.
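A PyTorch-style sketch of one such unit is given below. It applies group normalization, a bi-directional LSTM, and a linear projection twice, with residual connections, mirroring the description above; the feature dimension, hidden size, and the fact that both LSTMs here run over the full sequence (any intra/inter chunking of time is elided) are illustrative assumptions.

import torch
import torch.nn as nn

class SeparatorUnit(nn.Module):
    def __init__(self, dim=512, hidden=256):
        super().__init__()
        self.norm1 = nn.GroupNorm(1, dim)
        self.intra = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.proj1 = nn.Linear(2 * hidden, dim)
        self.norm2 = nn.GroupNorm(1, dim)
        self.inter = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.proj2 = nn.Linear(2 * hidden, dim)

    def forward(self, x):                         # x: (B, dim, T)
        y = self.norm1(x).transpose(1, 2)         # -> (B, T, dim)
        y = self.proj1(self.intra(y)[0]).transpose(1, 2)
        x = x + y                                 # residual over local features
        y = self.norm2(x).transpose(1, 2)
        y = self.proj2(self.inter(y)[0]).transpose(1, 2)
        return x + y                              # residual over global context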
As highlighted above, the proposed AVSE system employs a multi-modal fusion strategy, combining the strengths of both audio and visual modalities. The final output of the separator module is a mask that retains only the relevant part of the original input audio features and removes the background noise. The original input audio features are then multiplied by this mask to extract the relevant components and suppress the unneeded ones. This generates a clean, processed audio feature map.
§.§ Audio Decoder
The audio decoder, which is built upon the ConvTranspose1d <cit.> architecture, consists of a single transposed convolution layer with a kernel size of 16, a stride of 8, and a single output channel. This design facilitates the transformation of the encoded audio feature map back into an enhanced audio signal. It takes the enhanced feature map as input and returns the enhanced audio signal, which is also the final model output.
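A matching decoder sketch and the resulting forward pass are shown below; the mask activation range and variable names are illustrative assumptions, while the ConvTranspose1d hyper-parameters (kernel 16, stride 8, one output channel) follow the text.

import torch
import torch.nn as nn

class AudioDecoder(nn.Module):
    def __init__(self, in_channels=256, kernel_size=16, stride=8):
        super().__init__()
        self.deconv = nn.ConvTranspose1d(in_channels, 1, kernel_size, stride=stride)

    def forward(self, masked_feats):                 # (B, 256, frames)
        return self.deconv(masked_feats).squeeze(1)  # -> (B, samples)

# Overall flow (shapes only):
#   feats = encoder(noisy_wav)                       # (B, 256, T)
#   mask = separator(fuse(feats, visual_feats))      # (B, 256, T), values in [0, 1]
#   enhanced = decoder(feats * mask)                 # (B, samples)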
§ EXPERIMENTS
In this section, we begin with a detailed description of the dataset. Next, we outline the experimental setup and the evaluation metrics used. Finally, we present and discuss the experimental results.
§.§ Dataset Description
The data used for training, testing and validation consists of films extracted from the LRS3 dataset <cit.>.
It contains 34524 scenes (a total of 113 hours and 17 minutes) from 605 speakers drawn from TED and TEDx talks. For the noise, speech interferers were selected from a pool of 405 competing speakers, together with 7346 noise recordings across 15 different categories.
The videos contained in the test set differ from those used in the training and validation datasets. The training set contains around 5090 videos with a vocabulary of 51k unique words, while the validation set has 4004 videos with a 17k-word vocabulary, and the test set has 412 videos.
The dataset has two types of interferers: speech from competing speakers, taken from the LRS3 dataset (competing speakers and target speakers do not overlap), and noise, derived from various datasets such as CEC1 <cit.>, which consists of around 7 hours of noise; DEMAND <cit.>, which includes multi-channel recordings of 18 soundscapes lasting more than 1 hour; and the MedleyDB dataset <cit.>, which comprises 122 royalty-free songs. Additionally, the Deep Noise Suppression challenge (DNS) dataset <cit.>, released in the previous edition of the challenge, features sounds present in AudioSet, Freesound and DEMAND, and the environmental sound classification (ESC-50) dataset <cit.> comprises 50 noise groups that fall into five categories: sounds of animals, landscapes and water, human non-verbal sounds, noises from inside and outside the home, and noises from cities. Additionally, data preparation scripts are provided. The output of these scripts consists of the following: S00001_target.wav (the target audio), S00001_silent.mp4 (the video without audio), S00001_interferer.wav (the interferer audio), and the corresponding mixed (noisy) audio.
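For illustration, a noisy mixture can be created from a target and an interferer at a chosen signal-to-noise ratio as in the NumPy sketch below; this is a generic recipe, not the challenge's actual data-preparation script, and the SNR handling is an assumption.

import numpy as np

def mix_at_snr(target, interferer, snr_db):
    # Trim to a common length and scale the interferer for the requested SNR.
    L = min(len(target), len(interferer))
    t = target[:L].astype(float)
    n = interferer[:L].astype(float)
    p_t = np.mean(t ** 2)
    p_n = np.mean(n ** 2) + 1e-12
    n *= np.sqrt(p_t / (p_n * 10 ** (snr_db / 10)))
    return t + n                                   # noisy mixture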
§.§ Experimental Setup
We set up our training environment to get the best possible performance and use of the available resources. The model was trained for 48 epochs and 211435 steps. By utilising GPU acceleration, each epoch took about twenty-two minutes to finish. This efficient training duration demonstrates how quickly the model can handle large datasets.
With 146 GB of shared RAM, a single NVIDIA RTX A4500 GPU is used for all training and inference tasks. Our LSTMSE-Net model is effectively trained because of its robust training configuration, which also guaranteed that the model could withstand the high computational demands necessary for high-quality audio-visual speech enhancement.
§.§ Evaluation Metrics
The LSTMSE-Net model was subjected to a thorough evaluation using multiple standard metrics, including scale-invariant signal-to-distortion ratio (SISDR), short-time objective intelligibility (STOI), and perceptual evaluation of speech quality (PESQ). A comprehensive and multifaceted evaluation is ensured by the distinct insights that each of these measures offers into various aspects of the quality of voice enhancement.
§.§.§ PESQ
A standardised metric called PESQ compares the enhanced speech signal to a clean reference signal in order to evaluate the quality of the speech. With values ranging from -0.5 to 4.5, larger scores denote better perceptual quality.
§.§.§ STOI
STOI (short-time objective intelligibility) is a metric used to assess how clear and understandable speech is, particularly in environments with background noise. It measures the similarity between the clean and improved speech signals' temporal envelopes, producing a score between 0 and 1. Improved comprehensibility is correlated with higher scores.
§.§.§ SISDR
By calculating the amount of distortion brought about by the enhancement process, SISDR is a commonly used metric to assess the quality of speech enhancement. Higher SISDR values are indicative of less distortion and improved speech signal quality, making them a crucial indicator for assessing how well our model performs in maintaining the original speech features.
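SISDR has a closed form that is easy to compute; the NumPy sketch below follows the standard scale-invariant SDR definition (mean removal, projection of the estimate onto the reference, and the energy ratio in dB) and is provided for illustration rather than as the challenge's official scorer.

import numpy as np

def si_sdr(est, ref, eps=1e-8):
    # est, ref: 1-D arrays of equal length (estimate and clean reference).
    est = est - est.mean()
    ref = ref - ref.mean()
    s_target = (np.dot(est, ref) / (np.dot(ref, ref) + eps)) * ref  # projection
    e_noise = est - s_target
    return 10 * np.log10((np.dot(s_target, s_target) + eps) /
                         (np.dot(e_noise, e_noise) + eps))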
We guarantee a thorough and comprehensive examination of the LSTMSE-Net model by utilising these three complimentary assessment metrics. STOI gauges intelligibility, PESQ assesses perceptual quality, and SISDR concentrates on distortion and fidelity. When taken as a whole, these measurements offer a thorough insight of the model's performance, showcasing its advantages and pinpointing areas that might use improvement. Our dedication to creating a high-performance speech enhancement system that excels in a number of crucial areas of audio quality is demonstrated by this multifaceted evaluation technique.
§.§ Evaluation Results
Three types of speech were included in our evaluation. First, we used the noisy speech provided in the challenge testing dataset, which also served as the audio requiring enhancement by the various AVSE models. Second, we generated the improved speech by applying our LSTMSE-Net model to enhance the noisy speech. Finally, we produced the improved speech using the COG-MHEAR AVSE Challenge 2024 baseline model to enhance the same noisy speech. We evaluated them using PESQ, STOI, and SISDR, the standard evaluation metrics. Table <ref> displays the final scores of the models on the evaluation metrics. Compared to the noisy speech, the AVSE baseline model produced notably better quality (PESQ) and higher intelligibility (STOI). Furthermore, in SISDR, STOI, and PESQ, LSTMSE-Net outperformed the baseline model by margins of 0.06, 0.03, and 1.32, respectively. All evaluation criteria showed that LSTMSE-Net performed better than both the baseline and the noisy speech, which is strong evidence of the efficacy of our model.
Table <ref> displays the final inference time of the models on the testing dataset. Compared to the baseline model, which takes an average of 0.95 seconds per video to enhance the audio, LSTMSE-Net takes only 0.3 seconds per video on average. This significant reduction in processing time underscores the efficiency of LSTMSE-Net.
The superior efficiency and efficacy of the proposed LSTMSE-Net not only reduce the computational load but also enable real-time processing, making it highly suitable for applications requiring low latency. Moreover, the smaller model size enhances scalability, allowing the deployment of LSTMSE-Net on a wider range of devices, including those with limited computational resources. This makes the proposed model an excellent choice for both high-performance systems and resource-constrained environments, demonstrating its versatility and practical applicability.
§ CONCLUSION AND FUTURE WORK
This research presents LSTMSE-Net, an advanced AVSE architecture that improves speech quality by fusing audio signals with visual information from lip movements. The LSTMSE-Net architecture consists of an audio decoder, a visual encoder, a separator, and an audio encoder. Each of these components is essential to the processing and refinement of the input signals in order to generate enhanced speech that is high-quality.
LSTMSE-Net exhibits the capacity to efficiently capture and leverage both local and global audio-visual interdependence. With the use of advanced deep learning methods such as convolutions and long short-term memory networks, LSTMSE-Net improves speech enhancement significantly.
Experimental studies on the benchmark dataset, i.e., COG-MHEAR LRS3 dataset, confirm LSTMSE-Net's superior performance. LSTMSE-Net performs much better than baseline models on the COG-MHEAR LRS3 dataset, demonstrating its effectiveness in combining visual and aural characteristics for improved speech quality. To sum up, LSTMSE-Net is a major breakthrough in audio-visual speech improvement, utilising the complementary qualities of both auditory and visual input to provide better voice quality. This work establishes a new benchmark in the field by offering a scalable and efficient solution for speech improvement.
For our future work, we have the following plans:
* We aim to extend our model to incorporate causality in its architecture, enabling real-time deployment. This enhancement will ensure that the model relies solely on past and current information for predictions.
* We plan to propose an enhanced version of LSTMSE-Net that incorporates attention mechanisms and advanced feature fusion techniques to further refine the integration of visual and audio features. Our goal is to achieve superior performance across various AVSE benchmarks.
* Additionally, we will conduct a comprehensive comparative analysis of LSTMSE-Net and other state-of-the-art AVSE variants. This analysis will focus on their performance in real-world noisy environments to identify strengths and areas for improvement.
§ ACKNOWLEDGEMENT
The authors are grateful to the anonymous reviewers for their invaluable comments and suggestions. This project is supported by the Indian government's Science and Engineering Research Board (SERB) through the Mathematical Research Impact-Centric Support (MATRICS) scheme under grant MTR/2021/000787. Prof Hussain acknowledges the support of the UK Engineering and Physical Sciences Research Council (EPSRC) Grants Ref. EP/T021063/1 (COG-MHEAR) and EP/T024917/1 (NATGEN). The work of M. Sajid is supported by the Council of Scientific and Industrial Research (CSIR), New Delhi, which provides a fellowship under Grant 09/1022(13847)/2022-EMR-I.
arXiv:2409.02077v1 [cs.SI], 3 September 2024
Siebel School of Computing and Data Science, University of Illinois Urbana-Champaign, Urbana IL 61801
{warnow}@illinois.edu
FastEnsemble: A new scalable ensemble clustering method
Yasamin Tabatabaee10000-0002-7811-5989
Eleanor Wedell10000-0002-7911-9156
Minhyuk Park10000-0002-8676-7565 Tandy Warnow10000-0001-7717-3514
September 9, 2024
=================================================================================================================================================
§ ABSTRACT
Many community detection algorithms are stochastic in nature, and their output can vary based on different input parameters and random seeds.
Consensus clustering methods, such as FastConsensus and ECG, combine clusterings from multiple runs of the same clustering algorithm, in order to improve stability and accuracy.
In this study we present a new consensus clustering method, FastEnsemble, and show that it provides advantages over both FastConsensus and ECG.
Furthermore, FastEnsemble is designed for use with any clustering method, and
we show results using FastEnsemble with Leiden optimizing modularity or the Constant Potts model.
FastEnsemble is available on GitHub at
<https://github.com/ytabatabaee/fast-ensemble>
<cit.>.
§ INTRODUCTION
Community detection, also known as clustering, is the problem of dividing the nodes of a given network into disjoint subsets so that each subset displays features of a community, such as increased edge density, separability from the rest of the network, strong internal edge connectivity, etc.
Several community detection methods have been developed in the past few decades <cit.>, some of which employ randomness in different ways to produce a desirable community.
For example, some methods are based on modularity <cit.> or the constant Potts model (CPM) <cit.>, which are NP-hard optimization problems.
The popular software Louvain <cit.> and Leiden <cit.>
provide effective heuristics for modularity, and Leiden also provides an effective heuristic for optimizing under the CPM criterion.
One of the difficulties in using clustering methods that optimize modularity or CPM is the variability in the outputs, as searches for NP-hard problems based on different starting points, random seeds, or tie-breaking rules often produce different results <cit.>.
To address these challenges, consensus (or ensemble) clustering approaches have been proposed that use different algorithmic strategies to combine information from different runs in order to extract a reliable clustering, and
these consensus approaches can lead to more robust and stable partitions, and improve the accuracy of the output clustering <cit.>.
A class of consensus clustering methods, as introduced in <cit.>, take a network G as input and run a clustering algorithm (such as Leiden with different random seeds) on it np times to get np different partitions. Next, they summarize the information from these partitions (i.e., nodes that are frequently co-clustered together) into a co-classification matrix and create a new weighted network G^' from this matrix. The new network is again given to the clustering algorithm np times to produce np partitions. This procedure is continued until the network G^' converges to a stationary network after multiple iterations. Various flavors of this consensus approach have been proposed in the literature <cit.>.
Scalability is an issue for these approaches, as building the co-classification matrix is itself computationally intensive.
FastConsensus <cit.> is a recent method that tries to address the scalability issue by using a sampling technique, where the co-classification matrix is only computed for a subset of node pairs.
Another recent and promising consensus method is Ensemble Clustering for Graphs (ECG) <cit.>, but it uses a somewhat simpler technique than FastConsensus, which suggests that it should be even faster.
We report on a new consensus clustering method, FastEnsemble.
The algorithmic technique in FastEnsemble is very simple, omitting much of the sophisticated technicalities of both ECG and FastConsensus so that it can scale to very large networks.
FastEnsemble can be run with any given clustering method, and can even be used to combine the outputs of different clustering methods.
In this study we evaluate FastEnsemble for use in modularity optimization on synthetic networks with ground-truth communities with up to ∼ 3.8M nodes.
We compare FastEnsemble to FastConsensus and ECG with respect to accuracy and runtime. Both ECG and FastEnsemble are fast enough to run on the very large networks we analyze, but FastConsensus is slower. We also find that FastEnsemble produces improved accuracy compared to FastConsensus and ECG on networks that are challenging to cluster accurately, and matches or comes close to their accuracy on the easier networks.
The rest of this paper is organized as follows. We describe FastEnsemble in Section <ref>.
In Section <ref>, we describe the performance study, including datasets, methods, and evaluation procedure. Section <ref> includes the details of the experiments and their results. We discuss the trends in Section <ref> and conclude in Section <ref> with a summary and directions for future work.
Supplementary Materials are available online at <cit.>.
§ FAST ENSEMBLE CLUSTERING
We have developed a basic algorithmic structure that can be used with one or a combination of clustering paradigms, and implemented it for use with Leiden optimizing CPM, Leiden optimizing modularity, and Louvain.
While the code is under active development and new features are being added, in
this study we only explore two variants of this approach, which we now describe.
In its simplest form, FastEnsemble uses three main parameters: the clustering method, the number of partitions np, and the threshold t.
Given an input network N, FastEnsemble uses
the specified clustering method to generate np partitions of N, and then builds a new network on the same node and edge set but with the edges weighted by the fraction of the partitions in which the endpoints are in the same cluster. If a given edge has weight less than t, then the edge is removed from the network; hence the new network can have fewer edges than the original network.
The new weighted network is then clustered just once more using the selected clustering method.
Increasing the number np of partitions may improve accuracy and stability, but at a computational cost; therefore, for very large networks, np defaults to 10.
In Experiment 1 we use synthetic networks to determine a default setting for the parameter t.
However, setting t=1 produces a special case that we refer to as the Strict Consensus Clustering (SC).
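To make the procedure concrete, the following is a minimal sketch of this core loop. It assumes networkx and a user-supplied cluster_fn (e.g., a wrapper around Leiden) that maps a graph and a random seed to a node-to-cluster dictionary and accepts edge weights; it illustrates the algorithm described above rather than reproducing the released implementation.

```python
import networkx as nx

def fast_ensemble(G, cluster_fn, n_partitions=10, t=0.8):
    """Core consensus loop described above: cluster n_partitions times,
    weight each edge by the fraction of partitions that co-cluster its
    endpoints, drop edges with weight below t, then cluster once more.
    Setting t=1 gives the Strict Consensus (SC) variant."""
    partitions = [cluster_fn(G, seed=s) for s in range(n_partitions)]
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    for u, v in G.edges():
        w = sum(p[u] == p[v] for p in partitions) / n_partitions
        if w >= t:                   # edges below the threshold are removed
            H.add_edge(u, v, weight=w)
    return cluster_fn(H, seed=0)     # one final run on the weighted network
```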
§ PERFORMANCE STUDY
Due to space limitations, we provide a brief description of the performance study; see the supplementary materials document for full details.
§.§ Networks
We used a selected set of synthetic networks, some available from prior studies, and some generated for this study.
Table <ref> provides a summary of empirical statistics, including network size and mixing parameters <cit.> for these networks.
Note that networks that have mixing parameters of 0.5 or larger are considered “complex" and challenging to cluster
while networks with much smaller mixing parameters are generally easy to cluster <cit.>.
Training Experiments.
For the training experiment, we generated LFR networks using parameters taken from similar networks used in <cit.>, but with the exponent for the cluster size distribution modified to better fit real-world networks (see Supplementary Materials for further discussion).
Each of these synthetic networks has 10,000 nodes with calculated mixing parameter values that vary between 0.196-0.978 (note that the model mixing parameters, which are used to generate the networks, are drawn from 0.1, 0.2, …, 1.0, but the resultant mixing parameters are different).
Testing Experiments.
We use 27 LFR <cit.> networks from <cit.>; these were generated using parameters from Leiden-mod and Leiden-CPM clusterings of five real-world networks: cit_hepph, the Curated Exosome Network (CEN), Open Citations (OC), wiki_topcats, and cit_patents.
Two of these LFR networks had a substantial percentage of disconnected ground-truth clusters; they were not included in the experiment in <cit.> and are also not used in this study. In addition, LFR failed to return a network for one model condition.
We also use ring-of-cliques networks <cit.>, Erdős-Rényi graphs <cit.>, and graphs formed by merging Erdős-Rényi graphs with LFR networks.
§.§ Methods
We include FastEnsemble, ECG, FastConsensus, and Leiden, each for modularity optimization (“Leiden-mod").
In the final experiment we examine FastEnsemble and Leiden for CPM-optimization.
§.§ Evaluation criteria
We report accuracy on networks with known ground-truth using Normalized Mutual Information (NMI)
and Adjusted Rand Index (ARI) as implemented by the Scikit-learn <cit.> library.
We also report the F1-score.
We also compute false negative and false positive error rates, defined as follows.
By considering true and estimated clusterings each as an equivalence relation, and thus defined by a set of pairs (where (x,y) is in the relation if and only if nodes x and y are in the same cluster), we can also define false negatives (pairs that are in the true clustering but missing in the estimated clustering), false positives (pairs that are in the estimated clustering but not in the true clustering), true positives (pairs that are in both the true and estimated clustering), and true negatives (pairs that are in neither clustering).
Using these, we report
the False Negative Rate (FNR) and False Positive Rate (FPR), given by fn/(fn+tp) and fp/(fp+tn) respectively, where
fn denotes the number of false negatives, fp denotes the number of false positives, tp denotes the number of true positives, and tn denotes the number of true negatives.
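As an illustration, these pair-based error rates can be computed from cluster-size and contingency counts rather than by enumerating all node pairs; the sketch below also shows the Scikit-learn NMI and ARI calls mentioned above. The label vectors here are toy examples.

```python
from math import comb
from collections import Counter
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

def pair_error_rates(true_labels, est_labels):
    """FNR and FPR over node pairs, from cluster-size and contingency counts."""
    n = len(true_labels)
    same_true = sum(comb(c, 2) for c in Counter(true_labels).values())
    same_est = sum(comb(c, 2) for c in Counter(est_labels).values())
    tp = sum(comb(c, 2) for c in Counter(zip(true_labels, est_labels)).values())
    fn = same_true - tp              # co-clustered in truth, split in estimate
    fp = same_est - tp               # co-clustered in estimate, not in truth
    tn = comb(n, 2) - tp - fn - fp
    return fn / (fn + tp), fp / (fp + tn)

true = [0, 0, 0, 1, 1, 2]
est = [0, 0, 1, 1, 1, 2]
print(pair_error_rates(true, est))          # (0.5, 0.1818...)
print(normalized_mutual_info_score(true, est))
print(adjusted_rand_score(true, est))
```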
§.§ Experiments
We perform five experiments, four comparing pipelines that are based on modularity (i.e., Leiden-mod, FastEnsemble used with Leiden-mod, FastConsensus, and ECG) and a final experiment that focuses on FastEnsemble and includes CPM-optimization.
In each case, synthetic networks were used and accuracy was evaluated in comparison to the ground truth clusterings.
Except for Experiment 5, all analyses were given four hours of runtime and 64Gb of memory on the University of Illinois Campus Cluster; failures to complete within that time limit were noted.
* Experiment 1: We set the default for the threshold parameter t in FastEnsemble.
* Experiment 2: We evaluate modularity-based pipelines with respect to both accuracy and scalability on synthetic networks with
∼ 10K
to ∼ 3.8M nodes.
* Experiment 3: We evaluate clusterings on networks that are either entirely or partially Erdős-Rényi graphs.
* Experiment 4: We evaluate robustness to the resolution limit on ring-of-cliques networks with up to 100K nodes.
* Experiment 5: We evaluate the accuracy of FastEnsemble on large synthetic LFR networks (up to ∼ 3.8M nodes)
from <cit.>.
Experiments 1, 2, and 5 use networks with a range of mixing parameters, Experiment 3 uses networks with moderate to high mixing parameters,
and Experiment 4 uses networks with extremely low mixing parameters (Table <ref>).
§ RESULTS
§.§ Experiment 1: Training experiment
In this first experiment we set the default value for the threshold parameter t, so that edges with support below t are removed from the network.
In Fig <ref> (left), we compare results for only three threshold values: t=0.2, 0.5, and 0.8. We see that overall the best accuracy across all the networks is obtained using t=0.8.
In Fig <ref> (right), we show results for just one of the networks with the mixing parameter 0.5, but allowing t = 0.1, 0.2, …, 0.9.
On this model condition, values for t between 0.7 and 0.9 produce the best accuracy.
Based on this experiment, we set t=0.8 as the default.
Note the impact of the mixing parameter: while accuracy is very high for networks with the lowest mixing parameter, it quickly drops as the mixing parameter increases.
This reflects the discussion in <cit.>.
§.§ Experiment 2: Accuracy and scalability of clustering pipelines
We use two collections of synthetic networks in this experiment: the LFR networks based on modularity clusterings of large real-world networks from <cit.>
(which go up to ∼ 3.8M nodes) and the training datasets used in Experiment 1.
Each analysis was limited to 4 hours and 64Gb of memory.
Results on LFR networks from <cit.>: For all the methods, the only network they completed on within four hours was cit_hepph, the smallest network with only ∼ 34K nodes, and
all three methods had nearly perfect ARI and NMI scores (Supplementary Materials).
We then allowed all three methods to run for up to 48 hours on the four remaining networks.
FastEnsemble completed on all the remaining networks, using between 7 and 28 hours.
FastConsensus completed only on one of these networks (using 14.5 hrs),
where it had excellent accuracy that was somewhat better than FastEnsemble.
ECG completed on all networks, using from 6 hrs to 36 hrs on each.
Thus, FastEnsemble and ECG were both faster than FastConsensus.
ECG was less accurate than FastEnsemble on three networks and more accurate on one network, where both ECG and FastEnsemble had very high ARI/NMI scores, indicating that the network was relatively easy to cluster (Supplementary
Materials).
Results on the training networks: Fig <ref> shows that accuracy decreases for all methods as the model mixing parameter increases.
The method with the best accuracy for the two smallest model mixing parameters (0.1 and 0.2) is ECG, but FastEnsemble then has the best accuracy for the larger model mixing parameters.
Both ECG and FastEnsemble consistently match or improve on Leiden-mod. FastConsensus improves on Leiden-mod for the large mixing parameter values but is less accurate than Leiden-mod for the smaller mixing parameters.
§.§ Experiment 3: Detecting unclusterable portions of the network
Erdős-Rényi graphs do not have valid communities, and so all nodes are best represented as being singleton clusters. We use Erdős-Rényi graphs to evaluate to what extent clustering pipelines are able to reject community structure
by
returning no or very few non-singleton clusters.
We also create networks that are combinations of Erdős-Rényi graphs and LFR networks, and see to what extent the clustering pipelines we examine produce communities that are limited to nodes that are in the LFR subnetwork.
We evaluate these questions by examining both the cluster size distribution as well as by examining clustering accuracy.
For Erdős-Rényi graphs with very low values for density p (Fig <ref> (top)), all pipelines tested have good accuracy, returning mostly singletons and clusters of size 2 or 3. However, as the density p increases, both ECG and Leiden-mod return clusters that increase in size, indicating that they are finding community structure in these random networks.
At the two largest tested densities, FastConsensus also produces large clusters, while Strict Consensus and FastEnsemble continue to return mainly small clusters.
We see somewhat different trends on networks that are combinations of Erdős-Rényi graphs and LFR networks (Fig <ref> (bottom)). Note that in this setting, FastConsensus and ECG have much better accuracy than on the pure Erdős-Rényi graphs, and only Leiden-mod has really poor accuracy.
The two best methods are the two variants of Strict Consensus, with FastEnsemble in third place.
§.§ Experiment 4: The Resolution Limit
The resolution limit for modularity was established in
<cit.>, which shows that under some conditions, an optimal modularity clustering will fail to return what are the “obvious" communities if they are too small.
Furthermore, <cit.> provide as an example the family of ring-of-cliques networks, which are parameterized by the clique size k and the number n of cliques; in a ring-of-cliques network, the cliques are placed in a ring, and each clique is attached to the cliques on each side by a single edge.
They establish that when
n ≥ k(k-1)+2, then the optimal modularity clustering will put two or more cliques together into a single cluster, instead of the obviously preferred clustering that returns each clique as a separate cluster.
Here we examine whether consensus clustering methods can address this vulnerability of modularity-based clustering from an empirical perspective, using ring-of-clique networks where each clique is of size k=10 but the number n of cliques is allowed to vary.
According to the previous paragraph, when n ≥ k(k-1)+2 = 92, an optimal modularity clustering will group two or more of the cliques together.
Hence, we examine values of n both below and above this threshold in this experiment; such networks can be generated directly, as in the sketch below.
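The following is a minimal sketch of the construction, assuming networkx (which also ships an analogous built-in ring_of_cliques generator); it is shown here only to make the structure explicit.

```python
import networkx as nx

def ring_of_cliques(n_cliques, k):
    """n_cliques cliques of size k in a ring, with adjacent cliques joined
    by a single edge (analogous to networkx's ring_of_cliques generator)."""
    G = nx.Graph()
    for c in range(n_cliques):
        nodes = range(c * k, (c + 1) * k)
        G.add_edges_from((u, v) for u in nodes for v in nodes if u < v)
        G.add_edge(c * k, ((c + 1) % n_cliques) * k)  # link to the next clique
    return G

G_small = ring_of_cliques(90, 10)    # below the n >= k(k-1)+2 = 92 threshold
G_large = ring_of_cliques(5000, 10)  # well above it: modularity merges cliques
```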
We examine Leiden-mod, FastConsensus, ECG, FastEnsemble, and two ways of running the Strict Consensus that vary in terms of the number of partitions (np) on these networks.
As seen in Fig <ref>,
for n=90 clusters, all the methods produce clusterings where each clique is a cluster, as desired.
However, as the number of clusters increases, but not their sizes, then Leiden-mod starts merging cliques together, as predicted by the theory from <cit.>.
We also see that all the consensus clustering methods (e.g., FastConsensus, ECG, FastEnsemble, and Strict Consensus) reduce the tendency to merge cliques into clusters, and that the Strict Consensus variants, especially with np=50, have the best accuracy.
Interestingly, FastEnsemble has poor accuracy, especially for the large numbers of cliques, where it is nearly as poor as Leiden-mod.
Fig <ref> also shows the FNR and FPR rates for these methods.
Note that all the methods return essentially zero FNR, indicating that no clique in the ring-of-cliques network is ever split apart.
On the other hand, the methods differ in terms of FPR, with Leiden-mod having high FPR rates except for n=90. Again, FastEnsemble is almost as poor as Leiden-mod,
while the other consensus methods have much lower FPR.
The Supplementary Materials shows results for the same data but for clusterings based on CPM-optimization.
Note that, in contrast to the theory for modularity, <cit.> established that for every setting of the resolution parameter r, there will be a value N so that every optimal CPM(r) clustering of a ring-of-cliques network with n ≥ N cliques of size k will return the individual cliques as clusters.
However, our experimental results show that for large enough numbers of cliques of size 10, Leiden-CPM groups cliques together into clusters.
This vulnerability occurs for all of the small resolution values, but disappears when r ≥ 0.01.
Fortunately, using the Strict Consensus with Leiden-CPM fixes this issue, and returns just the cliques as the clusters.
§.§ Experiment 5: Results on very large networks
In this experiment we
explore FastEnsemble and Leiden, using both modularity and CPM-optimization, on 27 large synthetic networks based on clustered real-world networks (Materials and Methods) that range up to ∼ 3.8M nodes.
We allow up to 48 hrs runtime and provide 64Gb memory.
FastEnsemble is nearly always at least as accurate as Leiden for all 27 model conditions, and tends to be more accurate when used with Leiden-mod or with Leiden-CPM with small resolution values (Fig. <ref>).
Furthermore, the improvement in accuracy is sometimes very large.
Finally, all analyses using CPM-optimization completed in under 3 hours, and only one of the modularity-based analyses required more than 24 hrs (Supplementary Materials).
Thus, this experiment establishes that FastEnsemble can run on very large networks, up to 3.8 million nodes, and provides an improvement in accuracy over both Leiden-mod and Leiden-CPM.
§ DISCUSSION
This study reported results on synthetic networks for three consensus clustering methods: ECG, FastConsensus, and FastEnsemble.
When optimizing for modularity, each of these reliably produced clusterings that were more accurate than Leiden-mod clusterings under many conditions, and did not reduce accuracy.
However, while both ECG and FastEnsemble could run on the very large networks, FastConsensus was slower and failed to complete within the allowed 48 hr time period on nearly all networks with 1M or more nodes.
The consensus clustering methods showed very different performance in terms of accuracy.
While there were certainly some model conditions with very little difference in accuracy, there were many conditions where FastEnsemble provided the best accuracy, and some conditions where ECG or FastConsensus provided the best accuracy.
Experiments 3 and 4 focused on networks that are generally atypical of real-world networks and Experiment 5 only examined FastEnsemble, and so we restrict this discussion to Experiments 1 and 2.
Experiment 1 networks have a range of mixing parameters (and hence clustering difficulty), and this experiment suggests that FastEnsemble has an advantage when the mixing parameter is not too small (and conversely, ECG has an advantage for the smallest mixing parameters).
Experiment 2 includes these training networks and also modularity-based networks from <cit.>, which have very low mixing parameters (Supplementary Materials).
For the Experiment 2 networks, when FastConsensus or ECG is more accurate than FastEnsemble, it is only by a small amount, and the mixing parameter is small (Supplementary Materials).
Thus, these two experiments suggest that when ECG or FastConsensus is more accurate than FastEnsemble, the network is generally very easy to cluster, and the top methods have very high accuracy; moreover, these networks have low mixing parameters.
Here we note that CPM-based clusterings of real-world networks reported in <cit.> typically have moderate to high mixing parameters (Supplementary Materials), suggesting that accuracy on networks with moderate or higher mixing parameters is the more important criterion.
Finally, we only tested the Strict Consensus variant of FastEnsemble in Experiments 3 and 4, which address performance on graphs that are largely unclusterable or that present the resolution limit challenge, respectively.
As we expected, the Strict Consensus provides excellent results for those problems.
Thus, the tested consensus methods have different strengths, with FastEnsemble best suited to networks that have at least moderate mixing parameters, ECG and FastConsensus better suited to networks with small mixing parameters, and the Strict Consensus suited to the case where the goal is to avoid false discovery.
However, we also observed that FastConsensus was much slower than both ECG and FastEnsemble, especially on the networks with more than 1,000,000 nodes.
§ CONCLUSIONS
Our study showed that FastEnsemble can provide very good results, matching or improving on both ECG and FastConsensus, two other consensus methods that use more sophisticated techniques, on many networks that are generally difficult to cluster, such as those with moderate to high mixing parameters.
However, ECG and FastConsensus sometimes provide better results than FastEnsemble for networks with low mixing parameters, so that the three consensus clustering methods each have contexts where they have an advantage.
We also found that ECG and FastEnsemble are both faster and more scalable than FastConsensus.
Finally, we showed that FastEnsemble, used with Leiden-CPM, provided improved accuracy compared to Leiden-CPM alone; note that Leiden-CPM is not enabled for use within ECG and FastConsensus.
In this initial study, we did not evaluate the feature in the code that allows a set of clustering algorithms, each with a weight, to be combined;
future work should investigate whether these additional features lead to improvements in accuracy.
A better understanding of scalability requires real-world networks, such as the Open Citations network with ∼ 13M nodes <cit.>, and so this is an obvious next step.
Our study was primarily performed using synthetic networks generated using LFR, but other simulators, including ABCD <cit.> and Stochastic Block Models, could be used; see <cit.> for review.
New approaches to consensus clustering have been developed, some of which have also been shown to be scalable to large networks; an example is the recent method in arXiv by Hussain et al. <cit.>;
future work will need to compare FastEnsemble to these developments.
Data and Code Availability. The code and scripts used in this study are available at <https://github.com/ytabatabaee/fast-ensemble>. The data are available at <https://github.com/ytabatabaee/ensemble-clustering-data>.
arXiv:2409.03144v1 [cond-mat.mtrl-sci, cond-mat.mes-hall], 5 September 2024
State Key Laboratory of Low Dimensional Quantum Physics and Department of Physics, Tsinghua University, Beijing, 100084, China
School of Physics Science and Engineering, Tongji University, Shanghai 200092, China
Shanghai Key Laboratory of Special Artificial Microstructure Materials and Technology, Tongji University, Shanghai 200092, China
State Key Laboratory of Low Dimensional Quantum Physics and Department of Physics, Tsinghua University, Beijing, 100084, China
Collaborative Innovation Center of Quantum Matter, Beijing 100084, China
Institute for Advanced Study, Tsinghua University, Beijing 100084, China
§ ABSTRACT
The recently discovered nonlinear Hall (NLH) effect arises either without external magnetic field (type-I) or with an in-plane magnetic field (type-II). In this work we propose a new type of geometrical nonlinear Hall effect with an out-of-plane magnetic field (type-III) induced by the combination of Lorentz force and anomalous electronic velocity. The type-III NLH effect is proportional to the more refined structures of Bloch wave functions, i.e., the dipole moment of square of Berry curvature, thus becoming prominent near the band crossings or anticrossings. Our effective model analysis and first-principles calculations show that gate-tuned MnBi_2Te_4 thin film under uniaxial strain is an ideal platform to observe this effect. Especially, giant unidirectional magnetoresistance can occur in this material, based on which an efficient electrical transistor device prototype can be built. Finally a symmetry analysis indicates that type-III NLH effect has unique symmetry properties stemming from Berry curvature square dipole, which is different from other previously reported NLH effects and can exist in a wider class of magnetic crystals. Our study offers new paradigms for nonlinear electronics.
Geometrical Nonlinear Hall Effect Induced by Lorentz Force
Junjie Yao, Yizhou Liu, and Wenhui Duan
September 9, 2024
==========================================================
§ INTRODUCTION
The Hall effects, as long-studied and paradigmatic phenomena in condensed matter physics, have been extensively investigated due to their rich underlying physics <cit.>. Interestingly, the Hall effects can have two very different physical origins: they can be induced by the Lorentz force in an external magnetic field, or result from the electronic band geometry and topology <cit.>. Prominent examples of the latter include the quantum Hall effect in a strong magnetic field <cit.> and its anomalous version without a magnetic field <cit.>, which have provided valuable insights into the nontrivial electronic structures related to the momentum-space textures (i.e., curvature and metric) of Bloch wave functions, and have triggered exotic applications in many other systems <cit.>. The former, at lower magnetic fields, is classical and does not rely on Bloch wave function textures <cit.>.
On the other hand, the recently discovered nonlinear anomalous Hall (NLAH) effect has stimulated interest in nonlinear electronic transport and optoelectronic studies. This effect connects the nonlinear response coefficients with geometrical or topological properties of Bloch wave functions, and has promising applications, including second harmonic generation, radio-frequency ac-dc rectification, and terahertz detection, among others <cit.>. The NLAH effect refers to the Hall current j^H∝ E^2 in response to the driving electrical field E without an external magnetic field, as shown in Fig. <ref> (a); we dub this the type-I nonlinear Hall (NLH) effect, and it is related to the dipole moment of the Berry curvature or quantum metric. More recently, another type of nonlinear planar Hall effect was proposed with j^H∝ E^2 B, where the Hall current j^H, driving electrical field E, and magnetic field B lie within the same plane <cit.>, as shown in Fig. <ref>(b); we call this the type-II NLH effect, and it corresponds to the Berry connection polarizability. In the standard Hall geometry, as shown in Fig. <ref> (c), the ordinary linear Hall effect induced by the Lorentz force usually dominates, with the Hall current j^H∝τ^2 EB (where τ is the relaxation time), and the resultant ordinary Hall conductivity depends only on the carrier density, independent of the electronic wave function properties <cit.> (see also Appendix <ref>). However, the nonlinear properties of the Lorentz force-induced Hall effect have not been explored so far. This raises a straightforward question: does a Lorentz force-induced NLH effect exist, and could this NLH effect further connect measurable quantities with refined intrinsic properties of Bloch electrons, beyond just the carrier density as in the ordinary Hall effect?
In this paper we show that the Lorentz force together with the Berry-curvature-induced anomalous velocity <cit.> can lead to a new type of geometrical NLH effect (type-III) with j^H∝τ E^2 B, which is of lower order in τ than the ordinary Hall effect and is related to more refined geometrical properties of Bloch wave functions, offering a straightforward method to investigate these more intricate geometrical properties in materials. Different from the type-I NLH effect induced by either the Berry curvature dipole (BCD) <cit.> or the quantum metric dipole <cit.>, the proposed type-III NLH effect results from the Berry curvature square dipole, which dominates in topological materials with large Berry curvature and can exist in a broader class of magnetic point groups. Based on effective model analysis and density functional theory (DFT) calculations, we find sizable type-III NLH conductivity in gate-tuned MnBi_2Te_4 double septuple layers (SLs) under moderate uniaxial strain. Moreover, we also find a giant unidirectional magnetoresistance (UMR) effect in this material system without strain, which can be utilized to build an electronic transistor device prototype. Finally, a symmetry analysis is carried out to identify all magnetic point groups that allow the existence of our NLH and UMR effects. The unique symmetry properties of the Berry curvature square dipole can help identify systems that allow only the type-III NLH effect while forbidding the type-I NLH effects mentioned above. Our findings pave the way for further investigation of this new type of nonlinear Hall effect and possible electronic device applications.
§ THEORY AND ANALYSIS
§.§ Theory of type-III NLH effect
To understand the origin of type-III NLH effect, we start from the well-known anomalous Hall current of solids <cit.>:
j^H = -e^2/ħ∫dk/(2π)^d E×Ω f,
where E is the external driving electrical field; Ω = i ⟨∇_k u_k | × | ∇_k u_k⟩ is the Berry curvature with | u_k⟩ being the periodic part of the Bloch wave function; f is the electronic occupation number and d refers to the spatial dimension. According to Eq. (<ref>), the linear anomalous Hall conductivity is totally determined by the electronic Berry curvature at thermodynamic equilibrium, which is finally determined by the unperturbed electronic wave function of occupied states. Thus in order to get nonlinear Hall effects, one must consider the field-induced perturbation effect on either the electronic wave function |u_k⟩ or the distribution function f. The NLH effects corresponding to the former one is an intrinsic material property which can be expressed in terms of Berry connection polarizability and quantum metric tensors <cit.>.
On the other hand the perturbation on the distribution function f can be determined by Boltzmann equation under relaxation time approximation:
-(f - f_0)/τ = k̇·∇_k f + ṙ·∇_r f + ∂_t f
with f_0 = [ e^(ε_k - ε_F)/k_BT + 1 ]^-1 being the equilibrium Fermi-Dirac distribution function. Combined with the semiclassical equation of motion of electrons <cit.>, the nonequilibrium contribution to the nonlinear Hall current can be derived as (see details in Appendix <ref>):
j̃^NLH = -e^2/ħ∫dk/(2π)^d E×Ωδ f,
δ f= f - f_0 = -τ( F_E ·ṽ + F_L ·v_A ) ∂ f_0/∂ε_k + O(τ^2).
Here F_E = -e E is the electrical force, and ṽ = v+v_M is the total group velocity, with v_M= ∇_k (-m_k·B)/ ħ, and v = ∇_kε_k/ħ the ordinary group velocity of electron. The ordinary group velocity part in the first term on the right-hand side of Eq. (<ref>) corresponds to the Berry curvature dipole contribution to the nonliner anomalous Hall effect <cit.>, and has been extensively studied in inversion-symmetry-breaking nonmagnetic materials <cit.>.
The second term, on the other hand, is the Lorentz-force-induced contribution, which has seldom been discussed before and will be the focus of this paper hereafter. Here, F_L = -e v×B refers to the Lorentz force, which is perpendicular to the group velocity. Therefore, the Lorentz force does not induce an energy shift in non-topological materials with zero Berry curvature. However, in topological materials the velocity of electrons acquires an anomalous term v_A = (-e/ħ) E×Ω induced by Berry curvature effects <cit.>. The quantity τ F_L ·v_A can be viewed as the energy shift induced by the Lorentz force and the anomalous velocity; it is generally nonzero and generates the Fermi surface shift shown in Fig. <ref> (d). It should be noted that the magnetic-moment-related part of the first term on the right-hand side of Eq. (<ref>) is of the same order O(τ E^2B) as the Lorentz-force-induced one but has a relatively small magnitude <cit.>. Therefore, we will focus only on the Lorentz-force part.
Equations (<ref>) and (<ref>) demonstrate a new type of NLH effect induced by the combination of Lorentz force and the anomalous velocity. Given that F_L ∝B and v_A ∝E, the NLH current is proportional to τ E^2B and it can be expressed in a compact form as j^NLH_α = ∑_β,γ,λσ^NLH_αβγλ E_β E_γ B_λ with σ^NLH_αβγλ being the NLH conductivity whose expression is given by
σ^NLH_αβγλ = e^4τ/ħ^2∫dk/(2π)^d ∑_κϵ_αβκΩ_κ ( v_γΩ_λ - δ_γλ∑_μv_μΩ_μ ) ∂ f_0/∂ε_k,
where Greek indexes represents Cartesian coordinates, ϵ_αβκ is the Levi-Civita tensor, and δ_γλ is the Kronecker symbol. Recall that the group velocity v is odd under both inversion symmetry (𝒫) and time-reversal symmetry (𝒯) while the Berry curvature Ω is even under 𝒫 but odd under 𝒯 so σ^NLH_αβγλ exists only when 𝒫 and 𝒯 are simultaneously broken.
In addition to the intrinsic contribution from Berry curvature, disorder scatterings, such as skew scattering and side jumps, also play a role in linear Hall conductivity <cit.>. These scatterings are naturally expected to influence the nonlinear Hall (NLH) conductivity as well <cit.>. The impact of disorder scattering is highly dependent on carrier density: it is minimal when the Fermi energy is near the band edge but becomes significant as carrier density increases <cit.>. Conversely, Eq. (<ref>) demonstrates that the NLH conductivity proposed here is proportional to the square of the Berry curvature on the Fermi surface. This indicates its prominence in small-gap topological materials when the Fermi level is close to the band edges. The stark contrast between the contributions from Berry curvature and disorder scatterings helps to distinguish the mechanisms behind NLH conductivity through electronic doping effects.
§.§ Berry curvature square dipole
For 2D materials lying within the xy plane, the magnetic field must have a z component in order for the Lorentz force to take effect. This is reflected by the fact that the NLH conductivity σ^NLH_αβγλ is nonzero only when λ=z, which is dramatically different from the nonlinear planar Hall effect <cit.>. Moreover, the second term on the right-hand side of Eq. (<ref>) vanishes for 2D materials because v⊥Ω. Thus the NLH conductivity can be rewritten as:
σ^NLH_yxxz = -σ^NLH_xyxz = e^4τ/ħ^3∫dk/(2π)^2 (∂_k_xΩ^2_z) f_0,
σ^NLH_xyyz = -σ^NLH_yxyz = -e^4τ/ħ^3∫dk/(2π)^2 (∂_k_yΩ^2_z) f_0,
while other components are zero. The above expressions show that the NLH conductivity is proportional to the dipole moment of square of Berry curvature over occupied states, which is defined as
𝒟^(n)_α = ∫dk/(2π)^2 (∂_k_αΩ^n_z) f_0,
with n=2. Because 𝒟^(n)_α behaves like a vector within the 2D plane, the NLH conductivity shows a peculiar angular dependence: for applied electric field E = E(cosθ, sinθ, 0), the NLH conductivity is determined by σ^NLH(θ) = (e^4τ/ħ^3) ( 𝒟^(2)_x cosθ + 𝒟^(2)_y sinθ ). It should be noted that 𝒟^(2)_α is different from the ordinary BCD i.e. 𝒟^(1)_α first proposed in Ref. <cit.>. Under vertical mirror symmetry, 𝒟^(1)_α behaves like a pseudo vector which should be perpendicular to the mirror plane, while 𝒟^(2)_α behaves like a real vector and should be parallel to the mirror plane. Moreover, 𝒟^(2)_α is expected to become more prominent than 𝒟^(1)_α near the band crossings and anticrossings due to the large Berry curvature.
§ RESULTS AND DISCUSSIONS
§.§ Effective Model analysis
We take the 2D massive Dirac model as an example to demonstrate the behavior of the NLH conductivity. Without loss of generality, we consider σ^NLH_yxxz. A nonzero σ^NLH_yxxz requires the breaking of the mirror symmetry M_x. The minimal effective model can be expressed as,
H_k= ħ v_t k_x + ħ v_0(k_xσ_x + k_yσ_y) + mσ_z,
with v_0 being the group velocity, m being the mass term, and v_t being the tilting parameter to break 𝒫, 𝒯 and M_x symmetries. The band dispersion and Berry curvature are ε_sk = ħ v_t k_x + s √(ħ^2 v_0^2 k^2+ m^2), and Ω_z = -smħ^2 v^2_0/2(ħ^2 v^2_0 k^2 + m^2)^3/2 (s= ± represents the upper or lower band), respectively, with a band gap of 2|m|. For small tilting, we can derive that σ^NLH_yxxz(ε_F) = e^4τ m^2 v^2_0 v_t/8π|ε_F|^5( 1- 3m^2/|ε_F|^2) + O(v^2_t) with |ε_F| ≥ m to ensure nonzero density of states <cit.>. The maximum σ^NLH_yxxz occurs at the band edges |ε_F|=m, and the value is proportional to m^-3 which is prominent in small-gap materials.
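Evaluating this expression at the band edge |ε_F| = m gives the peak magnitude |σ^NLH_yxxz| = e^4τ v^2_0 |v_t|/4π m^3 to leading order in v_t, which makes the m^-3 enhancement near a small gap explicit.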
Figure <ref> shows the numerical results for the effective model. For the untilted model, the band-resolved σ^NLH_yxxz takes opposite values at k and -k, which gives vanishing σ^NLH_yxxz for any ε_F, so finite tilting is necessary to obtain nonzero σ^NLH_yxxz [Figs. <ref>(a)-(b)]. The calculated σ^NLH_yxxz increases with increasing v_t, and the maximum value occurs near the band edge [Fig. <ref>(c)], consistent with the analytical result. Figures <ref>(d)-(e) show the calculated maximum σ^NLH_yxxz for varying v_t and m, favoring large tilting and a small band gap.
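As a sketch of how these model results can be reproduced, the following evaluates the Berry curvature square dipole formula for σ^NLH_yxxz on a momentum grid for the tilted Dirac model, with ħ = e = τ = 1 and the k_x-derivative of Ω^2_z taken numerically; all parameter values and grid ranges are illustrative.

```python
import numpy as np

def nlh_yxxz(eF, m=0.1, v0=1.0, vt=0.3, kT=0.01, N=801, kmax=2.0):
    """sigma^NLH_yxxz for the 2D tilted massive Dirac model (hbar = e = tau = 1):
    integrate the k_x-derivative of Omega_z^2 over the occupied states."""
    k = np.linspace(-kmax, kmax, N)
    kx, ky = np.meshgrid(k, k, indexing='ij')
    k2 = kx**2 + ky**2
    sigma = 0.0
    for s in (+1, -1):                               # upper / lower band
        eps = vt * kx + s * np.sqrt(v0**2 * k2 + m**2)
        omega = -s * m * v0**2 / (2.0 * (v0**2 * k2 + m**2) ** 1.5)
        f0 = 0.5 * (1.0 - np.tanh((eps - eF) / (2.0 * kT)))  # overflow-safe Fermi fn
        dOm2_dkx = np.gradient(omega**2, k, axis=0)
        sigma += np.sum(dOm2_dkx * f0) * (k[1] - k[0])**2 / (2.0 * np.pi)**2
    return sigma

print(nlh_yxxz(eF=0.12))   # Fermi level slightly above the band edge m = 0.1
```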
§.§ Candidate Material
According to the effective model analysis, 𝒫- and 𝒯-breaking materials with small band gaps (i.e., near a topological phase transition) favor large NLH conductivity. MnBi_2Te_4 is a recently discovered layered topological antiferromagnet which has attracted extensive research interest because of its unique axion dynamics and versatile topological phase transitions <cit.>. Remarkably, recent studies have shown the gate-tunable topological properties of MnBi_2Te_4 thin films <cit.>, with multiple Dirac fermions near the Fermi energy under a suitable vertical gating field. Figure <ref>(a) shows the crystal structure and Brillouin zone of MnBi_2Te_4 double septuple layers (SLs). A vertical electric field E_⊥ breaks the 𝒫𝒯 symmetry, thus giving rise to finite Berry curvature. Without the vertical electric field, MnBi_2Te_4 with double SLs is a topologically trivial insulator with spin degeneracy protected by 𝒫𝒯 [Fig. <ref>(b)]. Applying the vertical electric field splits the energy bands and reduces the band gap <cit.>.
As E_⊥ increases, the band gap closes at about a critical field E_⊥ = E_c = 0.022 V/Å, with a tilted Dirac cone at the Fermi level [Fig. <ref>(c)]. Due to the 3-fold rotational symmetry C_3z, there are three tilted Dirac cones whose tilting directions are related by C_3z. Therefore, the net NLH conductivity vanishes [Fig. <ref>(d)]. Applying an external uniaxial strain breaks C_3z, and a nonzero NLH conductivity arises. Figure <ref>(e) shows the calculated σ^NLH_yxxz as a function of Fermi energy ε_F, with a large peak which favors further experimental measurement.
§.§ Unidirectional magnetoresistance
In the presence of an external magnetic field, in addition to the NLH effect discussed above, there is another intriguing transport phenomenon in which the longitudinal conductivity σ changes with the direction of the current; this is called electrical magnetochiral anisotropy or the unidirectional magnetoresistance (UMR) effect, and it is of the same order O(E^2B) as the NLH effect. The UMR effect in gated MnBi_2Te_4 double SLs can be described by a current- and magnetic-field-dependent conductivity σ_αβ(j,B) = σ^0_αβ( 1 + ∑_γ,λΛ_αβγλ j_γ B_λ), with σ^0_αβ being the ordinary Drude conductivity and Λ_αβγλ an intrinsic material property. Based on the semiclassical equation of motion, the UMR is dominated by the Berry curvature on the Fermi surface, which is confirmed by the analytical results in a 3D Weyl model <cit.>. Figure <ref>(a) shows the calculated Λ_xxxz under a perpendicular electrical field E_⊥ = ± E_c. The calculated UMR shows sharp peaks of about 2×10^4 m A^-1 T^-1 around ε_F = 0, and its sign reverses with the direction of E_⊥, which means the magnitude of the electrical conductivity changes dramatically for a sample under current j ≈ 1 μA/cm and B_z=1 T. Such a nonlinear conductivity can be used to realize an electrical transistor device prototype under a small gating field V_0 which induces E_⊥ = ± E_c [Figs. <ref>(b)-(c)]. Besides, we find that the NLH effect can exhibit sizable signals up to moderate temperatures; however, the UMR effect survives only in the low-temperature regime <cit.>.
§.§ Symmetry analysis
Now we analyze the symmetry properties of the NLH and UMR effects. The UMR generally requires inversion symmetry breaking and can exist in all non-centrosymmetric magnetic point groups. On the other hand, the NLH effect is more complicated. The type-III NLH effect has a different physical origin from the BCD-induced NLAH and the intrinsic nonlinear Hall effect (INHE), both of which belong to the type-I NLH effect, and thus has different symmetry requirements. By careful analysis, we list the magnetic point groups that allow or forbid the type-III NLH effect, along with those for BCD and INHE, as shown in Table <ref>, which is obtained based on the magnetic tensor symmetry module implemented in the Bilbao Crystallographic Server <cit.>. It should be noted that BCD is fully forbidden by the coexistence of a mirror symmetry (which forbids the diagonal parts of BCD) and a rotation symmetry (which forbids the off-diagonal parts of BCD), and the case is similar for INHE, which originates from the quantum metric dipole; in contrast, the type-III NLH effect is related to the Berry curvature square dipole, which can at least have nonzero diagonal elements in the presence of a mirror symmetry. Obviously, we can find some magnetic groups where our type-III NLH effect exists but the BCD-induced NLAH and INHE are forbidden (the first row of Table <ref>). There are also some magnetic groups that allow BCD but forbid the type-III NLH effect, all of which have time-reversal symmetry (the fourth row of Table <ref>). Additionally, from Table <ref>, we can clearly see that the type-III NLH effect can exist in a wider range of magnetic point groups (69 of 122 magnetic point groups allow type-III NLH) than BCD and INHE (53 of 122 magnetic point groups allow BCD and INHE). This broader range may benefit further experimental investigation and verification.
§ SUMMARY
We propose a theory of a new type of nonlinear Hall effect, which stems from the combination of the Lorentz force and the anomalous velocity. Based on first-principles calculations, we identify MnBi_2Te_4 double SLs as an ideal candidate with significant NLH conductivity. Interestingly, a giant UMR effect is also predicted in this material system with full electrical tunability, which may be useful for developing a new generation of high-performance electrically switchable transistors without a PN junction. Our symmetry analysis further shows that this NLH effect can exist in a wider range of magnetic point groups compared to previously reported mechanisms, which may facilitate further investigations into these effects. Our findings highlight the important role of the Lorentz force in exploring the more refined electronic structure and topology of materials, which may have been previously overlooked.
§ ACKNOWLEDGEMENTS
This work was supported by the Innovation Program for Quantum Science and Technology (Grant No. 2023ZD0300500), the Basic Science Center Project of NSFC (Grant No. 51788104), and the Beijing Advanced Innovation Center for Future Chip. Y.L. is sponsored by the Shanghai Pujiang Program (Grant No. 23PJ1413000), NSFC (Grant No. 12404279) and the Fundamental Research Funds for the Central Universities.
§ DERIVATION OF ORDINARY HALL (OH) CONDUCTIVITY
In this section we derive the expression for the ordinary Hall (OH) conductivity of a nearly free electron gas model with a parabolic band. For a nearly free electron gas (without Berry curvature or orbital magnetic moment), the semiclassical equation of motion is
ħk̇ = -eE -eṙ×B,
ṙ = v = 1/ħ∇_kε_k.
The Boltzmann transport equation is
( ∂_t + k̇·∇_k + ṙ·∇_r) f = - f - f_0/τ,
i.e.,
[ 1 + τ( ∂_t + k̇·∇_k + ṙ·∇_r) ] f = f_0.
Because τ is usually small, the above Eq. (<ref>) can be formally solved as
f = [ 1 + τ( ∂_t + k̇·∇_k + ṙ·∇_r) ]^-1 f_0
= ∑^+∞_l=0[ - τ( ∂_t + k̇·∇_k + ṙ·∇_r) ]^l f_0
= ∑^+∞_l=0 f_l,
where f_l ∝τ^l is the l-th order perturbation term for the distribution function. For the steady and uniform distribution, i.e., ∂_t f = ∇_r f =0, the expression of f_l can be simplified as
f_l = ( -τk̇·∇_k)^l f_0.
Now we can derive the expression for the OH conductivity based on Eqs. (<ref>) and (<ref>). Substituting Eq. (<ref>) into (<ref>), we get the first-order term as
f_1 = eτ/ħ ( E + v×B) ·∇_k f_0
= eτ/ħ ( E + v×B ) ·ħv∂ f_0/∂ε_k
= eτE·v∂ f_0/∂ε_k.
From Eq. (<ref>), we can see that the Lorentz force F_L = -ev×B does not enter into the first-order term because it is perpendicular to the group velocity v. As a result, it does not induce a Fermi surface shift. The second-order term is
f_2 = -τk̇·∇_k f_1
= eτ/ħ (E + v×B) ·∇_k[ e τE·v∂ f_0/∂ε_k]
= e^2τ^2/ħ ( E + v×B )
·[ ∇_k (v·E) ∂ f_0/∂ε_k + (E·v) ∂^2 f_0/∂ε^2_kħv].
We focus on the Lorentz-force induced terms in Eq. (<ref>) which will lead to the OH conductivity:
f_OH = -τ^2 e^2/ħ (v×B) ·∇_k (v·E) ∂ f_0/∂ε_k
=τ^2 e^2/ħ [ ∇_k (v·E) ×B] ·v∂ f_0/∂ε_k
= τ^2 e^2 v·[ 1/m^*E×B] ∂ f_0/∂ε_k,
where the dispersion relation of the nearly free electron gas, ε_k = ħ^2 k^2/2m^*, and the inverse effective mass, (1/m^*)_αβ = (1/ħ^2) ∂^2ε_k/∂ k_α∂ k_β, have been used. Thus the ordinary Hall current is expressed by
j^OH = -∫dk/(2π)^d ev f_OH
= - e^3τ^2/m^*∫dk/(2π)^d vv·( E×B) ∂ f_0/∂ϵ_k.
For isotropic 3D electron gas, the OH effect can be expressed as j^OH = σ^OH (E×B) with the ordinary Hall conductivity calculated as
σ^OH = -e^3τ^2/m^*∫dk/(2π)^3 v^2/3∂ f_0/∂ε_k
≈ e^3 τ^2/3m^* (2π)^3∫ dk ħ^2 k^2/(m^*)^2δ(ε_F - ħ^2 k^2/2m^*)
= 2e^3τ^2/3(m^*)^2 (2π)^3∫^+∞_0 dk 4π k^2 k^2δ(k_F - k)/2k_F
= 4π e^3 τ^2 /3(m^*)^2 (2π)^3 k^3_F,
where k_F = √(2m^* ε_F)/ħ is the Fermi wave vector, and ∂ f_0/∂ε_k≈ - δ(ε_F - ε_k) has been used in the above derivation. On the other hand, the carrier density is
n_F = ∫dk/(2π)^3 f_0 = ∫^k_F_0 dk 4π k^2/(2π)^3 = k^3_F/6π^2.
By substituting Eq. (<ref>) into (<ref>), we get the familiar expression for the OH conductivity:
σ^OH = e^3τ^2n_F/(m^*)^2.
Two key points about the ordinary Hall effect:
* It is of second order in τ;
* It relies only on the band dispersion relation ε_k and is independent of the Bloch wave function |u_k⟩.
§ DERIVATION OF TYPE-III NONLINEAR HALL (NLH) EFFECT
In this section, we derive the expression for the NLH effect based on the semiclassical equation of motion of electrons, including Berry curvature and orbital magnetic moment:
ħk̇ = -e E - e ṙ×B,
ṙ = 1/ħ∇_kε_M - k̇×Ω,
where ε_M = ε_k - m·B (m refers to the orbital magnetic moment), and Ω is Berry curvature. Equation (<ref>) can be solved as:
D k̇ = -e/ħE - e/ħṽ×B - e^2/ħ^2 (B·E) Ω,
D ṙ = ṽ + e/ħE×Ω + e/ħ (ṽ·Ω) B,
with ṽ = 1/ħ∇_kε_M being the modified group velocity, and D = ( 1 + e/ħΩ·B)^-1.
We expand the distribution function f in a power series of relaxation time τ as: f = f_0 + f_1 + f_2 + ⋯ with f_n ∝τ^n and f_0 = [ e^(ε_k - ε_F)/k_BT + 1 ]^-1 being the equilibrium distribution function. According to the Boltzmann equation, i.e., Eq. (<ref>) of the main text, we have
f_n = - τk̇·∇_k f_n-1,
for spatially uniform E and B. Substituting Eq. (<ref>) into Eq. (<ref>) we can derive the first-order term f_1 as:
f_1 = τ/D[ e/ħE + e/ħṽ×B + e^2/ħ^2 (B·E) Ω] ·∇_k f_0
= τ/D[ e/ħE + e/ħṽ×B + e^2/ħ^2 (B·E) Ω] · (∇_kε_M) ∂ f_0/∂ε_M
≈τ/D[ eE·ṽ + e ṽ×B·ṽ + e^2/ħ (B·E) (Ω·ṽ) ] ∂ f_0/∂ε_k
≈τ/D[ eE·ṽ + e^2/ħ (B·E) (Ω·v) ] ∂ f_0/∂ε_k,
where v = 1/ħ∇_kε_k is the group velocity. On the third line of Eq. (<ref>) we have used the approximation ∂ f_0/∂ε_M≈∂ f_0/∂ε_k, and on the last line we have substituted some ṽ terms by v by omitting an O(B^2) term in f_1. For small magnetic fields, D ≈ 1 - (e/ħ)Ω·B. Thus the expression for f_1 up to linear order in B simplifies to
f_1 = τ[ eE·ṽ( 1 - e/ħΩ·B) + e^2/ħ (B·E) (Ω·v) ] ∂ f_0/∂ε_k
= τ[ eE·ṽ - e^2/ħ (E·v) (Ω·B) + e^2/ħ (B·E) (Ω·v) ] ∂ f_0/∂ε_k
= τ[ eE·ṽ - e^2/ħ (v×B) · (E×Ω) ] ∂ f_0/∂ε_k
= -τ( F_E ·ṽ + F_L ·v_A ) ∂ f_0/∂ε_k,
with F_E = -eE, v_A = (-e/ħ) E×Ω, and F_L = -e v×B.
§ REFERENCES

[1] R. S. Popović, Hall Effect Devices (CRC Press, 2003).
[2] M. E. Cage, K. Klitzing, A. Chang, F. D. M. Haldane, R. B. Laughlin, A. Pruisken, and D. Thouless, The Quantum Hall Effect (Springer Science & Business Media, 2012).
[3] C. L. Chien and C. R. Westgate, The Hall Effect and Its Applications (Springer Science & Business Media, 2013).
[4] N. Nagaosa, J. Sinova, S. Onoda, A. H. MacDonald, and N. P. Ong, Anomalous Hall effect, Rev. Mod. Phys. 82, 1539 (2010).
[5] D. Xiao, M.-C. Chang, and Q. Niu, Berry phase effects on electronic properties, Rev. Mod. Phys. 82, 1959 (2010).
[6] K. v. Klitzing, G. Dorda, and M. Pepper, New Method for High-Accuracy Determination of the Fine-Structure Constant Based on Quantized Hall Resistance, Phys. Rev. Lett. 45, 494 (1980).
[7] D. J. Thouless, M. Kohmoto, M. P. Nightingale, and M. den Nijs, Quantized Hall Conductance in a Two-Dimensional Periodic Potential, Phys. Rev. Lett. 49, 405 (1982).
[8] F. D. M. Haldane, Model for a Quantum Hall Effect without Landau Levels: Condensed-Matter Realization of the "Parity Anomaly", Phys. Rev. Lett. 61, 2015 (1988).
[9] R. Yu, W. Zhang, H.-J. Zhang, S.-C. Zhang, X. Dai, and Z. Fang, Quantized Anomalous Hall Effect in Magnetic Topological Insulators, Science 329, 61 (2010).
[10] C.-Z. Chang et al., Experimental Observation of the Quantum Anomalous Hall Effect in a Magnetic Topological Insulator, Science 340, 167 (2013).
[11] D. Xiao, Y. Yao, Z. Fang, and Q. Niu, Berry-Phase Effect in Anomalous Thermoelectric Transport, Phys. Rev. Lett. 97, 026603 (2006).
[12] D. T. Son and B. Z. Spivak, Chiral anomaly and classical negative magnetoresistance of Weyl metals, Phys. Rev. B 88, 104412 (2013).
[13] Y. Xu, Z. Gan, and S.-C. Zhang, Enhanced Thermoelectric Performance and Anomalous Seebeck Effects in Topological Insulators, Phys. Rev. Lett. 112, 226801 (2014).
[14] S. Nandy, G. Sharma, A. Taraphder, and S. Tewari, Chiral Anomaly as the Origin of the Planar Hall Effect in Weyl Semimetals, Phys. Rev. Lett. 119, 176804 (2017).
[15] Z. Z. Du, H.-Z. Lu, and X. C. Xie, Nonlinear Hall effects, Nat. Rev. Phys. 3, 744 (2021).
[16] C. Ortix, Nonlinear Hall Effect with Time-Reversal Symmetry: Theory and Material Realizations, Adv. Quantum Technol. 4, 2100056 (2021).
[17] Y. Gao, S. A. Yang, and Q. Niu, Field Induced Positional Shift of Bloch Electrons and Its Dynamical Implications, Phys. Rev. Lett. 112, 166601 (2014).
[18] I. Sodemann and L. Fu, Quantum Nonlinear Hall Effect Induced by Berry Curvature Dipole in Time-Reversal Invariant Materials, Phys. Rev. Lett. 115, 216806 (2015).
[19] Q. Ma et al., Observation of the nonlinear Hall effect under time-reversal-symmetric conditions, Nature 565, 337 (2019).
[20] K. Kang, T. Li, E. Sohn, J. Shan, and K. F. Mak, Nonlinear anomalous Hall effect in few-layer WTe_2, Nat. Mater. 18 (2019), https://doi.org/10.1038/s41563-019-0294-7.
324 (year 2019)NoStop
[Du et al.(2018)Du,
Wang, Lu, and Xie]Du2018Dec
author author Z. Z. Du, author C. M. Wang,
author H.-Z. Lu, and author X. C. Xie, title
title Band Signatures for Strong Nonlinear Hall Effect in
Bilayer WTe_2, https://doi.org/10.1103/PhysRevLett.121.266601 journal
journal Phys. Rev. Lett. volume 121, pages 266601 (year 2018)NoStop
[Du et al.(2019)Du,
Wang, Li, Lu, and Xie]Du2019Jul
author author Z. Z. Du, author C. M. Wang,
author S. Li, author
H.-Z. Lu, and author
X. C. Xie, title title Disorder-induced nonlinear Hall effect with time-reversal
symmetry, https://doi.org/10.1038/s41467-019-10941-3 journal journal Nat. Commun. volume
10, pages 1 (year 2019)NoStop
[Tiwari et al.(2021)Tiwari,
Chen, Zhong, Drueke,
Koo, Kaczmarek, Xiao,
Gao, Luo, Niu, Sun, Yan, Zhao, and Tsen]Tiwari2021Apr
author author A. Tiwari, author F. Chen,
author S. Zhong, author E. Drueke, author
J. Koo, author A. Kaczmarek, author C. Xiao, author J. Gao, author X. Luo, author Q. Niu, author
Y. Sun, author B. Yan, author L. Zhao, and author A. W. Tsen, title title Giant c-axis nonlinear
anomalous Hall effect in T_d-MoTe_2 and WTe_2, https://doi.org/10.1038/s41467-021-22343-5 journal journal Nat. Commun. volume 12, pages 1 (year 2021)NoStop
[He and Weng(2021)]He2021Dec
author author Z. He and author H. Weng, title title Giant nonlinear Hall effect in
twisted bilayer WTe_2, https://doi.org/10.1038/s41535-021-00403-9 journal journal npj Quantum Mater. volume 6, pages 1 (year 2021)NoStop
[Du et al.(2021b)Du, Wang, Sun, Lu, and Xie]Du2021Aug
author author Z. Z. Du, author C. M. Wang,
author H.-P. Sun, author H.-Z. Lu, and author
X. C. Xie, title title Quantum theory of the nonlinear Hall effect, https://doi.org/10.1038/s41467-021-25273-4 journal journal Nat. Commun. volume 12, pages 1 (year 2021b)NoStop
[Lai et al.(2021)Lai,
Liu, Zhang, Zhao,
Feng, Wang, Tang,
Liu, Novoselov, Yang, and Gao]Lai2021Aug
author author S. Lai, author H. Liu, author Z. Zhang, author
J. Zhao, author X. Feng, author N. Wang, author C. Tang, author Y. Liu, author
K. S. Novoselov, author
S. A. Yang, and author
W.-b. Gao, title title Third-order nonlinear Hall effect induced by the Berry-connection
polarizability tensor, https://doi.org/10.1038/s41565-021-00917-0
journal journal Nat. Nanotechnol. volume 16, pages 869 (year
2021)NoStop
[Wang et al.(2021)Wang,
Gao, and Xiao]Wang2021Dec
author author C. Wang, author Y. Gao, and author D. Xiao, title title Intrinsic Nonlinear Hall Effect in
Antiferromagnetic Tetragonal CuMnAs, https://doi.org/10.1103/PhysRevLett.127.277201 journal
journal Phys. Rev. Lett. volume 127, pages 277201 (year 2021)NoStop
[Liu et al.(2021a)Liu, Zhao, Huang, Wu,
Sheng, Xiao, and Yang]Liu2021Dec
author author H. Liu, author J. Zhao, author Y.-X. Huang, author
W. Wu, author X.-L. Sheng, author C. Xiao, and author S. A. Yang, title title Intrinsic
Second-Order Anomalous Hall Effect and Its Application in Compensated
Antiferromagnets, https://doi.org/10.1103/PhysRevLett.127.277202
journal journal Phys. Rev. Lett. volume 127, pages 277202 (year
2021a)NoStop
[Zhang and Fu(2021)]Zhang2021May
author author Y. Zhang and author L. Fu, title title Terahertz detection based on
nonlinear Hall effect without magnetic field, https://doi.org/10.1073/pnas.2100736118 journal journal Proc. Natl. Acad. Sci. U.S.A. volume
118, pages e2100736118 (year 2021)NoStop
[Cao et al.(2022)Cao,
Yu, Leng, Yi, Chen, Yang, Liu, Kong,
Li, Dong, Shi, Bibes, Peng, Zang, and Xiu]Cao2022May
author author X. Cao, author J.-X. Yu,
author P. Leng, author
C. Yi, author X. Chen, author Y. Yang, author S. Liu, author L. Kong, author
Z. Li, author X. Dong, author Y. Shi, author M. Bibes, author R. Peng, author
J. Zang, and author
F. Xiu, title title Giant nonlinear anomalous Hall effect induced by spin-dependent
band structure evolution, https://doi.org/10.1103/PhysRevResearch.4.023100 journal
journal Phys. Rev. Research volume
4, pages 023100 (year 2022)NoStop
[Duan et al.(2022)Duan,
Jian, Gao, Peng,
Zhong, Feng, Mao, and Yao]Duan2022Oct
author author J. Duan, author Y. Jian, author Y. Gao, author
H. Peng, author J. Zhong, author Q. Feng, author J. Mao, and author Y. Yao, title title Giant Second-Order Nonlinear Hall
Effect in Twisted Bilayer Graphene, https://doi.org/10.1103/PhysRevLett.129.186801 journal
journal Phys. Rev. Lett. volume 129, pages 186801 (year 2022)NoStop
[Gao et al.(2023)Gao,
Liu, Qiu, Ghosh,
Trevisan, Onishi, Hu,
Qian, Tien, Chen,
Huang, Béérubéé,
Li, Tzschaschel, Dinh,
Sun, Ho, Lien, Singh, Watanabe, Taniguchi, Bell, Lin, Chang, Du,
Bansil, Fu, Ni, Orth, Ma, and Xu]Gao2023Jun
author author A. Gao, author Y.-F. Liu,
author J.-X. Qiu, author B. Ghosh, author
T. V. Trevisan, author
Y. Onishi, author C. Hu, author T. Qian, author H.-J. Tien,
author S.-W. Chen, author M. Huang, author
D. Béérubéé,
author H. Li, author
C. Tzschaschel, author
T. Dinh, author Z. Sun, author S.-C. Ho, author S.-W. Lien, author B. Singh,
author K. Watanabe, author T. Taniguchi, author
D. C. Bell, author
H. Lin, author T.-R. Chang, author C. R. Du, author A. Bansil, author L. Fu, author N. Ni, author
P. P. Orth, author
Q. Ma, and author
S.-Y. Xu, title title Quantum metric nonlinear Hall effect in a topological
antiferromagnetic heterostructure, https://doi.org/10.1126/science.adf1506 journal journal Science volume 381, pages
181 (year 2023)NoStop
[Wang et al.(2023a)Wang, Zeng,
Duan, and Huang]Wang2023Jul
author author J. Wang, author H. Zeng, author W. Duan, and author
H. Huang, title title Intrinsic Nonlinear Hall Detection of the Néel Vector for
Two-Dimensional Antiferromagnetic Spintronics, https://doi.org/10.1103/PhysRevLett.131.056401 journal
journal Phys. Rev. Lett. volume 131, pages 056401 (year 2023a)NoStop
[Wang et al.(2023b)Wang, Kaplan,
Zhang, Holder, Cao,
Wang, Zhou, Zhou,
Jiang, Zhang, Ru,
Cai, Watanabe, Taniguchi,
Yan, and Gao]Wang2023Sep
author author N. Wang, author D. Kaplan,
author Z. Zhang, author T. Holder, author
N. Cao, author A. Wang, author X. Zhou, author F. Zhou, author Z. Jiang, author
C. Zhang, author S. Ru, author H. Cai, author K. Watanabe,
author T. Taniguchi, author B. Yan, and author
W. Gao, title title Quantum-metric-induced nonlinear transport in a topological
antiferromagnet, https://doi.org/10.1038/s41586-023-06363-3
journal journal Nature volume 621, pages 487 (year
2023b)NoStop
[Zhao et al.(2023)Zhao,
Wang, Ye, Liu, Liao, and Liao]Zhao2023Nov
author author T.-Y. Zhao, author A.-Q. Wang,
author X.-G. Ye, author X.-Y. Liu, author
X. Liao, and author
Z.-M. Liao, title title Gate-Tunable Berry Curvature Dipole Polarizability in Dirac
Semimetal Cd_3As_2, https://doi.org/10.1103/PhysRevLett.131.186302 journal
journal Phys. Rev. Lett. volume 131, pages 186302 (year 2023)NoStop
[Kaplan et al.(2024)Kaplan,
Holder, and Yan]Kaplan2024
author author D. Kaplan, author T. Holder, and author B. Yan, title title Unification of nonlinear anomalous hall effect and
nonreciprocal magnetoresistance in metals by the quantum geometry, https://doi.org/10.1103/PhysRevLett.132.026301 journal
journal Phys. Rev. Lett. volume 132, pages 026301 (year 2024)NoStop
[Li et al.()Li,
Wang, Yang, Zhang,
Teo, Lin, He, Wang, Song, Tian, Loh,
Zhu, Sun, and Wang]Li2024Jan
author author S. Li, author X. Wang, author Z. Yang, author
L. Zhang, author S. L. Teo, author M. Lin, author R. He, author N. Wang, author P. Song, author
W. Tian, author X. J. Loh, author Q. Zhu, author B. Sun, and author X. R. Wang, title title Giant third-order nonlinear
Hall effect in misfit layer compound (SnS)_1.17(NbS_2)_3, https://arxiv.org/abs/arXiv:2401.17808 arXiv:2401.17808 NoStop
[Qin et al.()Qin,
Chen, and Lee]Qin2024Jan
author author F. Qin, author R. Chen, and author C. H. Lee, title title Light-enhanced nonlinear Hall effect, https://arxiv.org/abs/arXiv:2401.18038 arXiv:2401.18038 NoStop
[Tokura and Nagaosa(2018)]Tokura2018Sep
author author Y. Tokura and author N. Nagaosa, title title Nonreciprocal responses
from non-centrosymmetric quantum materials, https://doi.org/10.1038/s41467-018-05759-4 journal journal Nat. Commun. volume 9, pages
1 (year 2018)NoStop
[Rikken et al.(2001)Rikken,
Föölling, and Wyder]Rikken2001Nov
author author G. L. J. A. Rikken, author J. Föölling, and author
P. Wyder, title title Electrical Magnetochiral Anisotropy, https://doi.org/10.1103/PhysRevLett.87.236602 journal
journal Phys. Rev. Lett. volume 87, pages 236602 (year 2001)NoStop
[Rikken and Wyder(2005)]Rikken2005Jan
author author G. L. J. A. Rikken and author P. Wyder, title title
Magnetoelectric Anisotropy in Diffusive Transport, https://doi.org/10.1103/PhysRevLett.94.016601 journal
journal Phys. Rev. Lett. volume 94, pages 016601 (year 2005)NoStop
[Avci et al.(2015)Avci,
Garello, Ghosh, Gabureac,
Alvarado, and Gambardella]Avci2015
author author C. O. Avci, author K. Garello,
author A. Ghosh, author M. Gabureac, author
S. F. Alvarado, and author
P. Gambardella, title
title Unidirectional spin hall magnetoresistance in
ferromagnet/normal metal bilayers, https://www.nature.com/articles/nphys3356 journal journal Nat. Phys. volume 11, pages
570 (year 2015)NoStop
[Yasuda et al.(2016)Yasuda,
Tsukazaki, Yoshimi, Takahashi, Kawasaki, and Tokura]Yasuda2016Sep
author author K. Yasuda, author A. Tsukazaki,
author R. Yoshimi, author K. S. Takahashi, author
M. Kawasaki, and author
Y. Tokura, title title Large unidirectional magnetoresistance in a magnetic topological
insulator, https://doi.org/10.1103/PhysRevLett.117.127202
journal journal Phys. Rev. Lett. volume 117, pages 127202 (year
2016)NoStop
[Fan et al.(2019)Fan,
Shao, Pan, Che, He, Yin, Zheng, Yu,
Nie, Masir et al.]Fan2019
author author Y. Fan, author Q. Shao, author L. Pan, author
X. Che, author Q. He, author G. Yin, author C. Zheng, author G. Yu, author
T. Nie, author M. R. Masir, et al., title
title Unidirectional magneto-resistance in modulation-doped
magnetic topological insulators, https://pubs.acs.org/doi/full/10.1021/acs.nanolett.8b03702 journal journal Nano Lett. volume
19, pages 692 (year 2019)NoStop
[Guillet et al.(2020)Guillet, Zucchetti, Barbedienne,
Marty, Isella, Cagnon,
Vergnaud, Jaffrès, Reyren,
George, Fert, and Jamet]Guillet2020Jan
author author T. Guillet, author C. Zucchetti,
author Q. Barbedienne, author A. Marty, author
G. Isella, author L. Cagnon, author C. Vergnaud, author H. Jaffrès, author N. Reyren, author J.-M. George, author A. Fert, and author M. Jamet, title title Observation of Large Unidirectional
Rashba Magnetoresistance in Ge(111), https://doi.org/10.1103/PhysRevLett.124.027201 journal
journal Phys. Rev. Lett. volume 124, pages 027201 (year 2020)NoStop
[Liu et al.(2021b)Liu, Holder, and Yan]Liu2021Feb
author author Y. Liu, author T. Holder, and author B. Yan, title title Chirality-Induced Giant Unidirectional
Magnetoresistance in Twisted Bilayer Graphene, https://doi.org/10.1016/j.xinn.2021.100085 journal journal Innovation volume 2, pages
100085 (year 2021b)NoStop
[Liu et al.(2021c)Liu, Tan, and Yan]Liu2021Oct
author author Y. Liu, author H. Tan, and author B. Yan, title title Higher-order quantum magnetic inductions in
chiral topological materials, https://doi.org/10.1103/PhysRevB.104.155131 journal journal Phys. Rev. B volume 104, pages 155131 (year 2021c)NoStop
[Liu and Yan(2024)]Liu2024Jan
author author Y. Liu and author B. Yan, title title Anomalous circularly polarized light
emission induced by the optical Berry curvature dipole, https://doi.org/10.1103/PhysRevB.109.035142 journal journal Phys. Rev. B volume 109, pages 035142 (year 2024)NoStop
[Liu et al.()Liu,
Souza, and Tsirkin]Liu2023Mar
author author X. Liu, author I. Souza, and author S. S. Tsirkin, title title Electrical magnetochiral anisotropy
in trigonal tellurium from first principles, https://arxiv.org/abs/arXiv:2303.10164 arXiv:2303.10164 NoStop
[He et al.(2019)He,
Zhang, Zhu, Shi,
Heinonen, Vignale, and Yang]He2019Jul
author author P. He, author S. S.-L. Zhang,
author D. Zhu, author
S. Shi, author O. G. Heinonen, author G. Vignale, and author H. Yang, title title Nonlinear
Planar Hall Effect, https://doi.org/10.1103/PhysRevLett.123.016801 journal
journal Phys. Rev. Lett. volume 123, pages 016801 (year 2019)NoStop
[Huang et al.(2023)Huang,
Feng, Wang, Xiao, and Yang]Huang2023Mar
author author Y.-X. Huang, author X. Feng,
author H. Wang, author
C. Xiao, and author
S. A. Yang, title title Intrinsic Nonlinear Planar Hall Effect, https://doi.org/10.1103/PhysRevLett.130.126303 journal
journal Phys. Rev. Lett. volume 130, pages 126303 (year 2023)NoStop
[Tan et al.(2021)Tan,
Liu, and Yan]Tan2021Jun
author author H. Tan, author Y. Liu, and author B. Yan, title title Unconventional anomalous Hall effect from
magnetization parallel to the electric field, https://doi.org/10.1103/PhysRevB.103.214438 journal journal Phys. Rev. B volume 103, pages 214438 (year 2021)NoStop
[Karplus and Luttinger(1954)]Karplus1954Sep
author author R. Karplus and author J. M. Luttinger, title title Hall Effect in
Ferromagnetics, https://doi.org/10.1103/PhysRev.95.1154
journal journal Phys. Rev. volume 95, pages 1154 (year
1954)NoStop
[Moore and Orenstein(2010)]Moore2010Jul
author author J. E. Moore and author J. Orenstein, title title Confinement-Induced
Berry Phase and Helicity-Dependent Photocurrents, https://doi.org/10.1103/PhysRevLett.105.026805 journal
journal Phys. Rev. Lett. volume 105, pages 026805 (year 2010)NoStop
[Chang and Niu(1996)]Chang1996Mar
author author M.-C. Chang and author Q. Niu, title title Berry phase, hyperorbits, and the
Hofstadter spectrum: Semiclassical dynamics in magnetic Bloch bands, https://doi.org/10.1103/PhysRevB.53.7010 journal
journal Phys. Rev. B volume 53, pages 7010 (year 1996)NoStop
[SM()]SM
@noop note Supplemental materials containing details
about (i) analytical result for nonlinear Hall (NLH) conductivity in massive
Dirac model, (ii) calculation methods of the band structures and conductivity
tensors, (iii) derivation of unidirectional magnetoresistance (UMR) effect,
(iv) analytical and numerical results of NLH and UMR effects in Weyl models,
and (v) temperature dependence of NLH and UMR conductivities. References
<cit.> are cited there.Stop
[Sinitsyn et al.(2005)Sinitsyn, Niu, Sinova, and Nomura]Sinitsyn2005
author author N. A. Sinitsyn, author Q. Niu,
author J. Sinova, and author K. Nomura, title
title Disorder effects in the anomalous hall effect induced by
berry curvature, https://doi.org/10.1103/PhysRevB.72.045346
journal journal Phys. Rev. B volume 72, pages 045346 (year
2005)NoStop
[Sinitsyn(2008)]Sinitsyn2008Jan
author author N. A. Sinitsyn, title title Semiclassical theories
of the anomalous hall effect, https://doi.org/10.1088/0953-8984/20/02/023201 journal
journal J. Phys.: Condens. Matte volume
20, pages 023201 (year 2008)NoStop
[Hou et al.(2015)Hou,
Su, Tian, Jin, Yang, and Niu]DHou2015
author author D. Hou, author G. Su, author Y. Tian, author
X. Jin, author S. A. Yang, and author Q. Niu, title title
Multivariable scaling for the anomalous hall effect, https://doi.org/10.1103/PhysRevLett.114.217203 journal
journal Phys. Rev. Lett. volume 114, pages 217203 (year 2015)NoStop
[Atencia et al.(2023)Atencia, Xiao, and Culcer]Rhonald2023
author author R. B. Atencia, author D. Xiao, and author D. Culcer, title title Disorder in the nonlinear anomalous hall effect of
𝒫𝒯-symmetric dirac fermions, https://doi.org/10.1103/PhysRevB.108.L201115 journal
journal Phys. Rev. B volume 108, pages L201115 (year 2023)NoStop
[Gong et al.(2019)Gong,
Guo, Li, Zhu, Liao, Liu, Zhang, Gu,
Tang, Feng, Zhang,
Li, Song, Wang, Yu, Chen, Wang, Yao,
Duan, Xu, Zhang,
Ma, Xue, and He]Gong2019Jun
author author Y. Gong, author J. Guo, author J. Li, author
K. Zhu, author M. Liao, author X. Liu, author Q. Zhang, author L. Gu, author
L. Tang, author X. Feng, author D. Zhang, author W. Li, author C. Song, author
L. Wang, author P. Yu, author X. Chen, author Y. Wang, author H. Yao, author
W. Duan, author Y. Xu, author S.-C. Zhang, author X. Ma, author Q.-K. Xue, and author K. He, title title Experimental Realization of an Intrinsic Magnetic
Topological Insulator∗, https://doi.org/10.1088/0256-307X/36/7/076801 journal
journal Chin. Phys. Lett. volume 36, pages 076801 (year 2019)NoStop
[Li et al.(2019)Li,
Li, Du, Wang, Gu, Zhang, He, Duan, and Xu]Li2019Jun
author author J. Li, author Y. Li, author S. Du, author
Z. Wang, author B.-L. Gu, author S.-C. Zhang, author K. He, author W. Duan, and author Y. Xu, title title Intrinsic magnetic topological insulators in van
der Waals layered MnBi_2Te_4-family materials, https://doi.org/10.1126/sciadv.aaw5685 journal journal Sci. Adv. volume 5, pages
eaaw5685 (year 2019)NoStop
[Bernevig et al.(2022)Bernevig, Felser, and Beidenkopf]Bernevig2022Mar
author author B. A. Bernevig, author C. Felser, and author H. Beidenkopf, title title Progress and prospects in magnetic
topological materials, https://doi.org/10.1038/s41586-021-04105-x
journal journal Nature volume 603, pages 41 (year 2022)NoStop
[Du et al.(2020)Du,
Tang, Li, Lin, Xu, Duan, and Rubio]Du2020May
author author S. Du, author P. Tang, author J. Li, author
Z. Lin, author Y. Xu, author W. Duan, and author A. Rubio, title title Berry curvature engineering by gating
two-dimensional antiferromagnets, https://doi.org/10.1103/PhysRevResearch.2.022025 journal
journal Phys. Rev. Research volume
2, pages 022025 (year 2020)NoStop
[Elcoro et al.(2019)Elcoro,
Etxebarria, Gallego, Perez-Mato, and Tasci]Elcoro2019May
author author L. Elcoro, author J. Etxebarria,
author S. V. Gallego, author J. M. Perez-Mato, and author E. S. Tasci, title
title Automatic calculation of symmetry-adapted tensors in
magnetic and non-magnetic materials: a new tool of the Bilbao
Crystallographic Server, https://doi.org/10.1107/S2053273319001748 journal journal Acta Crystallogr., Sect. A: Found. Adv. volume 75, pages 438 (year 2019)NoStop
[Kresse and Furthmüller(1996)]VASP
author author G. Kresse and author J. Furthmüller, title title Efficient iterative
schemes for ab initio total-energy calculations using a plane-wave basis
set, https://doi.org/10.1103/PhysRevB.54.11169 journal journal Phys. Rev. B volume
54, pages 11169 (year 1996)NoStop
[Perdew et al.(1996)Perdew,
Burke, and Ernzerhof]PBE
author author J. P. Perdew, author K. Burke, and author M. Ernzerhof, title title Generalized gradient approximation made simple, https://doi.org/10.1103/PhysRevLett.77.3865 journal
journal Phys. Rev. Lett. volume 77, pages 3865 (year 1996)NoStop
[Marzari and Vanderbilt(1997)]wannier1
author author N. Marzari and author D. Vanderbilt, title title Maximally localized
generalized wannier functions for composite energy bands, https://doi.org/10.1103/PhysRevB.56.12847 journal journal Phys. Rev. B volume 56, pages 12847 (year 1997)NoStop
[Souza et al.(2001)Souza,
Marzari, and Vanderbilt]wannier2
author author I. Souza, author N. Marzari, and author D. Vanderbilt, title title Maximally localized wannier functions
for entangled energy bands, https://doi.org/10.1103/PhysRevB.65.035109 journal journal Phys. Rev. B volume 65, pages 035109 (year 2001)NoStop
[Mostofi et al.(2014)Mostofi, Yates, Pizzi, Lee,
Souza, Vanderbilt, and Marzari]wannier3
author author A. A. Mostofi, author J. R. Yates,
author G. Pizzi, author Y.-S. Lee, author
I. Souza, author D. Vanderbilt, and author N. Marzari, title title An
updated version of wannier90: A tool for obtaining maximally-localised
wannier functions, https://doi.org/https://doi.org/10.1016/j.cpc.2014.05.003 journal journal Comput. Phys. Commun. volume 185, pages 2309 (year
2014)NoStop
[Ye et al.(2022)Ye,
Xie, Lv, Huang, Yang, Jiang, Liu, Zhu,
Qiu, Tong, Zhou,
Hsu, Chang, Lin,
Li, Yang, Wang, Jiang, and Renshaw Wang]Ye2022Feb
author author C. Ye, author X. Xie, author W. Lv, author
K. Huang, author A. J. Yang, author S. Jiang, author X. Liu, author D. Zhu, author X. Qiu, author M. Tong, author
T. Zhou, author C.-H. Hsu, author G. Chang, author H. Lin, author P. Li, author K. Yang, author Z. Wang, author
T. Jiang, and author
X. Renshaw Wang, title
title Nonreciprocal Transport in a Bilayer of MnBi2Te4 and
Pt, https://doi.org/10.1021/acs.nanolett.1c04756 journal journal Nano Lett. volume
22, pages 1366 (year 2022)NoStop
http://arxiv.org/abs/2409.02372v1 | 20240904014124 | A Principal Square Response Forward Regression Method for Dimension Reduction | [ "Zheng Li", "Yunhao Wang", "Wei Gao", "Hon Keung Tony Ng" ] | stat.ME | [ "stat.ME" ] |
A Principal Square Response Forward Regression Method for Dimension Reduction
Zheng Li, Yunhao Wang, Wei Gao, Hon Keung Tony Ng
=============================================================================
§ ABSTRACT
Dimension reduction techniques, such as Sufficient Dimension Reduction (SDR), are indispensable for analyzing high-dimensional datasets. This paper introduces a novel SDR method named Principal Square Response Forward Regression (PSRFR) for estimating the central subspace of the response variable Y, given the vector of predictor variables X. We provide a computational algorithm for implementing PSRFR and establish its consistency and asymptotic properties. Monte Carlo simulations are conducted to assess the performance, efficiency, and robustness of the proposed method. Notably, PSRFR exhibits commendable performance in scenarios where the variance of each component becomes increasingly dissimilar, particularly when the predictor variables follow an elliptical distribution. Furthermore, we illustrate and validate the effectiveness of PSRFR using a real-world dataset concerning wine quality. Our findings underscore the utility and reliability of the PSRFR method in practical applications of dimension reduction for high-dimensional data analysis.
Keywords: central subspace; principal square response forward regression; regression model; sufficient dimension reduction.
§ INTRODUCTION
The convergence of technological advancements, the digitalization of society, the big data paradigm, computational capabilities, and the rise of machine learning and artificial intelligence has fueled the rapid growth of high-dimensional data across numerous domains such as personalized medicine <cit.>, computer vision <cit.>, econometrics <cit.>, and causal inference <cit.>. Sufficient dimension reduction (SDR), a statistical method to extract essential information from high-dimensional data while reducing the dimensionality of the data, stands out as a pivotal tool in the analysis of high-dimensional data. The primary goal of SDR is to identify linear combinations of the independent variables that capture all the relevant information about the conditional distribution of the response variable Y given the vector of predictor variables X, i.e., Y |X. By reducing the dimensionality of the data while retaining the essential information, SDR enables more efficient and effective analysis of high-dimensional datasets. In many practical applications, the underlying parametric model is often unknown. In such cases, <cit.> proposed a general model that assumes an ideal scenario where the high-dimensional vector of predictor variables, X, can be reconstructed from low-dimensional projections for the purpose of regressing Y on X:
Y=g(β_1^⊤X,β_2^⊤X,…,β_k^⊤X,ε),
where Y is the one-dimensional response variable, X=(X_1,…,X_p)^⊤ is the p-dimensional vector of predictors, g: ℝ^k+1→ℝ is an unknown link function, the β_i's are unknown non-random column vectors, and ε is an error term independent of X.
Let B=( β_1,…, β_k) ∈ℝ^p× k (k≤ p) be a p× k matrix with columns β_i, i=1,…,k. Then, Y depends on X only through B^⊤X, and the purpose of SDR is to find a matrix B such that
Y⊥⊥X|B^⊤X,
where ⊥⊥ represents independence. The space Span(B), spanned by these linear combinations, is often called the effective dimension reduction (EDR) space. Note that B always exists because B degenerates into the identity matrix when k = p, and it is not unique because if Eq. (<ref>) is true, then Y ⊥⊥X|PB^⊤X for any nonsingular matrix P. Thus, the identifiable parameter here is the subspace Span(B) rather than B itself. <cit.> introduced the central dimension reduction subspace (the smallest) to address the uniqueness problem of the EDR space. If a space is an EDR space and this space is contained in any EDR space, then this space is called the central dimension reduction subspace, denoted as S_Y|X.
Considering the mean function of regression E(Y|X), the purpose of SDR is to find a p × k matrix B such that
Y⊥⊥E(Y|X) |B^⊤X,
where Span(B) is called the mean dimension reduction space <cit.>, which is equivalent to
E(Y|X)=E(Y|B^⊤X)=h(β_1^⊤X,…,β_k^⊤X).
Similarly, if a subspace is a mean dimension reduction space and it is contained in any mean dimension reduction space, then this subspace is called the central mean dimension reduction subspace (the smallest), denoted as S_E(Y|X). As expressed in <cit.>, S_E(Y|X)⊆S_Y|X, which yields Eq. (<ref>) from Eq. (<ref>).
In general, the methods to obtain the SDR estimator S_Y|X can be classified into two categories: inverse regression and forward regression <cit.>.
For inverse regression methods, the sliced inverse regression (SIR) was first proposed by <cit.>, where “inverse regression" refers to the conditional expectation E(X| Y) with Var{E(X| Y)} contained in S_Y|X. This is achieved by assuming a linear conditional mean (LCM) for the basis matrix B, which serves as a fundamental assumption for numerous dimension reduction techniques.
Since S_Y|X remains invariant when B is multiplied by any k× k full-rank matrix and B is unknown in practice, the LCM condition is typically required to hold for all possible B, which is equivalent to X following an elliptically symmetric distribution.
Inspired by the SIR, other “inverse regression” methods designed for estimating S_Y|X have been studied. These methods include the sliced average variance estimate (SAVE) based on second-order conditional moments <cit.>,
parametric inverse regression <cit.>, canonical correlation estimator <cit.>, contour regression <cit.>, inverse regression estimator (IRE) <cit.>, principal fitted components <cit.>, likelihood acquired directions <cit.>, directional reduction <cit.>, elliptically contour inverse predictors <cit.>, elliptical sliced inverse regression <cit.>, generalized
kernel-based inverse regression <cit.>, and functional SDR estimators <cit.>.
Forward regression methods focus on the conditional distribution of Y given X, i.e., Y|X. <cit.> first introduced the ordinary least squares (OLS) as a dimension reduction method, known for its intuitive nature and straightforward algorithm. However, the primary drawback of the OLS method lies in its capability to identify only one vector, and its performance suffers notably when the dimension of S_E(Y|X) exceeds one. <cit.> proposed principal Hessian directions (PHD) by finding the Hessian matrix of E(Y|X) with the application of Stein's lemma.
Building on the PHD method, <cit.> introduced the iterative Hessian transformation (IHT) method, which estimates S_E(Y|X) iteratively. This method was further studied by <cit.>. Additionally, <cit.> proposed the generalized PHD (GPHD), and <cit.> proposed the adjusted PHD (APHD) methods for mixture multivariate skew elliptical distributions and non-Gaussian predictors, respectively. The IHT, GPHD, and APHD methods can be applied to a wider range of scenarios due to their less restrictive conditions compared to the PHD method. Other forward regression dimension reduction methods, such as the minimum average variance estimator (MAVE) <cit.>, sliced regression <cit.>, ensemble of minimum average variance estimators <cit.>, semiparametric dimension reduction method <cit.>, outer-product-gradient method (OPG) <cit.> and optimal SDR <cit.>, have been developed in the literature.
This paper introduces a principal square response forward regression (PSRFR) estimator for dimension reduction. The proposed approach leverages the OLS estimator from a fresh angle, thus resolving the challenge of OLS's limited capability to recover only one dimension.
In contrast to the PHD method, we demonstrate that the proposed PSRFR approach may be applicable under the assumption of elliptical distributions. Moreover, the PSRFR method offers greater simplicity and intuitiveness compared to the IHT method. Additionally, it can identify more central subspace directions in certain scenarios than the PHD and IHT methods.
The rest of this paper is organized as follows. In Section <ref>, we propose the PSRFR estimator and derive its consistent and asymptotic normal distribution. A simulation study is conducted in Section <ref>, demonstrating that the PSRFR estimator outperforms existing methods when the variance of each component of X is significantly different under the assumption of an elliptical distribution, and it also showcases its robustness. In Section <ref>, we investigate the Wine Quality dataset through a real data analysis. Section <ref> provides some concluding remarks for this paper.
§ PROPOSED METHODOLOGY
In this section, we introduce the proposed PSRFR estimator for Span(B), which is the smallest dimension reduction subspace we are interested in, and examine its associated theoretical properties.
Here, we assume that the structural dimension (the rank of the matrix B) is known; we do not attempt to determine the dimension of the central subspace in this paper. Hence, estimating Span(B) is equivalent to estimating the directions of a basis of Span(B). For convenience, we consider B satisfying B^⊤B=I_k, where I_k is the k-dimensional identity matrix.
§.§ PSRFR Estimator
To introduce the proposed PSRFR estimator, we start with the OLS estimator under the elliptical distributions with the following assumption.
The distribution of X is an elliptical distribution with mean E(X)=0 and variance-covariance matrix Var(X)=Σ_X.
The following Lemma <ref> shows that E(YX) fall in the central mean dimension reduction subspace S_E(Y|X).
For Y and X that satisfy Eq. (<ref>), Assumption <ref> implies
E(YX)=Σ_XBΛ,
where Λ=(λ_1, …,λ_k)^⊤ is a constant vector.
The proof of Lemma <ref> is provided in the Appendix. Results analogous to those in Lemma <ref> are also given in <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>.
Note that the OLS estimator is a vector representation of a basis in S_E(Y|X), and it is a precise estimator only when the structural dimension is one. According to Eq. (<ref>), Σ_X^-1E(YX)=BΛ is a linear combination of {β_i}_i=1^k,
which indicates that it lies in the subspace spanned by {β_i}_i=1^k. To obtain a complete basis of S_E(Y|X), it is required to determine an orthonormal basis for S_E(Y|X). Without loss of generality, we assume that X has mean 0 and the identity matrix as its variance-covariance matrix, and that {(Y_j,X_j)}_j=1^n is a set of independent and identically distributed samples from Eq. (<ref>).
The crux of the issue lies in estimating the basis of S_E(Y|X) using {Z_j}_j=1^n, where Z_j=Y_jX_j. This can be intuitively solved by minimizing the sum of distances from all sample points to the hyperplane spanned by the basis matrix B. Given that B^⊤B=_k, the distance from Z_j to the hyperplane Span(B) can be expressed as follows:
d_j =||Z_j-BB^⊤Z_j||_2^2
= Z_j^⊤Z_j-Z_j^⊤BB^⊤Z_j,
where ||·||_2 represents the L_2 norm. Then, the minimization problem to obtain the estimator for Span(B) can be expressed as
min_B∑_j=1^n d_j
=
min_B∑_j=1^n(Z_j^⊤Z_j-Z_j^⊤ BB^⊤Z_j)
=min_{β_i}_i=1^k∑_j=1^n (Z_j^⊤Z_j- ∑_i=1^kZ_j^⊤β_iβ_i^⊤Z_j ),
which is equivalent to the maximization problem
max_{β_i}_i=1^k∑_j=1^n∑_i=1^kZ_j^⊤β_iβ_i^⊤Z_j =max_{β_i}_i=1^k∑_j=1^n∑_i=1^kβ_i^⊤Z_jZ_j^⊤β_i
=max_{β_i}_i=1^k∑_i=1^kβ_i^⊤ ( ∑_j=1^nZ_jZ_j^⊤ )β_i.
It can be shown that the solution of the above maximization problem is given by the first k eigenvectors corresponding to the k largest eigenvalues of ∑_j=1^nZ_jZ_j^⊤, since the objective is a sum of quadratic forms that is maximized, over orthonormal {β_i}_i=1^k, by the leading eigenvectors of this positive semi-definite matrix.
For the practical situation that the mean E(X) and variance-covariance matrix of X, Σ_X, are unknown,
the sample mean X̅ and the sample variance-covariance matrix S_n based on the observed sample can be used to approximate E(X) and Σ_X, respectively. Then, the data can be transformed as Z_j=S_n^-1Y_j(X_j-X̅).
The aforementioned results describe how the PSRFR estimator of Span(B) can be obtained; an orthonormal basis of Span(B) can be estimated in the same way. The algorithm to obtain the PSRFR estimator, namely the PSRFR algorithm, is presented in Algorithm 1.
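For concreteness, Algorithm 1 admits the following minimal base-R sketch; the function name psrfr is ours for illustration and is not taken from the implementation referred to later:

psrfr <- function(Y, X, k) {
  # PSRFR: first k eigenvectors of (1/n) sum_j Z_j Z_j^T, with Z_j = S_n^{-1} Y_j (X_j - Xbar)
  Y      <- as.numeric(Y)
  Xc     <- scale(X, center = TRUE, scale = FALSE)   # rows: X_j - Xbar
  Sn_inv <- solve(cov(X))                            # S_n^{-1}
  Z      <- (Y * Xc) %*% Sn_inv                      # row j equals Z_j^T (S_n^{-1} is symmetric)
  M      <- crossprod(Z) / nrow(X)                   # (1/n) sum_j Z_j Z_j^T
  eigen(M, symmetric = TRUE)$vectors[, 1:k, drop = FALSE]
}

The returned columns are orthonormal, so they directly provide an estimated orthonormal basis (β̂_1, …, β̂_k) of Span(B).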
At the population level, the PSRFR estimator is based on the p × p positive-defined matrix
Σ_X^-1E[Y^2{X-E(X)}{X-E(X)}^⊤]Σ_X^-1,
which is associated with the principal components analysis (PCA) and the PHD method, as described below.
For PCA, the principal components of X are defined as the set of linear combinations of X that have the largest variances. The first principal component of X can be obtained by solving the maximization problem
max_||γ ||_2=1γ^⊤E[{X-E(X)}{X-E(X)}^⊤]γ,
in which the solution is the first eigenvector of E[{X-E(X)}{X-E(X)}^⊤]. Similarly, the first k principal components of X are the first k eigenvectors of Σ_X. After obtaining the principal components, a regression model such as the one presented in Eq. (<ref>) can be constructed.
However, projecting the p-dimensional predictor variables onto lower dimensions first might inadvertently result in an inaccurate relationship between Y and the original X. In contrast, the proposed PSRFR method systematically establishes the connections between X and Y at the beginning of the dimension reduction process.
The PHD method is based on the Hessian matrix of the twice-differentiable regression function E(Y|X), denoted as H_m(X). By the chain rule, we have
H_m(X) =∂^2E(Y|X)/∂X∂X^⊤=B{∂^2E(Y|B^⊤X)/∂ (B^⊤X)∂ (X^⊤B)}B^⊤.
Let H̄_m=E{H_m(X)}. If X follows a normal distribution, H̄_m can be transformed by Stein's lemma into the easily estimable form Σ_X^-1Σ_YXXΣ_X^-1,
where
Σ_YXX=E[{Y-E(Y)}{X-E(X)}{X-E(X)}^⊤],
which is known as the response-based (y-based) version of the PHD method. In contrast to the PHD approach, besides extending the normal distribution assumption to the elliptical distribution assumption, the proposed PSRFR replaces Y with Y^2 in Eq. (<ref>), potentially leading to notable effects on the estimators.
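For a concrete comparison, the y-based PHD estimator can be sketched analogously in base R (an illustrative transcription of the matrix above, with directions ordered by absolute eigenvalue as is common practice; the function name is ours):

phd <- function(Y, X, k) {
  # eigenvectors of Sx^{-1} Sigma_YXX Sx^{-1}, ordered by |eigenvalue|
  Y    <- as.numeric(Y)
  Xc   <- scale(X, center = TRUE, scale = FALSE)
  Syxx <- crossprod((Y - mean(Y)) * Xc, Xc) / nrow(X)
  Si   <- solve(cov(X))
  M    <- Si %*% Syxx %*% Si
  e    <- eigen(M, symmetric = TRUE)
  e$vectors[, order(abs(e$values), decreasing = TRUE)[1:k], drop = FALSE]
}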
The major difference between the proposed PSRFR approach and the PHD approach is that, by the law of iterated expectation, they involve E(Y^2|X) and E(Y|X), respectively. Consider the following model in <cit.>:
Y=f(β_1^⊤X)+g(β_2^⊤X)ε,
where ε is independent of X with zero mean, β_1,β_2∈ℝ^p, and f and g are unknown link functions.
Since E(Y|X)=f(β_1^⊤X) and E(Y^2|X)=f^2(β_1^⊤X)+g^2(β_2^⊤X)Var(ε), the PSRFR method can identify more central subspace directions than the PHD method. Moreover,
E(Y^2|X)=Var(Y|X)+{E(Y|X)}^2
according to the definition of conditional variance. Here, E(Y^2|X) contains Var(Y|X), and not just E(Y|X).
Remark <ref> shows that the PSRFR might identify more central subspace directions in the above example under the model in Eq. (<ref>); hence, the EDR space estimated by the PSRFR is no longer limited to the central mean subspace. Furthermore, we are interested in
E(Y^2|X)=E(Y^2|B^⊤X)=H(β_1^⊤X, …,β_k^⊤X).
Due to the non-uniqueness of B and for ease of expression in the remainder of the paper, Span(B) is used to represent the dimension reduction subspace in Eq. (<ref>), where Span(B) ⊆S_Y|X, which makes it easy to derive Eq. (<ref>) from Eq. (<ref>).
§.§ Asymptotic Properties
In this subsection, we will prove that {β̂_i}_i=1^k converge to an orthonormal basis of Span(B) under some mild conditions.
Under the model in Eq. (<ref>), the inequality
E{Y^2(β^⊤Σ_X^-1X)^2}>E{Y^2(α^⊤Σ_X^-1X)^2}
holds, where β∈Span(B), α∈Span(B)^⊥, ||β||_2=||α||_2=1, and the symbol ⊥ represents the orthogonal complement.
In Assumption <ref>, the term on the left-hand side of the inequality in Eq. (<ref>),
E{Y^2(β^⊤Σ_X^-1X)^2}=(β^⊤BΛ)^2+Var(Yβ^⊤Σ_X^-1X),
is the sum of the fixed term (β^⊤BΛ)^2 and a variance term, whereas the term on the right-hand side of the inequality in Eq. (<ref>),
E{Y^2(α^⊤Σ_X^-1X)^2}=Var(Yα^⊤Σ_X^-1X),
is a pure variance term.
For the model in Eq. (<ref>), under Assumptions <ref> and <ref>, the first k eigenvectors corresponding to the first k eigenvalues of E(ZZ^⊤) form a basis of Span(B), where Z=Σ_X^-1YX, and E(ZZ^⊤)-Σ_X^-1E(G) is contained in Span(B), where G= Y^2 w_k+1^2, w_k+1 is the (k+1)-th element of
W=ΘX=(w_1,…,w_k,w_k+1,…,w_p)^⊤, and Θ=(β_1,…,β_k,α_k+1,…,α_p)^⊤≡( B,A)^⊤ is an orthogonal matrix.
The proof of Theorem <ref> is provided in the Appendix.
Although E(ZZ^⊤) does not fall completely in Span(B), a basis can still be found through the eigendecomposition of E(ZZ^⊤), namely,
E(ZZ^⊤)=[ Q_1 Q_0 ][ Ψ_1 0; 0 Ψ_0 ][ Q_1^⊤; Q_0^⊤ ]=Q_1Ψ_1Q_1^⊤+Q_0Ψ_0Q_0^⊤,
where Q = ( Q_1,Q_0 ) is a p-dimensional orthogonal matrix, and Ψ_1 and Ψ_0 are diagonal matrices with dimensions k and p-k, respectively. The diagonal elements in Ψ_1 and Ψ_0 are ordered from largest to smallest.
In the proof of Theorem <ref>, we show that
E(ZZ^⊤)=BΓ _1B^⊤+AΓ _2A^⊤,
where Γ_1 is a k × k positive-definite matrix whose (i, j)-th element is E{ H(w_1,…,w_k) w_iw_j}, and Γ _2 is a (p-k) × (p-k) diagonal matrix with all diagonal elements equal to E(G).
Since Γ_1 in Eq. (<ref>) need not be diagonal, we can rewrite BΓ _1B^⊤=BVΦV^⊤B^⊤ by eigendecomposition, where V and Φ are k-dimensional orthogonal and diagonal matrices, respectively.
Then, it is sufficient to show that Span(B)=Span(BV)=Span(Q_1); the pivotal challenge is to identify Span(B) through the eigendecomposition of E(ZZ^⊤).
Under Assumption <ref>, the eigenvalues corresponding to the eigenvectors of Γ_1 exceed those corresponding to the eigenvectors of Γ_2, which guarantees that the first k eigenvectors of E(ZZ^⊤) corresponding to the first k eigenvalues must form a basis of Span(B).
Thus, the basis can be determined by finding the first k eigenvectors corresponding to the first k eigenvalues.
For the model in Eq. (<ref>), if X follows a multivariate Gaussian distribution, then Σ_X^-1E[{Y^2-E(Y^2)}XX^⊤]Σ_X^-1 is contained in Span(B).
In the case of Remark <ref>, the proposed PSRFR method can still identify more central subspace directions; more details are provided in the proof of Theorem <ref> presented in the Appendix.
Moreover, following the squared-response idea underlying the PSRFR method, if one first transforms the response variable Y into Y^2 and then applies SDR methods that target E(Y|X), more central subspace directions can be identified in the case of Remark <ref>.
Following Theorem <ref>, the asymptotic properties of the estimator {β̂_i}_i=1^k can be established, and the results are given in Theorem <ref>.
For the model in Eq. (<ref>), under Assumptions <ref> and <ref>, if E{E(Y|X)^2}
<∞ holds, then
Span(β̂_1,…,β̂_k)Pr⟶Span(β_1,…,β_k),
and Span(B̂) converges to Span(B) at rate n^1/2.
In addition, if Var{vec(ZZ^⊤) } exists, then
√(n) [ vec(𝒵̂)- vec{E(ZZ^⊤)} ]L⟶N[ 0, Var{vec(ZZ^⊤) }],
where vec(·) is the operator that maps a symmetric matrix to a vector by stacking the
main diagonal and the elements below the main diagonal by columns, i.e., if S is a p × p symmetric matrix
S =
[ s_11 s_12 s_13 ⋯ s_1p; s_12 s_22 s_23 ⋯ s_2p; s_13 s_23 s_33 ⋯ s_3p; ⋮ ⋮ ⋮ ⋯ ⋮; s_1p s_2p s_3p ⋯ s_pp; ],
then vec(S) = (s_11, …, s_1p, s_22,…, s_2p, s_33, …, s_3p, …, s_pp)^⊤.
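In base R, this operator can be written in one line, since column-major extraction of the lower triangle (including the diagonal) of a symmetric matrix matches the ordering above:

vech <- function(S) S[lower.tri(S, diag = TRUE)]   # e.g., for p = 3: (s11, s12, s13, s22, s23, s33)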
The proof of Theorem <ref> is provided in the Appendix.
Note that
E(Y^2)=E{E(Y^2|X)}=E{H(β_1^⊤X,⋯,β_k^⊤X)}≥E{E(Y|X)^2},
hence, in Theorem <ref>, instead of the condition E{E(Y|X)^2}<∞, we can use E(Y^2)<∞.
Because 𝒵̂, ZZ^⊤, and E(ZZ^⊤) are p × p symmetric matrices, the dimensions of vec(𝒵̂), vec(ZZ^⊤), and vec{E(ZZ^⊤)} are p(p+1)/2.
Notice that Var{vec(ZZ^⊤)} is a p(p+1)/2 by p(p+1)/2 matrix. In the proof of Theorem <ref>, we represent Var{vec(ZZ^⊤)} through the Kronecker product ⊗, where the dimension of (ZZ^⊤)⊗(ZZ^⊤) is p^2 by p^2. Although the Kronecker product representation is degenerate (all the elements of the symmetric matrix ZZ^⊤ are used), it is adopted for notational convenience only, as our concern is the convergence of each element of the variance of the random matrix ZZ^⊤.
Theorem <ref> shows that Span(B̂) is a √(n)-consistent estimator of Span(B) by the law of large numbers. Although the first k eigenvectors corresponding to the first k eigenvalues of E(ZZ^⊤) are not the original basis in Eq. (<ref>), they span the same space. Moreover, 𝒵̂=n^-1∑_j=1^n(Z_jZ_j^⊤) is a √(n)-consistent estimator of E(ZZ^⊤).
Additionally, if Var{vec(ZZ^⊤) } exists, the asymptotic normality property is obtained by the central limit theorem.
§ MONTE CARLO SIMULATION STUDY
In this section, we evaluate the performance of the proposed PSRFR method by using a Monte Carlo simulation study. To measure the distance between the true subspace Span(B) and the corresponding estimator Span(B̂) for B̂=(β̂_̂1̂, …,β̂_̂k̂), we consider the trace correlation defined as <cit.>
R = trace (P_BP_B̂ )/k,
where P_B=B(B^⊤B)^-1B^⊤ denotes the projection matrix.
Without loss of generality, we assume B̂ is a column-orthogonal matrix due to the property of B; otherwise, Gram-Schmidt orthonormalization can be applied without changing the subspace. Then, the trace correlation based on the estimator B̂ in Eq. (<ref>) can be calculated as
R =trace (B̂^⊤BB^⊤B̂ )/k.
Here, the trace correlation R can be used to evaluate and compare the performance of different estimation methods. The trace correlation is a value between 0 and 1, and a larger value of R indicates a better estimator B̂. In the following subsections, we consider X following elliptical distributions in Section <ref> to investigate the validity of the proposed PSRFR method, and X following non-elliptical distributions in Section <ref> to evaluate the robustness of different methods.
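A direct base-R transcription of Eq. (<ref>) is as follows (the function name is ours):

trace_correlation <- function(Bhat, B) {
  # R = trace(P_B P_Bhat) / k, where P_B projects onto Span(B)
  PB  <- B %*% solve(crossprod(B), t(B))
  PBh <- Bhat %*% solve(crossprod(Bhat), t(Bhat))
  sum(diag(PB %*% PBh)) / ncol(B)
}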
§.§ Elliptical distributions
In this subsection, two types of elliptical distributions, normal and non-normal, are considered for the predictor variables X.
§.§.§ Normal distribution
First, we compare the proposed PSRFR method to the PHD and IHT methods under the model in <cit.> described in Remark <ref>. The following two models are considered in the simulation study:
* Model [N1]
Y=β_1^⊤X+β_2^⊤X·ε.
* Model [N2]
Y=sin(β_1^⊤X)+(|β_2^⊤X+1|)^1/2·ε.
We consider p=dim(X) = 10, X∼N_10(0, Σ_norm), ε∼N(0,1), β_1=(1,0,0,…,0) and β_2=(0,1,0,…,0), where Σ_norm is a diagonal matrix with the diagonal elements (1,2,3, …, 10), and the sample sizes are n = 100, 300 and 500.
We use the trace correlation as a comparison criterion and also consider the cosine similarity criteria.
Specifically, the cosine similarity for β_1 is defined as
|cos_1|=max{|β̂_1^⊤β_1|/||β̂_1||_2, |β̂_1^⊤β_2|/||β̂_1||_2},
which is the absolute value of the cosine of the angle between β̂_1 and the closest true direction. Similarly, the cosine similarity for β_2 is defined as
|cos_2|=max{|β̂_2^⊤β_2|/||β̂_2||_2, |β̂_2^⊤β_1|/||β̂_2||_2}.
Larger values of |cos_1| and |cos_2| indicate a better estimator B̂.
The computer program written in R <cit.> for the implementation of the proposed PSRFR method is provided in the Appendix. The PHD and IHT methods are implemented in the R packages dr <cit.> and itdr <cit.>, respectively.
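For concreteness, one Monte Carlo run for Model [N1] can be sketched as follows, reusing the illustrative psrfr and trace_correlation functions given earlier; since the eigenvectors returned by psrfr have unit norm, the denominators in |cos_1| and |cos_2| drop out:

set.seed(2024)
n <- 500; p <- 10
X <- matrix(rnorm(n * p), n, p) %*% diag(sqrt(1:p))    # X ~ N_10(0, Sigma_norm)
Y <- X[, 1] + X[, 2] * rnorm(n)                        # Model [N1]
Bhat <- psrfr(Y, X, k = 2)
b1 <- c(1, rep(0, p - 1)); b2 <- c(0, 1, rep(0, p - 2))
cos1 <- max(abs(sum(Bhat[, 1] * b1)), abs(sum(Bhat[, 1] * b2)))
cos2 <- max(abs(sum(Bhat[, 2] * b2)), abs(sum(Bhat[, 2] * b1)))
c(R = trace_correlation(Bhat, cbind(b1, b2)), cos1 = cos1, cos2 = cos2)

Averaging these quantities over 1000 such replications yields summary statistics of the kind reported in Table <ref>.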
The averages and standard deviations (SDs) of
R, |cos_1| and |cos_2| for the proposed PSRFR method, the PHD method, and the IHT method based on 1000 simulations are reported in Table <ref>.
From the results in Table <ref>, the performance of the methods considered here improved with the increase in sample size.
We observe that the PSRFR method identifies the whole subspace more accurately and estimates each direction well compared with the PHD and IHT methods, while the IHT method is more accurate at recognizing the first direction.
In addition to the PHD and IHT methods, we further compare the PSRFR method with the SIR, SAVE, and IRE methods under three different models with normally distributed predictor variables studied in Example 3 of <cit.>, <cit.>, and Example 3 of <cit.>:
* Model [N3]
Y=(4+β_1^⊤X)· (β_2^⊤X+2)+σε.
* Model [N4]
Y=β_1^⊤X/{0.5+(β_2^⊤X+3)^2}+σε.
* Model [N5]
Y=( β_1^⊤X)^2+ | β_2^⊤X | +σε.
We consider σ=0.5, the aforementioned settings for X, β_1, and β_2, and the number of slices H = 10.
The SIR, SAVE, and IRE methods are also implemented in the R package dr <cit.>. Table <ref> reports the averages and standard deviations of the trace correlation R based on 1000 simulations.
From Table <ref>, once again, the performance of the methods considered here improved with the increase in sample size. The PSRFR method performs well in almost all the settings considered here, with different variances of each predictor component. Compared to the PHD and IHT methods, the PSRFR underperforms under model [N3], especially for sample size n = 100, since model [N3] satisfies the assumptions under which the PHD method is developed (normality of the predictor variables), and the IHT method builds on the PHD method under the assumption of elliptically distributed predictor variables.
The PSRFR method performs better than the other methods considered here under models [N4] and [N5], which indicates that the PSRFR method is an effective method for estimating Span(B) in the multivariate normal case with different variances of the predictor variables.
§.§.§ Non-normal distributions
In this subsection, we consider the predictor variables following different non-normal multivariate elliptical distributions. Specifically, the multivariate Student's t and multivariate power exponential distributions are considered:
Multivariate Student's t distribution <cit.>:
A p-dimensional random vector X is said to be distributed as a multivariate Student's t distribution with degrees of freedom ν, mean vector μ, and positive-definite symmetric matrix Σ if its joint probability density function is given by
f_t(X) = Γ((ν+p)/2)/(πν)^p/2Γ(ν/2)|Σ|^1/2
×[1+1/ν(X-μ)^⊤Σ^-1(X-μ)]^-(ν+p)/2, X∈ℝ^p.
As ν→∞, the limiting form is the multivariate normal distribution. Hence, multivariate Student's t distribution with small degrees of freedom deviates significantly from multivariate normal distribution, especially in the tail areas. In the special case of ν =1, the multivariate Student's t distribution is a multivariate Cauchy distribution. Notice that the variance-covariance matrix of the multivariate Student's t distribution is given by ν/(ν-2)Σ for ν>2. Hence, for multivariate Student's t distribution with degrees of freedom 1 and 2, the variance-covariance matrix does not exist.
Multivariate power exponential distribution <cit.>:
A p-dimensional random vector X is said to be distributed as a multivariate power exponential distribution with kurtosis parameter β>0, mean vector μ, and positive-definite symmetric matrix Σ if its joint probability density function is given by
f_PE(X) = pΓ(p/2)/{Γ(1+p/(2β))π^p/22^1+p/(2β)|Σ|^1/2}
×exp[-1/2{(X-μ)^⊤Σ^-1(X-μ)}^β], X∈ℝ^p.
In particular, the multivariate Laplace and multivariate normal distributions are special cases of the multivariate power exponential distribution when the kurtosis parameter β=0.5 and β=1, respectively. Therefore, the kurtosis parameter β can be viewed as measuring the disparity between the power exponential distribution and the normal distribution.
In the simulation study, we consider the predictor variables following the multivariate Student's t distributions with degrees of freedom ν = 2 and 3, the multivariate Cauchy distribution (i.e., multivariate Student's t distributions with degree of freedom ν = 1), and the multivariate power exponential distribution with kurtosis parameters β = 0.5 and 5. We utilize the R packages mvtnorm <cit.> and LaplacesDemon <cit.> to simulate random vectors from the multivariate Student's t and the multivariate power exponential distributions, respectively.
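Both families can also be generated without these packages from their standard stochastic representations; a base-R sketch under the parameterizations above (function names are ours, mean vector taken as 0):

rmvt_base <- function(n, Sigma, df) {
  # X = Z / sqrt(W / df), with Z ~ N_p(0, Sigma) and W ~ chi-squared(df)
  p <- ncol(Sigma)
  Z <- matrix(rnorm(n * p), n, p) %*% chol(Sigma)
  Z / sqrt(rchisq(n, df) / df)
}

rmpe_base <- function(n, Sigma, beta) {
  # X = r * u %*% chol(Sigma), with u uniform on the unit sphere and
  # r^(2*beta)/2 ~ Gamma(p/(2*beta), 1), matching f_PE up to the scale matrix
  p <- ncol(Sigma)
  U <- matrix(rnorm(n * p), n, p)
  U <- U / sqrt(rowSums(U^2))
  r <- (2 * rgamma(n, shape = p / (2 * beta)))^(1 / (2 * beta))
  (r * U) %*% chol(Sigma)
}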
The following four models under non-normal elliptical distributed predictor variables are considered:
* Model [NN1]:
Y=(4+β_1^⊤X)+ (β_2^⊤X+2)·σε^2.
This model is from Example 2 of <cit.>.
* Model [NN2]:
Y=(|4+β_1^⊤X|)^1/2·(|β_2^⊤X+2|)^1/2 +σε.
This model is based on model [NN1] with a slow-growing power function of degree 1/2.
* Model [NN3]:
Y=(|β_1^⊤X|)^1/2 +(|β_2^⊤X·ε|)^1/2 +σε.
This model is also based on model [NN1] with a slow-growing power function of degree 1/2.
* Model [NN4]:
Y=0.4·(β_1^⊤X)+3·sin ( β_2^⊤X/4 )+σε.
This model is motivated by Example 1 of <cit.>.
Following the settings in Section <ref>, we consider dim(X) = 10, σ=0.5, ε∼N(0,1), β_1=(1,0,…,0) and β_2=(0,1,…,0). We set μ = 0 and Σ = Σ_ellp, a diagonal matrix with diagonal elements (1, 6, 11, 16, 21, 26, 31, 36, 41, 46), in the multivariate Student's t and power exponential distributions.
The averages and standard deviations of the trace correlation R for different methods based on 1000 simulations are reported in Tables <ref>–<ref> for models [NN1] – [NN4], respectively.
The results in Tables <ref>–<ref> show that the PSRFR method outperforms other methods in most of the models and settings when the predictor variables follow a non-normal elliptical distribution, especially when the distribution has heavier tails compared to the multivariate normal distribution (i.e., the multivariate Student's t distribution with small degree of freedom ν, and the multivariate power exponential distribution with large kurtosis parameter β).
The PHD and SAVE methods exhibit comparable performance to the PSRFR method when the predictor variables follow the multivariate power exponential distribution with kurtosis parameter β = 0.5 since the multivariate power exponential distribution with small kurtosis parameter behaves similarly to multivariate normal distribution.
§.§ Non-elliptical Distribution for Robust Analysis
In this subsection, we perform a simulation study to investigate whether the PSRFR method can effectively identify Span(B) when the predictor variables do not follow an elliptical distribution. Under non-elliptical distributions, we compare the proposed PSRFR method with the MAVE and OPG methods proposed by <cit.> for identifying the central mean subspace. The MAVE and OPG methods require a differentiable link function but impose no strict distributional assumptions on the predictor variables. The OPG and MAVE methods are implemented in the R package MAVE <cit.>.
Before considering the non-elliptical distribution situations, we compare the OPG and MAVE methods to the PSRFR method under models [NN1] and [NN4] when the predictor variables follow an elliptical distribution. Specifically, in Table <ref>, we present the averages and standard deviations of the trace correlations based on 1000 simulations for the PSRFR, OPG, and MAVE methods when the predictor variables follow a multivariate normal distribution or a multivariate Student's t distribution with degrees of freedom ν = 3, under the settings described in Sections <ref> and <ref>.
The results in Table <ref> show that the PSRFR method performs better than the OPG and MAVE methods when the predictor variables follow an elliptical distribution.
For the non-elliptical distribution situation, the predictor variables X are generated from a mixture of multivariate normal and multivariate uniform distribution in (-3, 3) with mixture probabilities 0.8 and 0.2, respectively, i.e.,
X∼ 0.8N_10(0, Σ_norm)+0.2 U_10(-3,3),
where U_10(-3,3) denotes a 10-dimensional multivariate uniform distribution in (-3, 3), each component of which independently follows a uniform distribution on (-3, 3). The following three models are considered:
* Model [NE1]:
Y=β_1^⊤X/{0.5+(β_2^⊤X+1.5)^2}+σε.
* Model [NE2]:
Y=β_1^⊤X·(β_2^⊤X+1) + σε.
* Model [NE3]:
Y=0.4·(β_1^⊤X)+3·sin ( β_1^⊤XX^⊤β_2/4 )+σε.
Following the settings in Sections <ref> and <ref>, we consider σ=0.5, ε∼N(0,1), β_1=(1,0,…,0), β_2=(0,1,…,0) and
Σ_norm is a diagonal matrix with elements (1, …, 10).
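Sampling from the mixture distribution in Eq. (<ref>) is straightforward in base R (a sketch; the function name is ours):

rmix <- function(n, Sigma) {
  # with probability 0.8 draw from N_p(0, Sigma), with probability 0.2 from U_p(-3, 3)
  p <- ncol(Sigma)
  from_normal <- rbinom(n, 1, 0.8)
  Xn <- matrix(rnorm(n * p), n, p) %*% chol(Sigma)
  Xu <- matrix(runif(n * p, -3, 3), n, p)
  from_normal * Xn + (1 - from_normal) * Xu   # row-wise selection by recycling
}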
The averages and standard deviations of the trace correlations based on 1000 simulations for the PSRFR, OPG, and MAVE methods when the predictor variables follow the distribution in Eq. (<ref>) are presented in Table <ref>. From Table <ref>, the PSRFR method performs about 10% worse than the OPG and MAVE methods in terms of the trace correlation when n = 100, and the differences decrease
to less than 5% when the sample size increases to n = 500.
Although the OPG and MAVE methods perform better than the PSRFR method under the non-elliptical distribution situations, these simulation results demonstrate that the proposed PSRFR method is robust to the underlying distribution of the predictor variables.
The proposed PSRFR method can effectively identify central subspaces even when the distribution of the predictor variables deviates from an elliptical distribution.
§.§ Effect of the dimension of predictor variables
In the simulation studies presented in Sections <ref> and <ref>, the dimension of the predictor variables is p = dim(X) = 10, following the classical works on SDR. In this subsection, we examine the performance of the proposed PSRFR method when the dimension of predictor variables p is larger than 10.
In this simulation study, we consider p=30 and 40 with
β_1=(1,0,0,…,0) and β_2=(0,1,0,…,0) under model [N4] with multivariate normal distributed predictor variables and under model [NN3] with multivariate Cauchy distribution, where the matrix
Σ_norm is a diagonal matrix with elements (1, 1, 1, 2, 2, 2, …,
10, 10, 10) for p = 30 and (1, 1, 1, 1, 2, 2, 2, 2, …, 10, 10, 10, 10) for p = 40, and the matrix Σ_ellp
is a diagonal matrix with elements (1, 1, 1, 6, 6, 6, 11, 11, 11, …,
46, 46, 46) for p = 30 and (1, 1, 1, 1, 6, 6, 6, 6, …,
46, 46, 46, 46) for p = 40.
We compare the performance of the proposed PSRFR method with the SIR and IHT methods when p = 30 and 40. Table <ref> presents the averages and standard deviations of the trace correlation R for PSRFR, SIR, and IHT methods based on 1000 simulations.
From Table <ref>, we observe that the performances of the PSRFR, SIR, and IHT methods deteriorate when the dimension of X increases and the sample size decreases. This observation is consistent with intuition. Moreover, we observe that the PSRFR method still performs reasonably well (R close to or greater than 0.7 in most cases) with the dimension of X being 30 and seems less sensitive to the increase in dimensionality when compared to the SIR and IHT methods.
§.§ Effect of the general basis vectors and different noise levels
In the simulation studies presented in Sections <ref>, <ref> and <ref>, the number of basis vectors is two and the noise level is σ = 0.5. In this subsection, we examine the performance of the proposed PSRFR method with more general basis vectors under different noise levels.
In this simulation study, we consider σ=2 and σ=4 with
β_1=(1/√(2),1/√(2),0,
…,0), β_2=(1/√(2),-1/√(2),0,…,0), β_3=(0,0,1/√(2),1/√(2),0,…,0) and β_4=(0,0,1/√(2),-1/√(2),0,…,0) under the following model with 10-dimensional multivariate normally distributed predictor variables as in Section <ref>.
* Model:
Y=sin(β_1^⊤X+4)+exp(β_2^⊤X )+(β_3^⊤X)^2+|β_4^⊤X| +σε.
We compare the performance of the proposed PSRFR method with the PHD, SIR, SAVE, IHT, OPG and MAVE methods. Table <ref> presents the averages and standard deviations of the trace correlation R based on 1000 simulations.
From Table <ref>, we observe that the more general basis vectors with different noise levels do not have much effect on the performance of the PSRFR method.
In summary, as demonstrated by the simulation results in Sections <ref>–<ref>, the proposed PSRFR method exhibits promising performance, particularly evident when the variances of individual components diverge, and the tails of predictor variable distributions become heavier. Notably, PSRFR maintains its robustness and reliability even in scenarios where predictor variable distributions deviate from the elliptical distribution assumption and for large dimensions of the predictor variables. This versatility renders PSRFR applicable to a broader spectrum of real-world data analyses, enhancing its practical utility.
§ REAL DATA ANALYSIS
The Wine Quality dataset presented in <cit.> is a publicly available data set that contains two sub-datasets: the red wine and white wine data sets.
The response variable is the quality of wine (QL), and the 11 predictor variables are: fixed acidity (FA); volatile acidity (VA); citric acid (CA); residual sugar (RS); chlorides (CL); free sulfur dioxide (FSD); total sulfur dioxide (TSD); density (DS); pH value (PH); sulphates (SP); and alcohol level (AH). <cit.> studied the relative importance of 11 predictor variables for wine quality using the support vector machine approach and pointed out that a regression approach on the two sub-datasets can be used. Here, we consider the regression approach and apply the proposed PSRFR method to obtain the relative importance with the first 1599 and 800 observations in the red and white wine sub-datasets, respectively.
We then compare our results with those obtained by the support vector machine approach.
First, we assess the normality of the 11 predictor variables using the hypothesis testing approach and graphical approach, namely the Shapiro-Wilk test and normal quantile-quantile (Q-Q) plot, after standardizing the data.
Table <ref> presents the Shapiro-Wilk statistics for predictor variables in the red wine and white wine data sets. The normal Q-Q plots are depicted in Figure <ref> and <ref> for the red wine and white wine data sets, respectively. From Table <ref> and the normal Q-Q plots in Figure <ref> and <ref>, except for variables TSD and PH in the white wine data set, all the other variables in both data sets are not likely to follow a normal distribution.
To assess the symmetry of the distributions of these 11 predictor variables in the red wine and white wine data sets, we provide the comparative boxplots in Figure <ref> and <ref>. From Figure <ref> and <ref>, we observe that
the variables in the red wine and white wine data sets are asymmetric and have heavy-tailed characteristics.
Considering that the PSRFR method performs well when the variances of the predictors are significantly different from each other, we diagonalize the data by performing an eigen-decomposition on the sample covariance matrix and multiplying the centered data by the corresponding eigenvectors. Then, we apply the PSRFR method to address the regression problem at hand. Once the PSRFR estimator is obtained, the next step involves determining the dimension of the central subspace. We consider the proportion that each eigenvalue accounts for among all the eigenvalues to determine the dimension of the central space, similar to the way the number of principal components is determined in PCA. The results show that the proportion of the first eigenvalue among all the eigenvalues is 0.9995 in the red wine data set and 0.9997 in the white wine data set, which indicates that the dimension of the central space can be taken as 1.
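In R, this diagonalization and eigenvalue-proportion step can be sketched as follows, using the PSRFR function listed in the Appendix; the variable names and the 0.99 cutoff are our own illustration.
## Diagonalize the data via the eigen-decomposition of the sample covariance
Xc <- scale(X, center = TRUE, scale = FALSE)
V <- eigen(cov(Xc))$vectors
Xd <- Xc %*% V                         ## transformed data with diagonal covariance
## Apply PSRFR and choose the dimension from the eigenvalue proportions
fit <- PSRFR(Xd, Y, r = ncol(Xd))
prop <- fit$values / sum(fit$values)   ## proportions, cf. PCA
d_hat <- which(cumsum(prop) > 0.99)[1]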
Next, our objective is to estimate the basis β̂_1 of the central space. To account for relative importance, we take the absolute value of each component in β̂_1 and arrange them in descending order, from largest to smallest. In the red wine data set, the relative importance in descending order is AH, PH, SP, DS, FSD, TSD, CL, RS, CA, VA, and FA. In the white wine data set, the relative importance in descending order is AH, SP, TSD, DS, CL, FSD, RS, PH, CA, FA, VA. These results align with the findings in <cit.>, in which PH, SP, and AH are also identified as important variables in the red wine data set, while RS and CA were relatively unimportant. In the white wine data set, SP and AH are relatively important, while FA and PH are relatively unimportant.
We observe that both SP and AH are highly important variables in both red wine and white wine data sets. <cit.> provided a physiological explanation about this and suggested that an increase in sulfates may be associated with fermenting nutrition, which plays a crucial role in enhancing the wine aroma. The significance of AH in wine is evident. Furthermore, the importance of pH value (PH) in red wine surpasses that in white wine. Although SP and AH are consistently identified as important variables, it is noteworthy that SP holds the highest importance in <cit.>, and AH emerges as the most important variable in our findings. <cit.> suggests that an increase in alcohol content tends to result in higher-quality wine.
Furthermore, considering that AH holds the highest significance in our results and DS is influenced by the proportion of AH and sugar content, it can be inferred that DS may have a greater importance than initially suggested in the study by <cit.>.
§ CONCLUDING REMARKS
In this paper, we propose a principal square response forward regression (PSRFR) method, a novel approach for dimension reduction tailored for high-dimensional, elliptically distributed data. Drawing inspiration from the Ordinary Least Squares (OLS) method, PSRFR is devised to handle the complexities of such datasets effectively.
The core principle of PSRFR lies in leveraging the amalgamated information from both predictor and response variables, which typically concentrates around a central subspace. Unlike the OLS method, which tends to recover only a single direction, PSRFR aims at capturing a comprehensive estimate of this central subspace. The PSRFR method achieves this by minimizing the distance between data points and the central subspace, thus identifying multiple central subspace directions and surpassing the capabilities of the PHD and IHT methods when the predictor variables follow an elliptical distribution. Moreover, this paper presents a fundamental theorem affirming the efficacy of PSRFR in achieving substantial dimension reduction. Additionally, we provide a simple algorithm for implementing PSRFR. We delve into the asymptotic behavior of the PSRFR estimator, elucidating its convergence rate in high-dimensional scenarios.
Our simulation results underscore the superiority of PSRFR in enhancing estimation accuracy for elliptically distributed data with varying component variances. This improvement is facilitated through a simple data transformation process. Overall, the proposed PSRFR method furnishes invaluable tools for dimension reduction and analysis of high-dimensional, elliptically distributed data, exhibiting resilience even in the face of deviations from the elliptical distribution.
Note that determining the dimension of the central subspace is a critical issue, which we leave for further study.
§.§ Competing interests
The authors declare there are no conflicts of interest.
§.§ Funding
The research is supported by the National Natural Science Foundation of China (No. 12371263).
§ APPENDIX
§.§ Proof of Lemma <ref>
For B=(β_1,…,β_k), without loss of generality, we consider E(X) = 0 (otherwise, we can consider transforming X by X - E(X)).
Case 1: Σ_X=I_p. In this case, we have
E(YX) =E{E(YX|X)}
=E{X·E(Y|X)}
=E{h(β_1^⊤X,⋯,β_k^⊤X)·X}.
Let C be the p× p orthogonal matrix whose first k rows are β_i^⊤, i∈{1,…,k}.
Then, define
C =(β_1,…,β_k,α_k+1,…,α_p)^⊤≡(B, A)^⊤,
and let
W =CX=
(β_1^⊤X,⋯,β_k^⊤X, α_k+1^⊤X,⋯,α_p^⊤X)^⊤
=(w_1,⋯,w_k,w_k+1,⋯,w_p)^⊤.
Hence,
E(YX) =E[h(β_1^⊤X,⋯,β_k^⊤X)·X]
=C^⊤ E[h(β_1^⊤X,…,β_k^⊤X)· CX]
=C^⊤ E[h(w_1,…,w_k)·W].
Note that W = CX also follows an elliptical distribution with mean E(W) = 0 and variance-covariance matrix CI_pC^⊤=I_p, where I_p is the p-dimensional identity matrix. By Theorem 7 in <cit.>, we can obtain
E(w_j| w_1, …,w_k)=0, j∈{k+1,…,p},
hence,
E {h(w_1,…,w_k) · w_j}=E{E ( h(w_1,…,w_k) · w_j| w_1,…, w_k ) }
=E{ h(w_1,…,w_k) ·E(w_j| w_1, …,w_k) }=0, j∈{k+1,…,p}.
Then,
E (YX)=C^⊤E{h(w_1,…,w_k)·W}
=C^⊤[E{h(w_1,…,w_k)· w_1},…,E{h(w_1,…,w_k)· w_k},0,…,0]^⊤
=∑_i=1^kβ_iE{h(w_1,…,w_k)· w_i}
=∑_i=1^k I_pβ_iE{h(w_1,…,w_k)· w_i}=Σ_XBΛ,
where Λ=(λ_1,…,λ_k)^⊤,
λ_i=E{h(w_1,…,w_k)· w_i}.
Case 2: Σ_X=Σ. Let X^*=Σ^-1/2X, then X^* follows an elliptical distribution with mean 0 and variance-covariance matrix I_p. According to Eq. (<ref>), we have
E(Y|X)=E(Y|Σ^1/2X^*)=h(X^*^⊤Σ^1/2β_1,⋯,X^*^⊤Σ^1/2β_k).
Hence,
E(YX)=Σ^1/2E(YX^*)
=Σ^1/2Σ_X^*Σ^1/2BΛ= Σ_XBΛ.
§.§ Proof of Theorem <ref>
Without loss of generality, we consider E(X)=0.
Case 1: Σ_X=I_p. In this case,
E(Y^2|X)=H(β_1^⊤X,…,β_k^⊤X),
then,
E(Y^2XX^⊤) =E{E(Y^2XX^⊤|X) }
=E{XX^⊤E(Y^2|X) }=E{XX^⊤ H(β_1^⊤X,…,β_k^⊤X) }.
Similarly, let C be the orthogonal matrix with the first k rows as β_i^⊤, i∈{1,…,k}, then we define
C=(β_1,…,β_k,α_k+1,…,α_p)^⊤≡(B, A)^⊤,
and hence,
E(Y^2XX^⊤) =E{H(β_1^⊤X, …,β_k^⊤X)·XX^⊤}
=C^⊤E{H(β_1^⊤X,…,β_k^⊤X)·CXX^⊤C^⊤}C
=C^⊤E{H(w_1,…,w_k)·WW^⊤}C.
Note that W=CX also follows an elliptical distribution with mean E(W)=0 and variance-covariance matrix CI_pC^⊤=I_p. Let
W=( w_1,…,w_k,w_k+1,…,w_p)^⊤=(W_(1)^⊤,W_(2)^⊤)^⊤.
By Corollary 8 in <cit.>, we can obtain
E ( W_(2)W_(2)^⊤|W_(1) )
=diag{E ( w_k+1^2|W_(1) ),…,E ( w_p^2|W_(1) )}
=E ( w_k+1^2|W_(1) )I_p-k
because {w_k+1,…,w_p} have the same status. Therefore,
E(Y^2XX^⊤) =C^⊤ E[H(w_1,…,w_k)·WW^⊤]C
=C^⊤ E[ E [ H(w_1,…,w_k)·WW^⊤|W_(1)] ]C
=C^⊤ E [ H(w_1,…,w_k)· E(WW^⊤|W_(1)) ]C
=C^⊤[ Γ_1 0; 0 Γ_2 ]C
=BΓ _1B^⊤ +AΓ _2A^⊤ ,
where
Γ_1=[ E{ H(w_1,…,w_k )w_1^2} ⋯ E{ H(w_1,…,w_k) w_1w_k}; ⋮ ⋱ ⋮; E{ H(w_1,…,w_k) w_kw_1} ⋯ E{ H(w_1,…,w_k) w_k^2} ]_k× k,
Γ_2 =[ E{E (H· w_k+1^2|W_(1) ) } ⋯ 0; ⋮ ⋱ ⋮; 0 ⋯ E{E (H· w_p^2|W_(1) ) } ]_(p-k)× (p-k)
=E{E (H(w_1,…,w_k)· w_k+1^2|W_(1) )}I_p-k
=E{ H(w_1,…,w_k)· w_k+1^2}I_p-k
=E{E(Y^2|X)· w_k+1^2}I_p-k
=E(Y^2· w_k+1^2)I_p-k=E(G)I_p-k.
Next, note that
E{Y^2(β^⊤X)^2}=β^⊤E(Y^2XX^⊤)β=β^⊤BΓ_1B^⊤β yields the eigenvalues of Γ_1,
and E{Y^2(α^⊤X)^2} yields the eigenvalues of Γ_2. Assumption <ref> assures that the eigenvalues of Γ_1 are larger; hence, the first k eigenvectors corresponding to the first k eigenvalues of E(Y^2XX^⊤) form a basis of S_E(Y|X).
Furthermore, E(Y^2XX^⊤) can be rewritten as
E(Y^2XX^⊤) =BΓ _1B^⊤+AΓ _2A^⊤
=BΓ _1B^⊤ +E(G)AA^⊤ +E(G)BB^⊤ -E(G)BB^⊤
=B{Γ _1-E(G)I_k}B^⊤ +E(G)I_p,
and we can obtain
E(Y^2XX^⊤)-E(G)I_p=B{Γ _1-E(G)I_k}B^⊤≡BM.
Because Γ _1-E(G)I_k is a positive definite matrix by Assumption <ref>,
rank(M)=k and rank(BM)=rank(B) according to the results in Section A4.4 of <cit.>, i.e., E(Y^2XX^⊤)-E(G)I_p is contained in the linear subspace spanned by the basis matrix B.
Case 2: Σ_X=Σ. Let X^*=Σ^-1/2X, where X^* follows an elliptical distribution with mean 0 and variance-covariance matrix I_p, then
E(Y^2|X)=E(Y^2|Σ^1/2X^*)=H(X^*^⊤Σ^1/2β_1,⋯,X^*^⊤Σ^1/2β_k),
and
E(Y^2XX^⊤ ) =Σ^1/2E(Y^2X^*X^*^⊤)Σ^1/2
=Σ^1/2Σ^1/2BΓ _1B^⊤Σ^1/2Σ^1/2+Σ^1/2Σ^1/2AΓ _2A^⊤Σ^1/2Σ^1/2
=ΣBΓ _1B^⊤Σ+ΣAΓ _2A^⊤Σ
=Σ_XB{Γ _1-E(G)I_k}B^⊤Σ_X+Σ_XE(G)I_pΣ_X.
We can obtain
E(Y^2XX^⊤ )-Σ_XE(G)I_pΣ_X=Σ_XB{Γ _1-E(G)I_k}B^⊤Σ_X.
Similarly, the eigenvalues of Γ_1 are larger and Γ _1-E(G)I_k is a positive definite matrix under Assumption <ref>. Therefore, the first k eigenvectors corresponding to the first k eigenvalues of E(ZZ^⊤ ) form a basis of Span(B) and
Σ_X^-1E(Y^2XX^⊤ )Σ_X^-1-E(G)I_p is contained in the linear subspace spanned by the basis matrix B.
When X∼N_p(μ,Σ),
without loss of generality,
we can obtain E( W_(2)W_(2)^⊤|W_(1)) = I_p-k by transforming X^*=Σ^-1/2X, then
E [E{H(w_1,…,w_k)· w_k+1^2|W_(1)}]
=E{ H(w_1,…,w_k) E ( w_k+1^2|W_(1) )}=E(Y^2).
Furthermore, let Ỹ^2=Y^2-E(Y^2), then E(Ỹ^2)=0 and
Σ^-1E(Ỹ^2XX^⊤ )Σ^-1= BΓ _1B^⊤≡BM.
This means that the k eigenvectors corresponding to the k non-zero eigenvalues of Σ^-1E(Ỹ^2XX^⊤ )Σ^-1 form a basis of Span(B).
§.§ Proof of Theorem <ref>
By the Law of Large Number, we have
1/n∑_i=1^nY_i(X_i-X̅)=1/n∑_i=1^nY_i(X_i-μ)+Y̅(μ-X̅)Pr⟶E(YX)=Σ_XBΛ.
By Lemma <ref> and S_nPr⟶Σ_X, we have
S_n^-1 1/n∑_i=1^nY_i(X_i-X̅)Pr⟶Σ_X^-1Σ_XBΛ=BΛ.
Let
Z_i=S_n^-1{Y_i(X_i-X̅)},
then,
Ẑ=1/n∑_i=1^nZ_iPr⟶BΛ
with convergence rate of n^1/2, and
𝒵̂=1/n∑_i=1^nZ_iZ_i^⊤Pr⟶E(ZZ^⊤)
with convergence rate n^1/2.
From Theorem <ref>, the first k eigenvectors corresponding to the first k eigenvalues of E(ZZ^⊤ ) are the basis of Span(B). Consequently, the first k eigenvectors corresponding to the first k eigenvalues of 𝒵̂, β̂_1, …,β̂_k, converge to the corresponding rotational basis for Span(B) with convergence rate of n^1/2.
Since the elements of vec(𝒵̂) are moment estimators of the elements of ZZ^⊤, by the central limit theorem, we have
√(n) [ vec(𝒵̂)- vec{E(ZZ^⊤)} ]
converges in distribution to a multivariate normal random vector with mean vector 0 and variance-covariance matrix Var{vec(ZZ^⊤ ) }. Here, we derive the specific form of Var{vec(ZZ^⊤ ) }. Using the techniques in the proof of Theorem <ref> in Appendix A.2, we can obtain the fourth conditional moment of Y given X as
E(Y^4|X)=U(β_1^⊤X,…,β_k^⊤X).
Given E(Y^2XX^⊤ ), the essence of obtaining the variance-covariance matrix is calculating the fourth moment. For notation convenience, we use the formation of the Kronecker product, which contains the fourth moments. Following the definition of W and the result in the proof of Theorem <ref>, we have
Var{vec(ZZ^⊤) }=Var{ Y^2vec(XX^⊤) }
=E{ Y^2vec(XX^⊤)vec(XX^⊤)^⊤ Y^2}
-E{ Y^2vec(XX^⊤)}×E{ Y^2vec(XX^⊤)}^⊤
=E[ E{ Y^4vec(XX^⊤)vec(XX^⊤)^⊤|X}]
-E[ E{ Y^2vec(XX^⊤)|X}]×E[ E{ Y^2vec(XX^⊤)|X}]^⊤
∝E{(XX^⊤)⊗(XX^⊤) U(β_1^⊤X,…,β_k^⊤X) }
-vec{E(Y^2XX^⊤)}×vec{E(Y^2XX^⊤)}^⊤ ,
where ∝ denotes the dimensional inequality and ⊗ denotes the Kronecker product. As the Kronecker product can incorporate all the elements of the variance-covariance matrix of the random matrix ZZ^⊤, it does not affect the convergence of elements even though the dimensions are different.
We can obtain E(Y^2XX^⊤) from the proof of Theorem <ref> in Appendix A.2. Then, we require the following:
E { Y^4(XX^⊤) ⊗(XX^⊤)}
=E{(XX^⊤) ⊗(XX^⊤) U(β_1^⊤X,…,β_k^⊤X) }
=E{(C^⊤WW^⊤C)⊗(C^⊤WW^⊤C) U(β_1^⊤X,…,β_k^⊤X) }
=E[ U(w_1,…,w_k)E{(C^⊤WW^⊤C)⊗(C^⊤WW^⊤C)|W_(1)}]
=E{ U(w_1,…,w_k)(C^⊤⊗C^⊤)E(WW^⊤⊗WW^⊤|W_(1))(C⊗C) }
=(C^⊤⊗C^⊤)E{ U(w_1,…,w_k)E(WW^⊤⊗WW^⊤|W_(1)) }(C⊗C).
As a result, we have E( w_iw_jw_sw_t|W_(1)), where i,j,s,t∈{1,…,p}. Considering Σ_X=_p, the form of the elements in E(WW^⊤⊗WW^⊤|W_(1)) can be expressed as
E ( w_iw_jw_sw_t|W_(1) )={
w_iw_jw_sw_t, i,j,s,t ∈{1,…,k},
w_iw_jE ( w_sw_t|W_(1) ), i,j ∈{1,…,k}, s=t ∈{k+1,…,p},
w_iE ( w_jw_sw_t|W_(1) ), i ∈{1,…,k},
j=s=t ∈{k+1,…,p},
E ( w_iw_jw_sw_t|W_(1) ), i=j=s=t ∈{k+1,…,p},
0, otherwise.
.
Hence, E{ Y^4(XX^⊤) ⊗(XX^⊤)} can be computed, and the same technique can be applied when Σ_X=Σ.
§.§ R code for the proposed PSRFR
PSRFR <- function(X, y, r) {
  n <- nrow(X)         ## Number of observations
  p <- ncol(X)         ## Dimensionality of X
  y <- as.numeric(y)   ## Ensure y is a plain numeric vector of length n
  ## Center X column-wise
  mx <- colMeans(X)
  Xc <- X - matrix(mx, nrow = n, ncol = p, byrow = TRUE)
  ## z_i = y_i * (x_i - mean(x)); vector recycling scales each row by y_i
  z <- Xc * y
  ## Sample covariance of X and the PSRFR kernel matrix
  sigmax <- cov(X)
  invSigmax <- solve(sigmax)
  K <- invSigmax %*% (t(z) %*% z / n) %*% invSigmax
  ## Eigen-decomposition; the leading r eigenvectors estimate Span(B)
  Kv <- eigen(K)
  ## Return a list of eigenvalues and eigenvectors
  list(values = Kv$values, vectors = Kv$vectors[, 1:r])
}
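As a usage illustration (our own example, not part of the original analysis), PSRFR recovers the single direction β_1=(1,0,…,0) from data simulated with a symmetric link:
## Usage example: recover beta_1 = (1, 0, ..., 0) under a symmetric link
set.seed(1)
n <- 500; p <- 10
X <- matrix(rnorm(n * p), n, p) %*% diag(sqrt(1:p))  ## Sigma_X = diag(1:p)
y <- drop((X %*% c(1, rep(0, p - 1)))^2) + 0.5 * rnorm(n)
fit <- PSRFR(X, y, r = 1)
round(fit$vectors, 2)  ## approximately (+/-1, 0, ..., 0)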
|
http://arxiv.org/abs/2409.03649v1 | 20240905160811 | On a combinatorial description of the Gorenstein index for varieties with torus action | [
"Philipp Iber",
"Eva Reinert",
"Milena Wrobel"
] | math.AG | [
"math.AG",
"14M25, 14J45, 52B20, 14L30"
] |
On a combinatorial description of the Gorenstein index for varieties with torus action
Philipp Iber, Eva Reinert, Milena Wrobel
Institut für Mathematik, Universität Oldenburg, 26111 Oldenburg, Germany
Emails: [email protected]
2010 Mathematics Subject Classification: 14M25, 14J45, 52B20, 14L30
§ ABSTRACT
The anticanonical complex is a combinatorial tool that was invented to extend the features of the Fano polytope from toric geometry to wider classes of varieties.
In this note we show that the Gorenstein index of Fano varieties with torus action of complexity one (and even more general of the so-called general arrangement varieties) can be read off its anticanonical complex in terms of lattice distances in full analogy to the toric Fano polytope. As an application we give concrete bounds on the defining data
of almost homogeneous Fano threefolds of Picard number one having a reductive automorphism group with two-dimensional maximal torus depending on their Gorenstein index.
[
[
=====
§ INTRODUCTION
The main objective of this article is to contribute to the development of combinatorial methods for the study of geometric properties of Fano varieties.
The model case is toric geometry. Here we have the well-known one-to-one correspondence between toric Fano varieties X and the so-called Fano polytopes A_X.
These polytopes allow one to describe several algebraic and geometric invariants of the corresponding toric varieties in a purely combinatorial manner. One of these invariants is the Gorenstein index, that is the smallest positive integer ι_X such that ι_X-times the canonical divisor 𝒦_X of X is Cartier. In the toric case, this invariant is encoded in the lattice distances of the facets of the Fano polytope, see <cit.>. This fact has been used by several authors to contribute to the classification of toric Fano varieties of low Gorenstein index; see <cit.>.
The purpose of this note is to generalize this combinatorial Gorenstein criterion
to Fano varieties X with an effective action of an algebraic torus 𝕋 of higher complexity, where the latter means that the difference dim(X) - dim(𝕋) is greater than or equal to one. More precisely we consider Fano general arrangement varieties as introduced in <cit.>. These are varieties
X coming with a torus action of arbitrary complexity c that gives rise to a specific rational quotient X ⇢ℙ^c,
the so-called maximal orbit quotient, whose critical values form a general hyperplane arrangement.
Note that this class comprises i.a. all Fano varieties with torus action of complexity one as well as all toric varieties.
By realizing the Fano general arrangement varieties X as subvarieties of toric varieties Z, we
can replace the toric Fano polytope with a polyhedral complex, the so-called anticanonical complex 𝒜_X, which is supported on the tropical variety of X ⊆ Z.
[Figure: the anticanonical complex 𝒜_X for X = V(T_0^2T_1 + T_2^2 + T_3^3) ⊆ℙ_5,8,9,6.]
The anticanonical complex was introduced in <cit.> for varieties with torus action of complexity one and later generalized, i.a., to the general arrangement case in <cit.>, and has so far been successfully used for the classification of singular Fano varieties; see <cit.>.
Our main result states that for Fano general arrangement varieties the Gorenstein index can be read off the anticanonical complex
in full analogy to the toric Fano polytope case:
Let X ⊆ Z be a Fano general arrangement variety with anticanonical complex 𝒜_X. Then the Gorenstein index ι_X of X equals the least common multiple of the lattice distances of the maximal cells in the boundary of 𝒜_X: ι_X = lcm(d(0, F); F ∈∂𝒜_X).
As an application of our result
we consider the ℚ-factorial rational almost homogeneous Fano varieties of Picard number one with reductive automorphism group having a maximal torus of dimension two that were described in <cit.>. Recall that these varieties are uniquely determined up to isomorphy by their divisor class group graded Cox ring and their anticanonical class, see <ref>. For fixed Gorenstein index we give concrete bounds on these defining data of the varieties, see Propositions <ref>, <ref>, <ref>, <ref> and <ref>.
This allows in particular a computer aided search for varieties of small Gorenstein index.
For illustration we list all ℚ-factorial rational almost homogeneous Fano varieties of Picard number one with reductive automorphism group having a maximal torus of dimension two and Gorenstein index smaller than or equal to three here:
Every three-dimensional ℚ-factorial Fano variety of Picard number one with reductive automorphism group having a maximal torus of dimension two and Gorenstein index ι_X smaller than or equal to three is isomorphic to one of the following varieties X, specified by their Cl(X)-graded Cox ring ℛ(X), a matrix [w_1,…,w_r] of generator degrees and their anticanonical class -𝒦_X as follows:
No. | ℛ(X) | Cl(X) | [w_1,…,w_r] | -𝒦_X | ι_X
1 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4 T_5⟩ | ℤ | [[ 1 1 1 1 1 ]] | [[ 3 ]] | 1
2 | 𝕂[T_1,…,T_5]/⟨T_1 T_2+T_3 T_4+T_5^2⟩ | ℤ×ℤ_3 | [[ 1 1 1 1 1; 1 2 2 1 0 ]] | [[ 3; 0 ]] | 1
3 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4^3⟩ | ℤ | [[ 3 3 3 2 1 ]] | [[ 6 ]] | 1
4 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4^4⟩ | ℤ×ℤ_2 | [[ 2 2 2 1 1; 1 1 1 1 0 ]] | [[ 4; 0 ]] | 1
5 | 𝕂[T_1,…,T_5]/⟨T_1 T_2+T_3 T_4+T_5^3⟩ | ℤ×ℤ_4 | [[ 1 2 2 1 1; 1 3 3 1 0 ]] | [[ 4; 0 ]] | 2
6 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4^2 T_5^2⟩ | ℤ×ℤ_2 | [[ 2 2 2 1 1; 1 1 1 0 0 ]] | [[ 4; 1 ]] | 2
7 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4^2 T_5^2⟩ | ℤ×ℤ_4 | [[ 2 2 2 1 1; 3 3 3 1 0 ]] | [[ 4; 0 ]] | 2
8 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4^3 T_5^5⟩ | ℤ | [[ 4 4 4 1 1 ]] | [[ 6 ]] | 2
9 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4^3 T_5^7⟩ | ℤ | [[ 8 8 8 3 1 ]] | [[ 12 ]] | 2
10 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4 T_5^3⟩ | ℤ | [[ 4 4 4 5 1 ]] | [[ 10 ]] | 2
11 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4 T_5^3⟩ | ℤ×ℤ_2 | [[ 2 2 2 1 1; 1 1 1 0 0 ]] | [[ 4; 1 ]] | 2
12 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4 T_5^5⟩ | ℤ | [[ 8 8 8 1 3 ]] | [[ 12 ]] | 2
13 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4 T_5^7⟩ | ℤ×ℤ_3 | [[ 4 4 4 1 1; 2 2 2 1 0 ]] | [[ 6; 0 ]] | 2
14 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4^6⟩ | ℤ×ℤ_2 | [[ 3 3 3 1 2; 1 1 1 0 1 ]] | [[ 6; 0 ]] | 2
15 | 𝕂[T_1,…,T_5]/⟨T_1 T_2+T_3 T_4+T_5^5⟩ | ℤ×ℤ_3 | [[ 2 3 3 2 1; 1 2 2 1 0 ]] | [[ 6; 0 ]] | 3
16 | 𝕂[T_1,…,T_5]/⟨T_1 T_2+T_3 T_4+T_5^4⟩ | ℤ×ℤ_5 | [[ 1 3 3 1 1; 1 4 4 1 0 ]] | [[ 5; 0 ]] | 3
17 | 𝕂[T_1,…,T_5]/⟨T_1 T_2+T_3 T_4+T_5^2⟩ | ℤ×ℤ_3 | [[ 1 3 3 1 2; 0 1 1 0 2 ]] | [[ 6; 0 ]] | 3
18 | 𝕂[T_1,…,T_5]/⟨T_1 T_2+T_3 T_4+T_5^2⟩ | ℤ×ℤ_3 | [[ 3 1 1 3 2; 2 0 0 2 1 ]] | [[ 6; 0 ]] | 3
19 | 𝕂[T_1,…,T_5]/⟨T_1 T_2+T_3 T_4+T_5^2⟩ | ℤ×ℤ_9 | [[ 1 1 1 1 1; 1 8 8 1 0 ]] | [[ 3; 0 ]] | 3
20 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4^2 T_5^7⟩ | ℤ | [[ 15 15 15 1 4 ]] | [[ 20 ]] | 3
21 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4^2 T_5^8⟩ | ℤ×ℤ_2 | [[ 9 9 9 1 2; 1 1 1 0 1 ]] | [[ 12; 0 ]] | 3
22 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4^2 T_5^10⟩ | ℤ×ℤ_4 | [[ 6 6 6 1 1; 3 3 3 1 0 ]] | [[ 8; 0 ]] | 3
23 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4^3 T_5^9⟩ | ℤ | [[ 6 6 6 1 1 ]] | [[ 8 ]] | 3
24 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4^3 T_5^12⟩ | ℤ | [[ 9 9 9 2 1 ]] | [[ 12 ]] | 3
25 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4^3 T_5^18⟩ | ℤ | [[ 15 15 15 4 1 ]] | [[ 20 ]] | 3
26 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4^3 T_5^3⟩ | ℤ | [[ 3 3 3 1 1 ]] | [[ 5 ]] | 3
27 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4^3 T_5^3⟩ | ℤ | [[ 9 9 9 1 5 ]] | [[ 15 ]] | 3
28 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4^3 T_5^3⟩ | ℤ×ℤ_5 | [[ 3 3 3 1 1; 2 2 2 3 0 ]] | [[ 5; 0 ]] | 3
29 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4^4 T_5^7⟩ | ℤ | [[ 9 9 9 1 2 ]] | [[ 12 ]] | 3
30 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4^4 T_5^8⟩ | ℤ×ℤ_2 | [[ 6 6 6 1 1; 1 1 1 1 0 ]] | [[ 8; 0 ]] | 3
31 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4^4 T_5^10⟩ | ℤ×ℤ_2 | [[ 9 9 9 2 1; 1 1 1 1 0 ]] | [[ 12; 0 ]] | 3
32 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4^5 T_5^7⟩ | ℤ | [[ 6 6 6 1 1 ]] | [[ 8 ]] | 3
33 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4^5 T_5^8⟩ | ℤ | [[ 9 9 9 2 1 ]] | [[ 12 ]] | 3
34 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4^5 T_5^10⟩ | ℤ | [[ 15 15 15 4 1 ]] | [[ 20 ]] | 3
35 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4 T_5^4⟩ | ℤ×ℤ_3 | [[ 3 3 3 2 1; 2 2 2 1 0 ]] | [[ 6; 0 ]] | 3
36 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4 T_5^2⟩ | ℤ | [[ 3 3 3 4 1 ]] | [[ 8 ]] | 3
37 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4 T_5^5⟩ | ℤ | [[ 6 6 6 7 1 ]] | [[ 14 ]] | 3
38 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4^5⟩ | ℤ | [[ 5 5 5 2 3 ]] | [[ 10 ]] | 3
39 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4^8⟩ | ℤ×ℤ_2 | [[ 4 4 4 1 3; 1 1 1 0 1 ]] | [[ 8; 0 ]] | 3
40 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4^9⟩ | ℤ | [[ 9 9 9 2 1 ]] | [[ 12 ]] | 3
41 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4^12⟩ | ℤ×ℤ_2 | [[ 6 6 6 1 1; 1 1 1 1 0 ]] | [[ 8; 0 ]] | 3
42 | 𝕂[T_1,…,T_5]/⟨T_1^2+T_2 T_3+T_4^18⟩ | ℤ×ℤ_2 | [[ 9 9 9 1 2; 1 1 1 0 1 ]] | [[ 12; 0 ]] | 3
§ BACKGROUND ON GENERAL ARRANGEMENT VARIETIES
We assume the reader to be familiar with the foundations of toric geometry; see <cit.> for introductory texts. In this section we recall the necessary facts and notions on general arrangement varieties.
This class of varieties has been introduced in <cit.> and can be obtained by using the constructive description of varieties with torus action provided there: The construction is based on a result of <cit.> relating the Cox ring ℛ(X) := ⊕_[D] ∈Cl(X)Γ(X, 𝒪_X(D))
of a variety X with torus action to that of a suitable rational quotient X Y, the so-called maximal orbit quotient. It provides us with the Cox ring and an associated embedding of X into a toric variety Z_X.
Specializing the procedure to the case that Y is the projective or the affine line, one retrieves the Cox ring based approach to rational varieties with torus action of complexity one developed in <cit.>. The class of general arrangement varieties introduced in <cit.> can then be seen as a controlled step leaving the case of complexity one: These are varieties X with torus action and Y = ℙ^n such that the critical values of the maximal orbit quotient X ⇢ Y form a general hyperplane arrangement.
We briefly recall the construction of graded rings R(A,P) that are defined by a pair of matrices and which turn out to be the Cox rings of general arrangement varieties; compare <cit.>:
Fix integers r ≥ c > 0 and n_0, …, n_r > 0 as well as m ≥ 0. Set n := n_0 + … + n_r. For every i = 0, …, r fix a tuple l_i ∈ℤ_>0^n_i and define a monomial
T_i^l_i := T_i1^l_i1⋯ T_in_i^l_in_i∈𝕂[T_ij,S_k; 0 ≤ i ≤ r, 1 ≤ j ≤ n_i, 1 ≤ k ≤ m].
We will also write 𝕂[T_ij, S_k] for the above polynomial ring.
Let A := (a_0, …, a_r) be a (c+1) × (r+1) matrix over 𝕂 such that any c+1 of its columns a_0, …, a_r are linearly independent. For every t = 1, …, r-c, we obtain a polynomial
g_t := det[ [ a_0 … a_c a_c+t; T_0^l_0 … T_c^l_c T_c+t^l_c+t ] ] ∈𝕂[T_ij,S_k].
In the next step, we construct a grading on the factor ring
𝕂[T_ij, S_k]/⟨ g_1, …, g_r-c⟩.
We build up an integral (r+s) × (n+m) matrix P from an r × (n+m) matrix P_0 built from the tuples of positive integers l_i, where i = 0, …, r, and an s × (n+m) matrix D as follows
P:=
[
[ P_0; D ]]
:= [
[ -l_0 l_1 0 0 … 0; ⋮ ⋮ ⋱ ⋮ ⋮ ⋮; -l_0 0 l_r 0 … 0; ; D ]],
whereby we require the columns of the matrix P to be pairwise different, primitive and generate ℚ^r+s as a vector space.
Now, let e_ij∈ℤ^n and e_k ∈ℤ^m denote the canonical basis vectors and consider the projection
Q: ℤ^n+m→ K := ℤ^n+m / im(P^*)
onto the factor group by the row lattice of P. Then the K-graded 𝕂-algebra associated with (A,P) is defined as
R(A,P) := 𝕂[T_ij,S_k] / ⟨ g_1,…,g_r-c⟩, deg(T_ij) := Q(e_ij), deg(S_k) := Q(e_k).
We note that the rings R(A,P) can be directly read off the matrices A and P. They are integral normal complete intersections.
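For instance, with data that reappears in Example <ref> below: for c = 1 and r = 2 the matrix A = [ -1 1 0; -1 0 1 ] yields the single relation
g_1 = det[ [ -1 1 0; -1 0 1; T_0^l_0 T_1^l_1 T_2^l_2 ] ] = T_0^l_0 + T_1^l_1 + T_2^l_2,
so that R(A,P) = 𝕂[T_ij,S_k]/⟨ T_0^l_0 + T_1^l_1 + T_2^l_2⟩, with the K-grading determined by P.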
From a ring R(A,P) as above, we obtain general arrangement varieties X together with an embedding X ⊆ Z into a toric variety Z via the following construction:
Let R(A,P) be as above. The generators T_ij, S_k of R(A,P) give rise to an embedding
X̅ := Spec R(A,P) ↪Z̅ := 𝕂^n+m.
Fix any fan Σ in ^r+s having the columns of P as its primitive ray generators and denote by Z the toric variety with defining fan Σ.
Consider the linear map P: ℚ^n+m→ℚ^r+s defined by P, set
Σ̂:={σ≼γ; P(σ)∈Σ}, where γ⊆ℚ^n+m denotes the positive orthant, and denote by Ẑ the corresponding toric variety. Then we obtain a commutative diagram

X̅∩Ẑ  ↪  Ẑ
  ↓ p           ↓ p
X(A,P,Σ)  ↪  Z
where p denotes the toric morphism corresponding to the linear map P
and X:=X(A,P,Σ)
is the closure of p(X̅∩𝕋^n+m)
inside Z.
By construction, the variety X is invariant under the subtorus action 𝕋^s⊆𝕋^r+s of the acting torus of Z.
The varieties X := X(A,P,Σ) ⊆ Z
are normal varieties with
dimension, invertible functions, divisor class group and Cox ring given in terms of their defining data by:
dim(X) = s + c,
Γ(X,𝒪^*) = 𝕂^*,
Cl(X) = K,
ℛ(X) = R(A,P).
The torus action of 𝕋^s on X is effective and of complexity c, i.e. the general torus orbit is of codimension c.
Let X:=X(A,P,Σ) ⊆ Z arise from Construction <ref>. Then we call X ⊆ Z an explicit general arrangement variety. Moreover we call any 𝕋-variety that is equivariantly isomorphic to an explicit general arrangement variety a general arrangement variety.
Let R(A,P) be as above and assume that the columns of P generate ℚ^r+s as a cone. Let γ denote the positive orthant ℚ_≥0^n+m. We define a polyhedral cone
Mov(R(A,P)) := ⋂_γ_0≼γ facet Q(γ_0) ⊆ K_ℚ.
Then any projective explicit general arrangement variety is of the form X(A, P, Σ(u)), where Σ(u) is constructed as follows: Let u ∈Mov(R(A,P))^∘ and set
Σ(u) := {P(γ_0^*); γ_0 ≼γ, u ∈ Q(γ_0)^∘}, where γ_0^* := cone(e_i; e_i ∉γ_0).
In particular, up to isomorphy, a projective general arrangement variety X can be regained from its Cl(X)-graded Cox ring ℛ(X) and an ample class u ∈Cl(X)
in the above way. Moreover, if X is Fano, we may choose u = - 𝒦_X.
Let X:=X(A,P,Σ) ⊆ Z be an explicit general arrangement variety. We note that in Construction <ref> we may successively remove all maximal cones σ∈Σ whose corresponding orbit does not intersect X, that is,
X ∩𝕋^r+s· z_σ = ∅,
where z_σ denotes the common limit point for t → 0 of all one-parameter subgroups t ↦ (t^v_1, …, t^v_r+s) of the acting torus 𝕋^r+s on Z with v ∈ℤ^r+s taken from the relative interior σ^∘⊆σ.
We end up with a minimal fan Σ still defining the same general arrangement variety X.
We call the toric variety corresponding to this minimal fan the minimal ambient toric variety of X and denote it with Z_X.
We end this chapter by a close investigation of the structure of the fan Σ of the minimal ambient toric variety Z_X of a general arrangement variety X.
Let us briefly recall the basic notions on tropical varieties.
Let Z be a toric variety with acting torus 𝕋.
For a closed subvariety X⊆ Z intersecting the torus non-trivially, consider the vanishing ideal I(X ∩𝕋) in the Laurent polynomial ring 𝒪(𝕋). For every f ∈ I(X∩𝕋) let |Σ(f)| denote the support of the codimension one skeleton of the normal quasifan of its Newton polytope.
Then the tropical variety trop(X) of X is defined as follows,
see <cit.>:
trop(X) := ⋂_f ∈ I(X ∩𝕋) |Σ(f)| ⊆ℚ^dim(Z).
The following result of Tevelev then gives a criterion on which orbits of the toric variety Z are intersected by the embedded variety X in terms of the tropical variety trop(X), see <cit.>:
Let X⊆ Z be a closed embedding.
Then X intersects the torus orbit 𝕋· z_σ corresponding to the cone σ∈Σ non-trivially if and only if the relative interior σ^∘ intersects the tropical variety trop(X) non-trivially.
Using this criterion, we obtain that the cones occurring in the fan corresponding to the minimal ambient toric variety of an explicit general arrangement variety X ⊆ Z_X
are as follows:
Let X(A, P, Σ)⊆ Z_X be an explicit general arrangement variety of complexity c and denote with Σ_ℙ^r the fan corresponding to the toric variety ℙ^r. Then we have
|trop(X)| = |Σ^≤ c_ℙ^r| ×ℚ^s,
where Σ_ℙ^r^≤ c := {σ∈Σ_ℙ^r; dim(σ) ≤ c}.
We endow trop(X) with the following quasifan structure:
Denote by e_1,…,e_r+s the canonical basis of ℚ^r+s and set e_0 := -∑_i=1^r e_i.
For any subset I⊆{0,…,r} with 0 ≤ |I| ≤ c we set
λ_I := cone(e_i; i∈ I) + lin(e_r+1,…,e_r+s).
Then we have λ_I⊆ trop(X) and these cones define a quasifan structure on trop(X). More precisely we have
trop(X) = Σ^≤ c_ℙ^r×ℚ^s = {σ×ℚ^s; σ∈Σ^≤ c_ℙ^r} = {λ_I; I ⊆{0, …, r}, 0 ≤ |I| ≤ c}.
The cones λ_I with k := |I| ≥ 1 are called the k-leaves of trop(X).
Moreover, we have the lineality space of trop(X):
λ_lin := λ_∅ = ⋂λ_I = lin(e_r+1, …, e_r+s).
Using this quasifan structure on trop(X), we can distinguish between two types of cones that occur in the defining fan Σ of the minimal ambient toric variety Z_X of X:
A cone σ∈Σ is either a leaf cone, that means σ⊆λ_I holds for a leaf λ_I of trop(X), or σ∈Σ is a big cone, that means
σ∩λ_i^∘≠∅
holds for all 1-leaves λ_i of trop(X).
Moreover, we call a big cone elementary big, if for every 0 ≤ i ≤ r there exists precisely one ray ϱ_i of σ with ϱ_i ⊆λ_i.
Let X:=X(A,P, Σ) ⊆ Z_X be an explicit general arrangement variety.
Let σ∈Σ be a cone with σ⊈λ_lin. Then the following statements are equivalent:
* σ is a big cone.
*
We have σ^∘∩λ_lin≠∅.
As each cone in Σ_X is either a big cone or a leaf cone, we only need to show the implication (i) ⇒ (ii).
So let σ be a big cone.
For 0 ≤ i ≤ r we set
J_i := {j ∈{1, …, n_i}; v_ij∈σ} and
J:= {k ∈{1, …, m}; v_k ∈σ}.
As σ is a big cone, none of the sets J_i is empty.
We obtain α_i := ∑_j ∈ J_i l_ij > 0 for all 0 ≤ i ≤ r and conclude
∑_i=0^r α_0/α_i∑_j ∈ J_i v_ij + ∑_k ∈ J v_k ∈σ^∘∩λ_lin.
Let X:=X(A,P, Σ) ⊆ Z_X be an affine or complete explicit general arrangement variety and let σ∈Σ be a maximal big cone, where we mean maximal in Σ with respect to inclusion. Then we have
dim(σ∩λ_lin) = dim(λ_lin).
If X is affine, then Σ consists of precisely one maximal big cone. As by construction the columns of P generate ^r+s as a vector space, we conclude that σ intersects λ_lin in full dimension.
So assume X is complete and let σ∈Σ_X be a maximal big cone. Then due to Lemma <ref>, there exists a point x ∈σ^∘∩λ_lin.
Moreover, as X is complete, we have |trop(X)| ∩ |Σ_X|= |trop(X)|, and therefore
λ_lin = ⋃_τ∈Σ (τ∩λ_lin) = ⋃_τ∈Σ, dim(τ∩λ_lin)=dim(λ_lin) (τ∩λ_lin),
as λ_lin cannot be covered by finitely many polyhedra of smaller dimension.
In particular there exists a cone τ∈Σ with dim(τ∩λ_lin)=dim(λ_lin) and x∈σ^∘∩τ.
As
σ∩τ≼σ
and
σ^∘∩τ≠∅ holds, we infer σ∩τ=σ and thus σ≼τ. Since σ is a maximal cone, we conclude σ=τ and hence dim(σ∩λ_lin)=dim(λ_lin).
§ THE GORENSTEIN INDEX VIA THE ANTICANONICAL COMPLEX
In this section we will describe how to read the Gorenstein index of an explicit general arrangement variety X ⊆ Z_X off its anticanonical complex. We start by shortly recalling the construction of the anticanonical complex for these varieties and some basic facts on lattice distances.
Let X(A, P, Σ) ⊆ Z_X be an explicit general arrangement variety. We consider the coarsest common refinement
Σ' := Σ⊓ trop(X) := {σ∩τ; σ∈Σ, τ∈ trop(X)},
where trop(X) is endowed with the quasifan structure defined in Remark <ref>.
Let φ Z' → Z be the toric morphism arising from the refinement of fans Σ' →Σ and let X' be the proper transform of X under φ. Then Z'→ Z is called a weakly tropical resolution of X and X'⊆ Z' fulfills the following conditions:
* X' ⊆ Z' is again a general arrangement variety.
* The fan Σ' consists of leaf cones.
* For any leaf cone σ∈Σ we have σ∈Σ'.
Using the results of <cit.> we obtain the following description of the anticanonical complex for general arrangement varieties:
Let X ⊆ Z_X be an explicit ℚ-Gorenstein general arrangement variety and let φ: Z' → Z be its weakly tropical resolution.
For 0 ≤ i ≤ r we consider the following torus invariant divisors on Z_X:
D_Z^(i) := ∑_j=1^n_i (r-c)l_ijD_ϱ_ij - ∑_ϱ∈Σ^(1) D_ϱ.
Let σ' ∈Σ' be any cone. Then σ' is a leaf cone and there exists an index 0 ≤ i ≤ r with v_ij∉σ' for all 1 ≤ j ≤ n_i. Let u_σ'∈ M_ℚ be any element with div(χ^u_σ') = D_Z^(i). Then the anticanonical complex of X ⊆ Z is given as
𝒜_X := ⋃_σ' ∈Σ' A_σ', where
A_σ' := σ' ∩{v ∈ N_ℚ; ⟨ u_σ', v⟩≥ -1 }.
The relative interior 𝒜_X^∘ of the anticanonical complex 𝒜_X is the interior of its support with respect to the tropical variety trop(X)
and its boundary is
∂𝒜_X := 𝒜_X ∖𝒜_X^∘,
which we will assume to be endowed with the polyhedral complex structure
inherited from 𝒜_X.
In particular, a cell of the anticanonical complex 𝒜_X lies in its boundary if and only if it does not contain 0.
We consider the Fano explicit general arrangement variety X := V(T_01^2T_02 + T_11^2 + T_21^3) ⊆ℙ_5,8,9,6 =: Z with Cox ring R(A,P) defined by the matrices
A := [ -1 1 0; -1 0 1 ] and
P=[v_01,v_02,v_11,v_21] = [[ -2 -1 2 0; -2 -1 0 3; -1 -2 1 2 ]].
In particular, we have
ℛ(X) = R(A,P) = [T_01,T_02,T_11,T_21] / T_01^2T_02 + T_11^2 + T_21^3
with generator degrees [w_01,w_02,w_11,w_21] = [ 5 8 9 6 ]
and the fan Σ corresponding to the minimal ambient toric variety Z_X has the following three maximal cones: σ_1 := cone(v_01,v_11,v_21), σ_2 := cone(v_02,v_11,v_21) and σ_3 := cone(v_01,v_02).
The vertices of the anticanonical complex can then be calculated from these data using <cit.>. These are v_01, v_02, v_11, v_21 and the points u_1 = (0,0,2) and u_2=(0,0,-1) in the lineality space λ_lin of the tropical variety trop(X).
We draw the anticanonical complex 𝒜_X and its boundary ∂𝒜_X inside the tropical variety (X):
[Figure: the anticanonical complex 𝒜_X (left) and its boundary ∂𝒜_X (right) inside trop(X), with vertices v_01, v_02, v_11, v_21, u_1, u_2 and the boundary cells F_1, …, F_7.]
The maximal cells of ∂𝒜_X are the line segments F_1 = conv(u_1,v_01), F_2 = conv(v_01,v_02), F_3 = conv(u_2,v_02), F_4 = conv(u_1,v_11), F_5 = conv(u_2,v_11), F_6 = conv(u_1,v_21) and F_7 = conv(u_2,v_21).
Now let us turn to lattices distances. A
lattice subspace is an affine subspace A ⊆ℚ^n such that dim(A) = rk(A ∩ℤ^n).
Note that any affine subspace A ⊆ℚ^n that contains an element of ℤ^n is a lattice subspace.
A lattice hyperplane is a lattice subspace of codimension 1.
The lattice distance d(x,A) between a point x ∈ℤ^n and a lattice subspace A ⊆ℚ^n is the number of lattice hyperplanes H in the affine hull aff(A ∪{x}) lying between x and A, i.e.
d(x,A) := |{H ⊆ aff(A ∪{x}); H lattice hyperplane with x ∉ H and H ∩ conv(A ∪{x}) ≠∅}|.
It is well known that the lattice distance between a lattice hyperplane H ⊆ℚ^d and a point x ∈ℤ^d can be calculated as follows: We have
d(x, H) = |⟨ u_H,v⟩ - ⟨ u_H,x⟩|,
where u_H is a primitive normal of H and v is any point on H.
The lattice distance is invariant under unimodular transformations. For a convex set B ⊆ℚ^n such that aff(B) is a lattice subspace, we set
d(x,B) := d(x, aff(B)).
Theorem <ref> is a direct consequence of the following proposition:
Let X ⊆ Z_X be an affine or complete explicit ℚ-Gorenstein general arrangement variety with anticanonical complex 𝒜_X. Then the Gorenstein index ι_X of X equals the least common multiple of the lattice distances of the maximal cells in the boundary of 𝒜_X:
ι_X = lcm(d(0, F); F ∈∂𝒜_X).
[Example <ref> continued]
We calculate the Gorenstein index of the variety X = V(T_01^2T_02 + T_11^2 + T_21^3) ⊆ℙ_5,8,9,6 as described in Example <ref>
using the above Proposition <ref>:
We have
d(0, F_1) = 4, d(0,F_2) = 3, d(0, F_3) = 1, d(0, F_4) = 4, d(0,F_5) = 1, d(0, F_6) = 2, d(0,F_7) = 1,
and obtain
ι_X = lcm(d(0,F_i); i = 1, …, 7) = 12.
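To illustrate where these numbers come from, consider F_6 = conv(u_1, v_21) with u_1 = (0,0,2) and v_21 = (0,3,2): its affine hull is the line {(0,y,2); y ∈ℚ}, every lattice hyperplane 0 ∉ H containing this line has a primitive normal of the form u_H = (a,0,c) with constant value ⟨ u_H, (0,y,2)⟩ = 2c, and minimizing |2c| over c ≠ 0 gives d(0,F_6) = 2 (cf. Lemma <ref> below), in accordance with the list above.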
Let X ⊆ Z_X be a ℚ-Gorenstein general arrangement variety. Then
c_σ := min{m ∈ℤ_>0; there exists u ∈ℚ^r+s with m· u ∈ℤ^r+s and div(χ^u)|_Z_σ = D_Z^(i)|_Z_σ}
does not depend on the choice of i ∈{0, …, r} and
the Gorenstein index ι_X of X equals lcm(c_σ; σ∈Σ).
We recall that the pullback homomorphism Cl(Z_X) →Cl(X) is an isomorphism on the level of divisor class groups as well as on the level of Picard groups, see <cit.>. In particular, as X is ℚ-Gorenstein, each of the (linearly equivalent) divisors D_Z^(i) is ℚ-Cartier on Z_X and their Cartier index equals the Gorenstein index of X.
As Z_X is toric, for each σ∈Σ we have
D_Z^(i)|_Z_σ = div(χ^u)|_Z_σ
for some u ∈ℚ^r+s. Therefore, using Cl(Z_X) ≅Cl^𝕋(Z_X), we conclude that the Cartier index of D_Z^(i) on Z_σ equals c_σ. In particular, c_σ does not depend on the choice of i and the Cartier index of D_Z^(i) on Z equals lcm(c_σ; σ∈Σ) as claimed.
Let H ⊆ℚ^r+s be a lattice hyperplane with 0 ∉ H.
Let e_1, …, e_r+s be the standard basis vectors and set e_0:=-∑ e_i and consider for 0 ≤ i ≤ r the lattice subspaces
H_i := H ∩λ_i, with λ_i := cone(e_i) + lin(e_r+1, …, e_r+s).
If dim(H ∩lin(e_r+1, …, e_r+s)) = s - 1 and dim(H_i) = s holds for all 0 ≤ i ≤ r, then for any subset I ⊆{0, …, r} with |I| = r, we have
d(0,H) = lcm(d(0,H_i); i ∈ I).
We exemplarily prove the case I = {1, …,r}.
Let b ∈ℚ^r+s with ⟨ b,v⟩ = 1 for all v ∈ H and let m ∈ℤ_≥ 1 be the minimal element such that m · b ∈ℤ^r+s. Then m · b is a primitive normal of H and we have d(0,H) = m. Identifying lin(λ_i) with ℚ^1+s via the projection
π_i: ℚ^r+s→ℚ^1+s, (a_1, …, a_r+s) ↦ (a_i, a_r+1, …, a_r+s),
we can regard H_i as a lattice hyperplane in ℚ^1+s, and b^(i) := (b_i, b_r+1, …, b_r+s) fulfills ⟨ b^(i),v⟩ = 1 for all v∈ H_i. In particular, for the minimal m^(i)∈ℤ_≥ 1 with m^(i)· b^(i)∈ℤ^1+s we have d(0, H_i) = m^(i). Due to the structure of the b^(i), we conclude
d(0,H) = m = lcm(m^(i); 1 ≤ i ≤ r) = lcm(d(0,H_i); 1 ≤ i ≤ r).
We will make frequent use of the following straightforward statement about lattice distances of lattice subspaces:
Let 0 ∉ A ⊆ M_ be a lattices subspace. Then for every lattice subspace 0 ∉ A' containing A we have d(0,A)| d(0,A') and in particular d(0,A) ≤ d(0,A'). Moreover, we have
d(0,A) = min{d(0,H); 0∉ H ⊆ M_ lattice hyperplane with A ⊆ H}.
By replacing M_ℚ with lin(A'), it suffices to show the first assertion for lattice hyperplanes.
Applying a suitable unimodular transformation we may furthermore assume M = ℤ^n+m and aff(A ∪{0}) = ℚ^n ⊆ℚ^n+m.
In particular, there exists a unique primitive normal u_A ∈ℤ^n of A with d(0,A) = ⟨ u_A,v⟩ for any v ∈ A. Now let 0 ∉ H ⊆ℚ^n+m be any hyperplane containing A. Then there is a primitive normal of H of the form u_H = (λ· u_A, u) for some u ∈ℤ^m and λ∈ℤ_> 0, where integrality of λ follows from the primitivity of u_A. We conclude
⟨ u_A,v⟩ | ⟨ u_H, v⟩ for any v ∈ A ⊆ H and thus d(0,A) | d(0,H). This shows the first assertion.
Moreover, the hyperplane 0 ∉ H with A ⊆ H and primitive normal (u_A,0, …, 0) ∈ℤ^n+m fulfills d(0,A) = d(0,H) and we obtain the desired equality.
Let σ∈Σ be any cone and let u ∈ℚ^r+s such that div(χ^u)|_Z_σ = D_Z^(i)|_Z_σ holds. In a first step we show that the lattice distance d(0, B_σ^(i)) with
B_σ^(i) := σ∩{v ∈ N_ℚ; ⟨ u, v⟩ = -1}
equals c_σ as defined in Lemma <ref>.
By construction, the hyperplanes H with normal u ∈ℚ^r+s fulfilling div(χ^u)|_Z_σ = D_Z^(i)|_Z_σ and ⟨ u,v⟩ = -1 for all v ∈ H are precisely the hyperplanes containing B_σ^(i).
Moreover, for these hyperplanes H we have d(0,H) = m, where m ∈ℤ_>0 is the minimal integer with m · u ∈ℤ^r+s.
Using Lemma <ref> we conclude d(0,B_σ^(i)) = c_σ as claimed.
To complete the proof, we note that, in the notation of Construction <ref>, the cells of the anticanonical complex 𝒜_X that lie in its boundary are the polyhedra
C_σ' := σ' ∩{v ∈ N_ℚ; ⟨ u_σ', v⟩ = -1}.
In particular, we are left with showing
that
d(B_σ^(i), 0) = lcm(d(C_σ', 0); σ' ∈Σ' with σ' ⊆σ)
holds for every σ' ∈Σ'.
As C_σ'⊆ B_σ^(i) for some 0 ≤ i ≤ r holds for all σ'⊆σ and d(0,B_σ^(i)) does not depend on the choice of i, we obtain ≥ using Lemma <ref>. For the inequality ≤ we distinguish between the two types of cones occurring in Σ. So, let σ∈Σ be a leaf cone. Then σ is not affected by the weakly tropical resolution, that means we have σ∈Σ'.
We conclude d(0,B_σ^(i)) = d(0,C_σ) = lcm(d(C_σ', 0); σ' ∈Σ' with σ' ⊆σ).
Now let σ∈Σ be a big cone. As d(0,B_σ^(i)) does not depend on the choice of i,
using Lemma <ref> it suffices to prove
d(0,B_σ^(i)) = lcm(d(0, C_σ(j)); σ(j) := λ_j ∩σ for j ∈{0, …, r} with j ≠ i).
for a maximal big cone σ.
In this situation, by construction of the anticanonical complex, see <ref>, we have C_σ(j) = B_σ^(i)∩λ_j.
Using Proposition <ref>, we obtain dim(B_σ^(i)∩λ_lin) = s-1 and as σ is a big cone we have dim(C_σ(j)) = dim(B_σ^(i)∩λ_j) = s. In particular we can apply Lemma <ref> which proves the claim.
§ APPLICATIONS
In this section we apply our results to almost homogeneous Fano varieties X, where almost homogeneous means that the automorphism group of X has an open orbit in X. On the basis of the classification of all ℚ-factorial rational almost homogeneous Fano varieties with reductive automorphism group having a maximal torus of dimension two obtained in <cit.>, we give concrete bounds on the defining data depending on the Gorenstein index, see Propositions <ref>, <ref>, <ref>, <ref> and <ref>. This enables us to filter the varieties for those of small Picard number, see Corollary <ref> for the cases of Gorenstein index one, two and three.
Any almost homogeneous ℚ-factorial Fano threefold of Picard number one with reductive automorphism group having a maximal torus of dimension two is either the variety No. <ref> from Corollary <ref>, which is of Gorenstein index one, or arises up to isomorphy from one of the Settings <ref>, <ref>, <ref>, <ref> and <ref>, where we list the defining matrices A and P, the fan Σ of the minimal ambient toric variety and the vertices of the anticanonical complex:
We have A := [ -1 1 0; -1 0 1 ] and
P=[v_01,v_02, v_11, v_12, v_21] =
[ -1 -1 1 1 0; -1 -1 0 0 l_21; -1 0 0 1 0; 0 0 0 d_12 d_21 ]
where l_21 > 1, d_12 > 2 and -d_21/(d_12-1) < l_21 < -d_21, and the maximal cones of the fan Σ corresponding to the minimal ambient toric variety are given as
σ_1 := cone(v_01,v_02,v_11,v_21), σ_2 := cone(v_01,v_02,v_12,v_21), σ_3 := cone(v_01,v_11,v_12,v_21), σ_4 := cone(v_02,v_11,v_12,v_21),
each of these is a big cone. Moreover, Σ contains the four elementary big cones, σ_1∩σ_3, σ_1∩σ_4, σ_2∩σ_3 and σ_2∩σ_4.
The vertices of the anticanonical complex can then be calculated from these data using <cit.>. These are given as the columns of P together with the following points in the lineality space of the tropical variety trop(X):
v_σ_1∩σ_3' = (0, 0, -l_21/(1+l_21), d_21/(1+l_21)), v_σ_1∩σ_4' = (0, 0, 0, d_21/(1+l_21)),
v_σ_2∩σ_3' = (0, 0, 0, (d_12 l_21+d_21)/(1+l_21)), v_σ_2∩σ_4' = (0, 0, l_21/(1+l_21), (d_12 l_21+d_21)/(1+l_21))
Let X be a Fano variety arising from Setting <ref> and denote by ι_X its Gorenstein index. Then
we have 2<d_12≤ 3ι_X and -ι_X ≤ k< 0 such that
(k d_12+ι_X ) l_21+ι_X |ι_X k^2 d_12 and k d_21 = ι_X (l_21+1).
In particular, for fixed Gorenstein index there are finitely many varieties arising via this setting.
Due to the structure of the defining fan Σ of Z_X we obtain that (v_21,v_σ_2∩σ_3',v_σ_2∩σ_4',0) is a cell in its anticanonical complex 𝒜_X, and using Proposition <ref> we obtain
d(aff(v_21, v_σ_2∩σ_3',v_σ_2∩σ_4'),0)
= lcm((d_12 l_21+d_21)/(d_12 l_21+d_21, 1+l_21), (d_12 l_21+d_21)/(d_12 l_21+d_21, d_12-d_21)) |ι_X.
In particular, this implies d_12 l_21+d_21|ι_X (1+l_21) and thus
d_12 l_21+d_21≤ι_X (1+l_21).
Similarly, since (v_21,v_σ_1∩σ_3',v_σ_1∩σ_4',0)∈𝒜_X, we see that
d(aff(v_21,v_σ_1∩σ_3',v_σ_1∩σ_4'),0) = d_21/(d_21,l_21+1)|ι_X.
In particular, there exists some k∈ such that
k d_21 = ι_X (l_21+1).
Note that because of l_21 < -d_21, we have -ι_X ≤ k < 0. Inserting this into (<ref>) yields
d_12≤ (ι_X (1+l_21) - d_21)/l_21≤ 2ι_X (1+l_21)/l_21≤ 3ι_X.
We notice that -d_21/(d_12-1) < l_21 and the identity (<ref>) ensure that -kd_12≠ι_X: If otherwise -kd_12 = ι_X, then -d_21/(d_12-1) = -ι_X (l_21+1)/(k (d_12-1)) = d_12/(d_12-1)·(l_21+1) > l_21.
Once again, we consider d_12 l_21+d_21|ι_X (1+l_21). Using (<ref>) we infer
(k d_12+ι_X ) l_21+ι_X | k ι_X l_21+k ι_X
Since (k d_12+ι_X )≠ 0 as seen above, we have
k ι_X l_21+k ι_X | (k ι_X l_21+k ι_X)(k d_12+ι_X ).
Therefore,
(k d_12+ι_X ) l_21+ι_X | (k ι_X l_21+k ι_X)(k d_12+ι_X ) - k ι_X ((k d_12+ι_X ) l_21+ι_X)
= ι_X k^2 d_12.
Thus, for fixed ι_X there are finitely many possibilities for d_12 and k. For each of these there are only finitely many possibilities for l_21 and thus also for d_21 by Equation (<ref>).
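These bounds make the computer aided search mentioned in the introduction concrete. The following R sketch (entirely our own illustration; the resulting tuples still have to be filtered by the remaining conditions of Setting <ref> and the Fano property) enumerates the candidates (d_12, k, l_21, d_21) for a prescribed Gorenstein index ι_X:
## Enumerate tuples (d12, k, l21, d21) satisfying the divisibility and
## range conditions of the proposition for Gorenstein index iota
candidates <- function(iota) {
  out <- list()
  for (d12 in 3:(3 * iota)) for (k in (-iota):(-1)) {
    a <- k * d12 + iota
    if (a == 0) next                      ## excluded in the proof
    D <- abs(iota * k^2 * d12)
    for (dv0 in (1:D)[D %% (1:D) == 0])   ## positive divisors of D
      for (dv in c(dv0, -dv0)) {
        if ((dv - iota) %% a != 0) next   ## need a*l21 + iota == dv
        l21 <- (dv - iota) %/% a
        if (l21 <= 1) next
        if ((iota * (l21 + 1)) %% k != 0) next
        d21 <- (iota * (l21 + 1)) %/% k
        if (-d21 / (d12 - 1) < l21 && l21 < -d21)
          out[[length(out) + 1]] <- c(d12 = d12, k = k, l21 = l21, d21 = d21)
      }
  }
  unique(do.call(rbind, out))
}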
We have A := [ -1 1 0; -1 0 1 ] and
P=[v_01, v_11, v_12, v_21, v_22] =
[ -2 1 1 0 0; -2 0 0 l_21 l_22; -1 0 1 0 0; d_01 0 0 d_21 d_22 ]
where l_21, l_22 > 1, 2d_22 > -d_01l_22 and -2d_21 > d_01l_21, and the maximal cones of the fan Σ corresponding to the minimal ambient toric variety are given as
σ_1 := cone(v_01,v_11,v_12,v_21), σ_2 := cone(v_01,v_11,v_12,v_22), σ_3 := cone(v_01,v_11,v_21,v_22), σ_4 := cone(v_01,v_12,v_21,v_22),
each of these is a big cone. Moreover, Σ contains the four elementary big cones, σ_1∩σ_3, σ_1∩σ_4, σ_2∩σ_3 and σ_2∩σ_4.
The vertices of the anticanonical complex can then be calculated from these data using <cit.>. These are given as the columns of P together with the following points in the lineality space of the tropical variety trop(X):
v_σ_1∩σ_3' = (0, 0, -l_21/(2+l_21), (d_01 l_21+2 d_21)/(2+l_21)), v_σ_1∩σ_4' = (0, 0, l_21/(2+l_21), (d_01 l_21+2 d_21)/(2+l_21)),
v_σ_2∩σ_3' = (0, 0, -l_22/(2+l_22), (d_01 l_22+2 d_22)/(2+l_22)), v_σ_2∩σ_4' = (0, 0, l_22/(2+l_22), (d_01 l_22+2 d_22)/(2+l_22))
Let X be a Fano variety arising from Setting <ref> and denote by ι_X its Gorenstein index. Then we end up in one of the following cases:
* d_01=0, l_21=l_22|ι_X, 2|ι_X, d_21| (2+l_21)ι_X/2 and d_22| (2+l_22)ι_X/2.
* d_01=0, l_21>l_22, 2|ι_X, 1<l_22<ι_X, 0<d_22<ι_X and 1≤ k <ι_X such that
d_21/(d_21,d_22)|ι_X/2+k and k(l_21 d_22-d_21 l_22)=ι_X(d_22-d_21),
* d_01=-1, 1<l_21<4ι_X, 0<s, s|ι_X(l_21+2), 0<k≤ι_X and 0<t such that
t | 2ι_X^2s+2ksι_X and k(tl_21+sl_22)=2ι_X(t+s),
where d_21=(l_21-s)/2 and d_22=(l_22+t)/2; or the same with (s,l_21) and (t,l_22) interchanged.
In particular, for fixed Gorenstein index there are finitely many varieties arising via this setting.
By suitably subtracting the first row from the last one, we can achieve d_01∈{0,-1}. We start with d_01=0. In this case the conditions change to l_21, l_22 > 1, d_22 > 0 and d_21 < 0. Without loss of generality, let l_21≥ l_22, which can be arranged by admissible operations.
Due to the structure of the defining fan Σ of Z_X we obtain that (v_21, v_22,v_σ_1∩σ_4',v_σ_2∩σ_4',0) is a cell in 𝒜_X, and using Proposition <ref> we obtain
d(aff(v_21, v_22,v_σ_1∩σ_4',v_σ_2∩σ_4'),0) = (l_22 d_21-d_22 l_21)/(l_22 d_21-d_22 l_21, d_21-d_22) = l_21 = l_22 |ι_X.
Hence, l_21 and l_22 are bounded. Using again the structure of the defining fan Σ of Z_X we obtain that (v_01,v_σ_1∩σ_3',v_σ_1∩σ_4',0) and (v_01,v_σ_2∩σ_3',v_σ_2∩σ_4',0) are cells of 𝒜_X, and with Proposition <ref> we obtain
d(aff(v_01,v_σ_1∩σ_3',v_σ_1∩σ_4'),0) = (2, 2d_21/(2d_21, 2+l_21)) |ι_X,
d(aff(v_01,v_σ_2∩σ_3',v_σ_2∩σ_4'),0) = (2, 2d_22/(2d_22, 2+l_22)) |ι_X.
In particular, using 2|ι_X, this leads to the desired d_21| (2+l_21)ι_X/2 and d_22| (2+l_22)ι_X/2.
Secondly, let l_21>l_22. As above, we obtain
d(aff(v_21, v_22,v_σ_1∩σ_4',v_σ_2∩σ_4'),0)
= ((l_21 d_22-d_21 l_22)/(l_21 d_22-d_21 l_22, l_21-l_22), (l_21 d_22-d_21 l_22)/(l_21 d_22-d_21 l_22,d_22-d_21)) |ι_X.
In particular, this implies l_21 d_22-d_21 l_22|ι_X(l_21-l_22) and l_21 d_22-d_21 l_22|ι_X(d_22-d_21). Thus we have
ι_X(l_21-l_22)/(l_21 d_22-d_21 l_22)≥ 1 and ι_X(d_22-d_21)/(l_21 d_22-d_21 l_22)≥ 1,
which yields
(d_22-d_21)l_22≤ (ι_X-d_22)(l_21-l_22) and d_22 (l_21-l_22) ≤ (ι_X-l_22)(d_22-d_21).
Since (d_22-d_21)l_22 >0 and d_22 (l_21-l_22)>0, we infer d_22<ι_X and l_22<ι_X by using l_21-l_22>0 and d_22-d_21>0.
Once again, we consider l_21 d_22-d_21 l_22|ι_X(d_22-d_21). In particular, there exists some k∈ℤ with k≥ 1 such that
k(l_21 d_22-d_21 l_22)=ι_X(d_22-d_21).
Note that because of l_22, l_21>1, we have l_21 d_22-d_21 l_22>d_22-d_21 and thus 1≤ k <ι_X. Using (<ref>) we obtain
(ι_X-kl_22)d_21=(ι_X-kl_21)d_22
and thus d_21| (ι_X-kl_21)d_22. Due to the structure of the defining fan Σ of Z_X we obtain that (v_01,v_σ_1∩σ_3',v_σ_1∩σ_4',0) is a cell in 𝒜_X, and using Proposition <ref> we obtain
d(aff(v_01,v_σ_1∩σ_3',v_σ_1∩σ_4'),0) = (2, 2d_21/(2d_21, 2+l_21)) |ι_X.
In particular, this implies 2d_21|ι_X(2+l_21) and 2|ι_X. Thus, we have d_21|ι_X+(ι_X/2)l_21. For d̃_21:=d_21/(d_21,d_22) we infer d̃_21|ι_X+(ι_X/2)l_21 and d̃_21|ι_X -kl_21, and thus
d̃_21|ι_X+(ι_X/2)l_21-(ι_X -kl_21)=(ι_X/2+k)l_21.
Since d_21 and l_21 are coprime, we obtain d̃_21|ι_X/2+k.
Thus, for fixed ι_X there are finitely many possibilities for d_22,l_22, d̃_21 and k. For each of these there are only finitely many possibilities for d_21 and thus also for l_21 by (<ref>).
We continue with d_01=-1. In this case the conditions change to l_21, l_22 > 1, -l_22+2d_22> 0 and l_21-2d_21 > 0. This yields l_21d_22-l_22d_21>0. We set
s:= l_21-2d_21∈ℤ_≥ 1 and t:= -l_22+2d_22∈ℤ_≥ 1.
Then we have
2(l_21d_22-l_22d_21)=l_21(l_22+t)-l_22(l_21-s)=tl_21+sl_22.
Due to the structure of the defining fan Σ of Z_X we obtain that (v_01,v_σ_1∩σ_3',v_σ_2∩σ_3',0) is a cell in 𝒜_X, and using Proposition <ref> we obtain
d(aff (v_01,v_σ_1∩σ_3',v_σ_2∩σ_3'),0)
= (d_22-d_21/(l_21d_22-l_22d_21,d_22-d_21),l_21-l_22/(l_21d_22-l_22d_21,l_21-l_22)) |ι_X.
This implies l_21d_22-l_22d_21|ι_X(l_21-l_22) and l_21d_22-l_22d_21|ι_X(d_22-d_21), and thus
l_21d_22-l_22d_21|ι_X((l_21-l_22)+2(d_22-d_21))=ι_X(s+t).
In particular, this yields
tl_21+sl_22=2(l_21d_22-l_22d_21) | 2ι_X(t+s),
and because of l_21d_22-l_22d_21>0 and t+s>0 we obtain (tl_21+sl_22)/(t+s)≤ 2ι_X.
We first consider t≥ s. Then we have
l_21/2< (tl_21+sl_22)/(t+s)≤ 2ι_X,
and this yields l_21<4ι_X.
Due to the structure of the defining fan Σ of Z_X we obtain that
(v_11,v_12, v_σ_1∩σ_3',v_σ_1∩σ_4',0) is a cell in 𝒜_X, and using Proposition <ref> we obtain
d(aff(v_11,v_12,v_σ_1∩σ_3',v_σ_1∩σ_4'),0) = (s/(s,l_21+2)) |ι_X.
In particular, this implies s|ι_X(l_21+2), and l_21<4ι_X yields s< 4ι_X^2+2ι_X.
Similarly, since (v_11,v_12, v_σ_2∩σ_3',v_σ_2∩σ_4',0) ∈𝒜_X , we see that
d(aff(v_11,v_12,v_σ_2∩σ_3',v_σ_2∩σ_4'),0) = (t/(t,l_22+2)) |ι_X.
In particular, this implies t|ι_X(l_22+2). Furthermore, due to (<ref>) there exists some k∈ℤ such that
k(tl_21+sl_22)=2ι_X(t+s).
Note that because of l_21, l_22>1, we have tl_21+sl_22≥ 2(s+t), and thus 0<k≤ι_X. Using (<ref>) yields
t(kl_21-2ι_X)=2ι_X s-ksl_22,
so we get t| 2ι_X s-ksl_22. Using t|ι_X(l_22+2) we obtain
t|ι_X(2ι_X s-ksl_22) +ksι_X(l_22+2)=2ι_X^2s+2ksι_X.
Thus, for fixed ι_X there are finitely many possibilities for l_21, s,t and k. For each of these there are only finitely many possibilities for l_22 due to (<ref>).
For the case where s >t, one follows the same arguments as above with (s,l_21) and (t,l_22) interchanged.
In both cases, d_21 and d_22 are obtained by (<ref>).
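The bookkeeping with s = l_21-2d_21 and t = -l_22+2d_22 is easy to verify symbolically; a throwaway sympy sketch (purely illustrative, not part of the proof):

from sympy import symbols, simplify

l21, l22, s, t = symbols('l21 l22 s t')
d21 = (l21 - s) / 2          # s = l_21 - 2 d_21
d22 = (l22 + t) / 2          # t = -l_22 + 2 d_22
# identity (<ref>): 2 (l_21 d_22 - l_22 d_21) = t l_21 + s l_22
assert simplify(2 * (l21 * d22 - l22 * d21) - (t * l21 + s * l22)) == 0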
We have A := [ -1 1 0; -1 0 1 ] and
P=[v_01, v_11, v_12, v_21, v_22] =
[ -2 1 1 0 0; -2 0 0 1 l_22; -1 0 1 0 0; d_01 0 0 d_21 d_22 ]
where l_22 > 1, d_22 > d_21l_22 + l_22, 2d_22 > -d_01l_22, -2d_21> d_01 and the maximal cones of the fan Σ corresponding to the minimal ambient toric variety are given as
[ σ_1 := (v_01,v_11,v_12,v_21), σ_2 := (v_01,v_11,v_12,v_22),; σ_3 := (v_01,v_11,v_21,v_22), σ_4 := (v_01,v_12,v_21,v_22), ]
each of these is a big cone. Moreover, Σ contains the four elementary big cones, σ_1∩σ_3, σ_1∩σ_4, σ_2∩σ_3 and σ_2∩σ_4.
The vertices of the anticanonical complex can then be calculated from these data using <cit.>. These are given as the columns of P together with the following points in the lineality space of the tropical variety trop(X):
v_σ_1∩σ_3' = (0, 0, -1/3, (d_01+2 d_21)/3), v_σ_1∩σ_4' = (0, 0, 1/3, (d_01+2 d_21)/3),
v_σ_2∩σ_3' = (0, 0, -l_22/(2+l_22), (d_01 l_22+2 d_22)/(2+l_22)), v_σ_2∩σ_4' = (0, 0, l_22/(2+l_22), (d_01 l_22+2 d_22)/(2+l_22))
Let X be a Fano variety arising from Setting <ref> and denote by ι_X its Gorenstein index. Then we have d_21=0, -3ι_X ≤ d_01<0, -3ι_X ≤ k_01<0 and 0 < k_22<ι_X such that
ι_X( 3/k_01+2/k_22)l_22-2ι_X/k_22| 6 ι_X(k_22+k_01) and d_22 k_22 = ι_X (l_22-1).
In particular, for fixed Gorenstein index there are finitely many varieties arising via this setting.
By subtracting d_21 times the second row from the last one, we can reach d_21=0. The conditions change to l_22 > 1, d_22 > l_22, 2d_22 > -d_01l_22 and 0> d_01.
Due to the structure of the defining fan Σ of Z_X we obtain that (v_01,v_σ_1∩σ_3',v_σ_1∩σ_4',0) is a cell in 𝒜_X, and using Proposition <ref> we obtain
d(aff(v_01, v_σ_1∩σ_3',v_σ_1∩σ_4'),0)= d_01/(d_01,3)|ι_X.
In particular, this implies d_01| 3ι_X and thus there exists some k_01∈ℤ with
d_01 k_01=3ι_X.
Note that because of -3ι_X ≤ d_01<0, we have -3ι_X ≤ k_01 <0. Similarly, since (v_21,v_22,v_σ_1∩σ_4',v_σ_2∩σ_4',0)∈𝒜_X, we see that
d(aff(v_21,v_22, v_σ_1∩σ_4',v_σ_2∩σ_4'),0) = d_22/(l_22-1,d_22)|ι_X.
In particular, there exists some k_22∈ℤ such that
d_22 k_22 = ι_X (l_22-1).
Note that because of 1<l_22 < d_22, we have 0 < k_22 < ι_X. Similarly, since (v_01,v_σ_2∩σ_3',v_σ_2∩σ_4',0)∈𝒜_X, we see that
d(aff(v_01, v_σ_2∩σ_3',v_σ_2∩σ_4'),0)
= (d_01 l_22+2 d_22/(-d_01+d_22,d_01 l_22+2 d_22), d_01 l_22+2 d_22/(2+l_22,d_01 l_22+2 d_22)) |ι_X.
In particular, this implies d_01 l_22+2 d_22|ι_X(2+l_22). Using (<ref>) and (<ref>) we obtain
d_01 l_22+2 d_22 = ι_X(3/k_01+2/k_22)l_22 - 2ι_X/k_22 =: b, and b | ι_X(2+l_22).
If 3/k_01+2/k_22=0, then we would get
d_01 l_22+2 d_22= -2ι_X /k_22<0,
which contradicts d_01 l_22+2 d_22>0. Thus we have
0 ≠ 3/k_01+2/k_22 = (3k_22+2k_01)/(k_22 k_01).
Once again, we consider d_01 l_22+2 d_22|ι_X(2+l_22). By using (<ref>) we infer
b | ι_X l_22+2ι_X | (3k_22+2k_01)(ι_X l_22+2ι_X).
This implies
b | (3k_22+2k_01)(ι_X l_22+2ι_X)-k_22 k_01 b=6 ι_X(k_22+k_01).
If k_22=-k_01, then we would get
d_01 l_22+2 d_22 = ι_X(3/k_01-2/k_01)l_22+2ι_X/k_01 = ι_X(l_22+2)/k_01<0,
which contradicts d_01 l_22+2 d_22>0.
Thus, for fixed ι_X there are finitely many possibilities for d_01,k_01 and k_22. For each of these there are only finitely many possibilities for l_22 and thus also for d_22 by (<ref>) and (<ref>).
We have A := [ -1 1 0; -1 0 1 ] and
P=[v_01, v_11, v_12, v_21, v_22] =
[ -2 1 1 0 0; -2 0 0 1 l_22; -1 0 1 0 0; d_01 0 0 d_21 d_22 ]
with l_22 > 1, 2d_22 > -d_01l_22, 1 - 2d_21 > d_01.
There are at least three maximal cones in the fan Σ corresponding to the minimal ambient toric variety, which are given as
[ σ_1 := (v_01,v_11,v_12,v_22), σ_2 := (v_01,v_11,v_21,v_22),; σ_3 := (v_01,v_12,v_21,v_22). ]
Each of these is a big cone. If 2d_21+d_01≠ 0 holds, then there exists a fourth maximal cone of Σ that is a big cone, namely σ_4 := (v_01,v_11,v_12,v_21).
Moreover, Σ contains the four elementary big cones, σ_1∩σ_2, σ_1∩σ_3,τ_1 ≼σ_2, τ_2 ≼σ_3.
The vertices of the anticanonical complex can then be calculated from these data using <cit.>. These are given as the columns of P together with the following points in the lineality space of the tropical variety trop(X):
v_τ_1' = (0, 0, -1/3, (d_01+2 d_21)/3), v_τ_2' = (0, 0, 1/3, (d_01+2 d_21)/3),
v_σ_1∩σ_2' = (0, 0, -l_22/(2+l_22), (d_01 l_22+2 d_22)/(2+l_22)), v_σ_1∩σ_3' = (0, 0, l_22/(2+l_22), (d_01 l_22+2 d_22)/(2+l_22))
Let X be a Fano variety arising from Setting <ref> and denote by ι_X its Gorenstein index. Then we have
d_21=0, -2ι_X ≤ d_01≤ 0 and 0<k<2ι_X such that
((d_01 k+2ι_X) l_22 - 2ι_X)/k | 2ι_X(d_01k+3ι_X) and d_22 k=ι_X (l_22-1).
In particular, for fixed Gorenstein index there are finitely many varieties arising via this setting.
By subtracting d_21 times the second row from the last one, we can reach d_21=0. The conditions change to l_22 > 1, 2d_22 > -d_01l_22 and 1 > d_01. Due to the structure of the defining fan Σ of Z_X we obtain that
(v_21,v_22,v_τ_1',v_σ_1 ∩σ_2',0) is a cell in 𝒜_X, and using Proposition <ref> we obtain
d(aff(v_21,v_22, v_τ_1',v_σ_1∩σ_2'),0) = d_22/(l_22-1,d_22)|ι_X.
In particular, this implies d_22|ι_X(l_22-1). Because of d_22>0 and l_22>1, we have
d_22/(l_22-1)<ι_X.
Using 2d_22>-d_01l_22 we infer
(-d_01/2)· l_22/(l_22-1)<ι_X,
which implies -2ι_X ≤ d_01≤ 0. If d_01=0, then the last coordinates of v_τ_1', v_τ_2', v_σ_1∩σ_2' and v_σ_1∩σ_3' would all be non-negative, which contradicts 0 ∈ |𝒜_X|^∘. Thus we have d_01<0 and d_22>l_22/2.
Once again, we consider d_22|ι_X(l_22-1). Thus there exists k ∈ with
d_22 k=ι_X (l_22-1) ⇔ d_22=ι_X(l_22-1)/k.
Note that because of d_22>l_22/2 we have 0<k<2ι_X. Since (v_11,v_12,v_σ_1 ∩σ_2',v_σ_1 ∩σ_3',0) is a cell in 𝒜_X, and using Proposition <ref> we obtain
d(aff(v_11,v_12, v_σ_1∩σ_2',v_σ_1∩σ_3'),0) = d_01 l_22+2 d_22/(2+l_22,d_01 l_22+2 d_22)|ι_X.
In particular, this implies d_01 l_22+2 d_22|ι_X (2+l_22). Inserting (<ref>) into this yields
d_01 l_22+2 d_22 = ((d_01 k+2ι_X) l_22 - 2ι_X)/k =: b, and b | ι_X(2+l_22).
If d_01k+2ι_X=0, then we get 2ι_X=-d_01k. With -2ι_X ≤ d_01<0 and 0<k<2ι_X we infer 2≤ -d_01 and k≤ι_X. Moreover, inserting 2ι_X=-d_01k into (<ref>) gives 2d_22 = 2ι_X(l_22-1)/k = -d_01(l_22-1) < -d_01l_22, which contradicts 2d_22 > -d_01l_22; thus we have d_01k+2ι_X≠ 0. Using this we get
b | 2ι_X+ι_X l_22| (d_01k+2ι_X)(2ι_X+ι_X l_22).
This implies
b | (d_01k+2ι_X)(2ι_X+ι_X l_22)-kι_X b=2ι_X(d_01k+3ι_X).
If d_01=-3ι_X/k, then we would get
d_01 l_22+2 d_22 = ((-3ι_X+2ι_X)l_22 - 2ι_X)/k = -ι_X(l_22+2)/k<0,
which contradicts d_01 l_22+2 d_22>0.
Thus, for fixed ι_X there are finitely many possibilities for d_01 and k. For each of these there are only finitely many possibilities for l_22 and thus also for d_22 by (<ref>).
We have A := [ -1 1 0; -1 0 1 ] and
P=[v_01,v_11,v_12,v_21,v_1]=
[ -2 1 1 0 0; -2 0 0 l_21 0; -1 0 1 0 0; 1 0 0 d_21 1 ] with 1 < l_21 < -2d_21 < 2l_21, and the maximal cones of the fan Σ corresponding to the minimal ambient toric variety are given as
[ σ_1 := (v_01,v_11,v_21,v_1), σ_2 := (v_01,v_12,v_21,v_1),; σ_3 := (v_01,v_11,v_12,v_21), σ_4 := (v_11,v_12,v_1), ]
where σ_1,σ_2,σ_3 are big cones and σ_4 is a leaf cone. Moreover, Σ contains two elementary big cones, σ_1∩σ_3 and σ_2∩σ_3.
The vertices of the anticanonical complex can then be calculated from these data using <cit.>. These are given as the columns of P together with the following points in the lineality space of the tropical variety trop(X):
v_σ_1∩σ_3' = (0, 0, -l_21/(2+l_21), (l_21+2 d_21)/(2+l_21)) and v_σ_2∩σ_3' = (0, 0, l_21/(2+l_21), (l_21+2 d_21)/(2+l_21)).
Let X be a Fano variety arising from Setting <ref> and denote by ι_X its Gorenstein index. Then we have -ι_X ≤ k<-ι_X/2 such that
l_21(2k+ι_X)/ι_X + 2 | 4kι_X and k l_21/ι_X = d_21-1.
In particular, for fixed Gorenstein index there are finitely many varieties arising via this setting.
Due to the structure of the defining fan Σ of Z_X we obtain that (v_21,v_1,v_σ_1∩σ_3',0) is a cell in 𝒜_X, and using Proposition <ref> we obtain d(aff(v_21,v_1,v_σ_1∩σ_3'),0) = l_21/(l_21,d_21-1)|ι_X.
In particular, this implies
k l_21/ι_X = d_21-1
for some k∈. Because of l_21 < -2d_21 < 2l_21, we have -ι_X ≤ k <-ι_X/2.
Similarly, since (v_11,v_12,v_σ_1∩σ_3',v_σ_2∩σ_3',0)∈𝒜_X, we see that
d(aff(v_11,v_12,v_σ_1∩σ_3',v_σ_2∩σ_3'),0) = (l_21+2 d_21)/(l_21+2 d_21,l_21+2)|ι_X.
In particular, we obtain (l_21+2 d_21)|ι_X(l_21+2). Using (<ref>), we infer
(l_21 + (2k/ι_X)l_21 + 2) = (l_21(2k+ι_X)/ι_X + 2) |ι_X(l_21+2).
We notice that -ι_X≤ 2k+ι_X < 0. Therefore, ι_X(l_21+2)| (2k+ι_X)ι_X(l_21+2). Hence,
(l_21(2k+ι_X)/ι_X + 2) | (2k+ι_X)ι_X(l_21+2) - ι_X^2(l_21(2k+ι_X)/ι_X + 2) = 4kι_X.
Thus, for fixed ι_X there are finitely many possibilities for k and thus finitely many possibilities for l_21. |
http://arxiv.org/abs/2409.02499v1 | 20240904075035 | Twin electroweak bubble nucleation and gravitational wave under the $S_3$ symmetry of two-Higgs-doublet model | [
"Vo Quoc Phong",
"Nguyen Xuan Vinh",
"Phan Hong Khiem"
] | hep-ph | [
"hep-ph"
] |
|
http://arxiv.org/abs/2409.02826v1 | 20240904154532 | Automatic facial axes standardization of 3D fetal ultrasound images | [
"Antonia Alomar",
"Ricardo Rubio",
"Laura Salort",
"Gerard Albaiges",
"Antoni Payà",
"Gemma Piella",
"Federico Sukno"
] | eess.IV | [
"eess.IV",
"cs.CV"
] |
Department of Information and Communications Technologies, Universitat Pompeu Fabra, 122-140 Tànger, Barcelona, Spain Department of Obstetrics and Gynecology, Hospital del Mar, 25-29 Passeig Marítim, Barcelona,Spain Department of Medicine and Life Sciences, Universitat Pompeu Fabra, 88 Doctor Aiguader,Barcelona, Spain Fetal Medicine Unit, Obstetrics Service, Department of Obstetrics, Gynecology and Reproductive Medicine, University Hospital Quirón Dexeus, Barcelona, Spain
Automatic facial axes standardization of 3D fetal ultrasound images
Antonia Alomar1 Ricardo Rubio2,3Laura Salort 1 Gerard Albaiges4 Antoni Payà2,3Gemma Piella1Federico Sukno1
September 9, 2024
==============================================================================================================
§ ABSTRACT
Craniofacial anomalies indicate early developmental disturbances and are usually linked to many genetic syndromes. Early diagnosis is critical, yet ultrasound (US) examinations often fail to identify these features. This study presents an AI-driven tool to assist clinicians in standardizing fetal facial axes/planes in 3D US, reducing sonographer workload and facilitating the facial evaluation. Our network, structured into three blocks—feature extractor, rotation and translation regression, and spatial transformer—processes three orthogonal 2D slices to estimate the necessary transformations for standardizing the facial planes in the 3D US. These transformations are applied to the original 3D US using a differentiable module (the spatial transformer block), yielding a standardized 3D US and the corresponding 2D facial standard planes. The dataset used consists of 1180 fetal facial 3D US images acquired between weeks 20 and 35 of gestation. Results show that our network considerably reduces inter-observer rotation variability in the test set, with a mean geodesic angle difference of 14.12^∘ ± 18.27^∘ and a Euclidean angle error of 7.45^∘ ± 14.88^∘. These findings demonstrate the network's ability to effectively standardize facial axes, crucial for consistent fetal facial assessments. In conclusion, the proposed network demonstrates potential for improving the consistency and accuracy of fetal facial assessments in clinical settings, facilitating early evaluation of craniofacial anomalies.
§ INTRODUCTION
Craniofacial anomalies serve as indicators of developmental disturbances at early stages of life, encompassing a wide range of heterogeneous conditions associated with many genetic syndromes <cit.>. Estimates suggest that up to 40% of genetic syndromes produce alterations in the normal morphology of the face and the head. Although these associations have predominantly been identified in adult populations, there is increasing interest in early assessment <cit.>. Consequently, diagnostic efforts are moving towards prenatal and postnatal stages <cit.>.
To evaluate the fetal development, 2D ultrasound (US) imaging is the standard procedure. Unfortunately, dysmorphology features are hard to identify in this way, due to the noisy nature of fetal US (low signal-to-noise ratio, fetal or probe movements, fetal position, and limbs in front of the face) <cit.>. Currently, 3D/4D US serves as a complement to 2D US. They prove to be particularly useful in diagnosing various fetal anomalies, especially those involving facial abnormalities, neural tube defects, and skeletal anomalies <cit.>.
In this context, acquiring an US standard plane (SP) is crucial for performing an accurate fetal diagnosis, as the SP is used to measure and analyse biomarkers and abnormal features <cit.>. 3D US has the advantage of capturing multi-view planes allowing sonographers to manually select SPs from 3D US images or videos during prenatal exams. While this process is essential, it is also time-consuming and observer-dependent. This manual selection can be laborious and biased due to the extensive search space, the sonographer experience, and the variability of the fetus orientation <cit.>.
In this study, we aim to reduce the sonographer's workload while enhancing the accuracy and interpretability of fetal facial SP detection. We propose an AI-driven tool designed to assist clinicians in standardizing the facial axes/planes in the 3D US. It aims to minimize variability in plane detection while mitigating the effects of clinician subjectivity in selecting accurate SPs for fetal facial assessment. Standardizing the fetal facial axes intends to facilitate the evaluation of facial biomarkers and biometric measurements to perform facial assessment. The proposed method consists of regressing the transformation necessary to standardize the sagittal, coronal and axial fetal facial axes, taking as input 3 orthogonal planes centered at the middle of the 3D US image. The novelty is that, instead of combining the regression model with another task, such as the classification of the planes, we add a differentiable block that incorporates the image loss between estimated and ground truth (GT) planes as part of the minimization strategy. This helps the network learn the structures that should be present in the SPs. Additionally, the proposed algorithm offers the advantage of low computational cost and easy integration into the echographer or into 3D Slicer as a built-in feature.
§ RELATED WORK
Several methods have been proposed for automatically detecting 2D SPs in US images using deep learning. Usually, this task has been approached as an image classification problem, using convolutional neural networks (CNNs) or recurrent neural networks (RNNs) <cit.>. However, these methods only determine whether the acquired 2D slices are SPs, but do not inform on what correction shall be applied to them in case they are not SPs.
Another common strategy consists of regressing the plane parameters or transformation matrices to achieve the SP in the 3D US volume. For example, Feng et al. <cit.> proposed a constrained marginal space learning method that combines both 2D and 3D information for fetal face detection in 3D US. Nie et al. <cit.> introduced a deep belief network combined with a detection algorithm to provide prior structural knowledge to the network. Li et al. <cit.> presented an iterative transformation network to detect SPs in 3D fetal US using a CNN that performs both plane classification and regression to estimate the transformation parameters. Di Vece et al. <cit.> improved on previous results by estimating the six-dimensional pose of arbitrarily oriented US planes of the fetal brain with respect to a normalized template frame using a CNN regression network. Recently, reinforcement learning (RL) has shown great potential in addressing SP localization as a regression task <cit.>. Although RL approaches have achieved high performance, several issues remain to be addressed, such as the reliance of current studies on initial registration to ensure data orientation consistency, which can easily fail if the pre-registration process is unsuccessful. Moreover, unlike parameter-regression models using a CNN, RL models simplify the problem of regressing the transformation parameters by considering a discrete action space and, as a consequence, limit the transformations that can be applied.
To avoid dependence on pre-registration and ensure no limitations on the transformations that can be performed, we choose a classical yet effective parameters regression approach using a CNN. To help the network learn the structures that should be present in the SPs, we add a differentiable block that incorporates the image loss between estimated and GT planes as part of the minimization strategy. As a result, the number of parameters of the network is significantly reduced because no classification blocks are used.
§ METHOD
§.§.§ Data & Pre-processing:
The dataset used consists of 1180 fetal facial 3D US images acquired between weeks 20 and 35 of gestation (26.56 ± 2.72) using a Voluson E8 RSA (BT-20) with a convex probe (4D-RAB6-D, 2–8 MHz) at two hospitals in Barcelona (Hospital de Mar and Hospital Universitari Dexeus) according to their Ethical Research Committee and the current legislation (Organic Law 15/1999). The study population comprises subjects from low-risk pregnancies, meaning without any pathology, or known family cases of craniofacial or syndromic pathologies, which were all carried to term. The data is divided into training, validation, and test sets, with 72%, 12%, and 16% of the data allocated to each set, respectively. To facilitate the use of deep learning, it is essential to standardize the input image size across the training, validation, and test sets. We implemented this through two steps: 1) down-sampling the 3D US image by a factor of two, to reduce the computational cost of the network; 2) symmetric zero-padding to the center of the 3D US to achieve a size of U ∈ℝ^C× H × W × D where H,W,D = 256 are the height, width, and depth dimensions and C= 1. The latter is performed to ensure that no information is cropped out during rotation and translation. The 2D initial planes are defined by I_0 = [I_s, I_c, I_a] with I_s = U(1,H/2, :, :), I_c = U(1, :, W/2, :) and I_a = U(1, :, :, D/2), corresponding to the sagittal, coronal, and axial planes, respectively.
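A numpy sketch of these two pre-processing steps and of the extraction of I_0 (variable names are ours; the scanner volume is assumed to be a plain 3D array whose down-sampled dimensions do not exceed 256):

import numpy as np

def preprocess(us_volume):
    # 1) down-sample the 3D US by a factor of two (reduces computational cost)
    v = us_volume[::2, ::2, ::2]
    # 2) symmetric zero-padding to (256, 256, 256)
    pad = [((256 - n) // 2, 256 - n - (256 - n) // 2) for n in v.shape]
    U = np.pad(v, pad, mode='constant')[None]        # U: (1, H, W, D)
    H, W, D = U.shape[1:]
    I_s = U[0, H // 2, :, :]                         # initial sagittal slice
    I_c = U[0, :, W // 2, :]                         # initial coronal slice
    I_a = U[0, :, :, D // 2]                         # initial axial slice
    return U, np.stack([I_s, I_c, I_a])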
§.§.§ Ground Truth Standard Planes:
The facial GT SPs we are interested in locating are the axial, coronal and sagittal planes that define the canonical axes of the fetal face. They are obtained by minimizing the fitting error of the 3 orthogonal planes to 23 anatomical landmarks located by expert clinicians in the 3D US (see Appendix Fig. 1) and following the recommendations from the international 3D focus group <cit.>. We constrain the planes' normal vectors n_a, n_c, and n_s to be orthonormal. The center of the planes c = (c_x, c_y, c_z)^T is defined as the intersection point of the 3 planes. The GT is obtained using custom code in 3D Slicer. The extracted normal vectors of the 3 orthogonal planes are used to compute the rotation matrix R_gt∈ℝ^3×3 needed to standardize the image axes to the estimated facial SPs. The GT rotation can be written as the change-of-basis matrix S_𝔹_1 →𝔹_2 from 𝔹_1 to 𝔹_2. In our case, the 𝔹_1 coordinates correspond to the canonical basis and the 𝔹_2 coordinates to the estimated normal vectors. Thus,
R_gt= S_𝔹_1 →𝔹_2 = [ n_s^T; n_c^T; n_a^T ] = [ r_11 r_12 r_13; r_21 r_22 r_23; r_31 r_32 r_33; ]
The rotation regression is performed in terms of quaternion representation for compactness, i.e., q_gt = (q_0, q_1, q_2, q_3)^T. The conversion from rotation matrix to quaternion follows
q_gt = [ q_0; q_1; q_2; q_3 ] = [ 1/2√(1 + r_11 - r_22 - r_33); (r_12 + r_21)/(4 q_0); (r_13 + r_31)/(4 q_0); (r_23 - r_32)/(4 q_0) ]
The intersection/center of the planes is the GT translation t_gt = c∈ℝ^3. Then, the GT transformation matrix is
θ_gt = [ R_gt , t_gt; ] =[ r_11 r_12 r_13 t_x; r_21 r_22 r_23 t_y; r_31 r_32 r_33 t_z; ]∈ℝ^3×4
To ensure compatibility with the spatial transformer block, the translation is expressed in image relative size. Thus, each component of t_gt is in the range [-1,1]. The 2D GT sagittal, coronal and axial SPs are obtained as I_gt = [I_s, I_c, I_a] where I_s = V_gt(1,H/2, :, :), I_c = V_gt(1, :, W/2, :) and I_a = V_gt(1, :, :, D/2), and V_gt is the transformed US using θ_gt.
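A numpy sketch of the GT construction (the plane normals and center are assumed to be already estimated from the 23 landmarks; the quaternion branch is the one given above and assumes 1 + r_11 - r_22 - r_33 > 0; the voxel-to-relative conversion of c is our assumption of how [-1, 1] coordinates are obtained):

import numpy as np

def ground_truth_pose(n_s, n_c, n_a, c, volume_shape):
    R_gt = np.stack([n_s, n_c, n_a])                 # change-of-basis matrix
    q0 = 0.5 * np.sqrt(1.0 + R_gt[0, 0] - R_gt[1, 1] - R_gt[2, 2])
    q_gt = np.array([q0,
                     (R_gt[0, 1] + R_gt[1, 0]) / (4 * q0),
                     (R_gt[0, 2] + R_gt[2, 0]) / (4 * q0),
                     (R_gt[1, 2] - R_gt[2, 1]) / (4 * q0)])
    # translation in image-relative coordinates, each component in [-1, 1]
    t_gt = 2.0 * np.asarray(c) / np.asarray(volume_shape) - 1.0
    return q_gt, t_gt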
§.§.§ Feature Extractor Block:
The inputs of the feature extractor block (I_0 ∈ℝ^H× W × 3) are the 3 orthogonal planes located at the center of the 3D US image (sagittal, coronal and axial plane). Each branch uses the AG-SonoNet <cit.> as the backbone feature extractor, with the weights shared among the three branches. Then, each view has a specialized aggregation block that adds the attention information from multiple layers of the network to extract specialized features from each view. This information is concatenated and fed to the translation and rotation regression block.
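A hedged PyTorch sketch of this block (the real backbone is AG-SonoNet with multi-layer attention aggregation; here a generic 2D CNN and simple pooling heads stand in for it, so all module names and sizes are assumptions):

import torch
import torch.nn as nn

class ThreeViewExtractor(nn.Module):
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone                        # weights shared by the three branches
        self.agg = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten())
            for _ in range(3))                          # one aggregation head per view

    def forward(self, planes):                          # planes: (B, 3, H, W), one channel per view
        feats = [head(self.backbone(planes[:, i:i + 1]))
                 for i, head in enumerate(self.agg)]
        return torch.cat(feats, dim=1)                  # concatenated per-view features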
§.§.§ Translation & Rotation Regression Block:
It consists of two fully connected layers that convert the extracted features from the three orthogonal planes into the translation and rotation necessary to achieve the standardized axes/planes. The output is the regression vector z∈ℝ^7. The first three positions correspond to the translation vector t_es∈ℝ^3 where t_es= (t_x,t_y,t_z)^T with each component being in the range [-h_max,h_max]. The remaining 4 positions correspond to the quaternion representation of the rotation matrix q_es= (q_0,q_1,q_2,q_3)^T with each component being in the range [-1,1]. To represent a valid rotation, ||q_es|| needs to be 1. To ensure that this condition is satisfied, a normalization layer was added after the last fully connected layer. Given q_es, the estimated rotation can be expressed as:
R_es = [ 1 - 2(q_2^2 + q_3^2) 2(q_1q_2 - q_0q_3) 2(q_1q_3 + q_0q_2); 2(q_1q_2 + q_0q_3) 1 - 2(q_1^2 + q_3^2) 2(q_2q_3 - q_0q_1); 2(q_1q_3 - q_0q_2) 2(q_2q_3 + q_0q_1) 1 - 2(q_1^2 + q_2^2) ] = [ r_11 r_12 r_13; r_21 r_22 r_23; r_31 r_32 r_33; ]
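A PyTorch sketch of the regression head and of the quaternion-to-matrix conversion above (the tanh scaling used to keep t_es in [-h_max, h_max] is our assumption; the paper only states the range):

import torch
import torch.nn as nn
import torch.nn.functional as F

class PoseRegressor(nn.Module):
    def __init__(self, in_dim, hidden=512, h_max=1.0):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                nn.Linear(hidden, 7))   # z = (t, q) in R^7
        self.h_max = h_max

    def forward(self, x):
        z = self.fc(x)
        t = self.h_max * torch.tanh(z[:, :3])           # each component in [-h_max, h_max]
        q = F.normalize(z[:, 3:], dim=1)                # normalization layer: ||q|| = 1
        return t, q

def quaternion_to_matrix(q):
    # R_es from q_es = (q0, q1, q2, q3), row by row as in the equation above.
    q0, q1, q2, q3 = q.unbind(dim=1)
    R = torch.stack([
        1 - 2 * (q2 ** 2 + q3 ** 2), 2 * (q1 * q2 - q0 * q3), 2 * (q1 * q3 + q0 * q2),
        2 * (q1 * q2 + q0 * q3), 1 - 2 * (q1 ** 2 + q3 ** 2), 2 * (q2 * q3 - q0 * q1),
        2 * (q1 * q3 - q0 * q2), 2 * (q2 * q3 + q0 * q1), 1 - 2 * (q1 ** 2 + q2 ** 2),
    ], dim=1)
    return R.view(-1, 3, 3)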
§.§.§ Spatial Transformer Block:
It is a differentiable module capable of applying spatial transformations to the original 3D US image U ∈ R^H× W × D × C, resulting in a new standardized 3D US image V ∈ R^H × W × D × C and facial 2D SPs. The spatial transformer uses a differentiable 3D bi-linear sampling as defined in <cit.>. Each output value for pixel i can be written as
V_i = ∑^H_n ∑^W_m ∑^D_l U^c_n,m,l max(0,1-|x^inp_i -m|) max(0,1-|y^inp_i -n|) max(0,1-|z^inp_i -l|)
where (x_i^inp, y_i^inp, z_i^inp) are the input coordinates that define the sampling points in the original 3D US U and U^c_n,m,l is the value of U at location (n,m,l) in channel c. The output coordinates (x_i^out, y_i^out, z_i^out) are defined to lie on a regular grid G = G_i of pixels G_i = (x^out_i , y^out_i , z^out_i) and can be obtained by a 3D affine transformation:
[ x^inp_i; y^inp_i; z^inp_i; ] = 𝒯_θ(G) = θ(q,t) [ x^out_i; y^out_i; z^out_i; 1; ] =[ r_11 r_12 r_13 t_x; r_21 r_22 r_23 t_y; r_31 r_32 r_33 t_z; ][ x^out_i; y^out_i; z^out_i; 1; ]
θ (q,t) is the estimated 3D transformation matrix. We use height, width and depth normalized coordinates, such that x_i,y_i,z_i ∈ [-1,1]. The 2D estimated SPs are obtained as I_es = [I_s, I_c, I_a] where I_s = V(1,H/2, :, :), I_c = V(1, :, W/2, :) and I_a = V(1, :, :, D/2).
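In PyTorch, the same differentiable map can be realized with affine_grid and grid_sample, which also use normalized coordinates in [-1, 1]; note that PyTorch orders volumes as (B, C, D, H, W) rather than the (C, H, W, D) convention above, so the sketch below is an adaptation, not the authors' code:

import torch
import torch.nn.functional as F

def spatial_transform(U, R_es, t_es):
    # U: (B, 1, D, H, W); R_es: (B, 3, 3); t_es: (B, 3) in [-1, 1] coordinates.
    theta = torch.cat([R_es, t_es.unsqueeze(-1)], dim=2)             # theta(q, t): (B, 3, 4)
    grid = F.affine_grid(theta, list(U.shape), align_corners=True)   # T_theta(G)
    V = F.grid_sample(U, grid, mode='bilinear', align_corners=True)  # trilinear for 5D input
    # the three orthogonal center slices of the standardized volume:
    slices = [V.select(dim=d, index=V.shape[d] // 2) for d in (2, 3, 4)]
    return V, slices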
§.§.§ Cumulative Transformations & Initialization:
At initialization time (it=0), R_0 is a random rotation defined by the Euclidean angles α_x, α_y, α_z ∈ [-20,20] degrees and t_0 =(t_x,t_y,t_z)^T is random translation with each component being in the range [-0.05, 0.05]. To preserve image quality, we accumulate the transformations and perform a unique transformation to the input 3D image. If multiple steps of the network are applied, we define the transformations at step it as R_es^it = R_es^it-1 R_es and t_es^it= t_es^it-1 + t_es, whereas R_gt^it = R_es^-1 R_gt^it-1 and t_gt^it = - (R_es^it-1)^-1t_es + (R_es^it-1)^-1t_gt. Here, R_es, t_es correspond to the rotation and translation estimated by the CNN at the current iteration, whereas R_es^it, t_es^it denote the accumulated rotation and translation at iteration it. We found that 3 iterations are enough to improve performance by refining the SP estimates. However, there was no need to train the network in an iterative way.
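A sketch of one accumulation step following the update rules above (plain numpy; for rotations R^{-1} = R^T):

import numpy as np

def accumulate_step(R_acc, t_acc, R_gt, t_gt, R_new, t_new):
    # R_*: (3, 3) rotation matrices, t_*: (3,) vectors; R_acc/t_acc are the
    # running estimate, R_new/t_new the current network output.
    R_gt_next = R_new.T @ R_gt               # R_gt^{it} = R_es^{-1} R_gt^{it-1}
    t_gt_next = R_acc.T @ (t_gt - t_new)     # t_gt^{it} = (R_es^{it-1})^{-1} (t_gt - t_es)
    return R_acc @ R_new, t_acc + t_new, R_gt_next, t_gt_next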
§.§.§ Network Loss:
It is defined as a combination of the mean absolute error (MAE) between the GT and estimated translation, the relative angle (SO(3)) between the GT and estimated rotation, and the image loss between the GT and estimated SPs, computed as the Frobenius norm of their difference:
ℒ = β ||t_es - t_gt||_1 + γ acos(0.5 * (Tr(R_es (R_gt)^T)-1))+ ||I_gt - I_es||_1
where Tr denotes the trace, β and γ are the translation and rotation weights in the loss, and I_gt and I_es are the 2D GT and estimated slices corresponding to the fetal facial sagittal, coronal and axial SPs.
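A PyTorch sketch of this loss (the default weights β = γ = 1 and the clamp inside acos, added for numerical stability, are our assumptions):

import torch

def network_loss(t_es, t_gt, R_es, R_gt, I_es, I_gt, beta=1.0, gamma=1.0):
    # MAE translation term + geodesic SO(3) rotation term + image term.
    trans = (t_es - t_gt).abs().sum(dim=1)
    cos = 0.5 * (torch.einsum('bii->b', R_es @ R_gt.transpose(1, 2)) - 1.0)
    rot = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))   # clamp keeps the gradient finite
    img = (I_gt - I_es).abs().flatten(1).sum(dim=1)
    return (beta * trans + gamma * rot + img).mean()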
§ EXPERIMENTS
The proposed method is compared to the inter-observer variability obtained from 3 different observers placing landmarks on the 3D US images as described in González-Aranceta et al. <cit.>, which are then used for estimating the planes as described in Section <ref>. Next, we compare to the state-of-the-art method proposed by Li et al. <cit.>. For a fair comparison, the network used is their M1 baseline model with the addition of the differentiable spatial transformer module to enable training with the image loss. The only task learned is the regression of t, q using as input 3 orthogonal planes. The predicted SPs/axes are evaluated in the test set against the GT using the distance between the GT and the estimated translation, and the rotation angles between the GT and the estimated rotation. The image similarity of the planes is also measured using the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM).
§ RESULTS & DISCUSSION
Table <ref> summarizes the results obtained. The proposed method outperforms the state of the art method from Li et al. <cit.> and even challenges inter-observer variability, producing smaller angular errors although larger translation errors.
Table <ref> shows the translation and rotation (Euclidean angle) errors obtained per SP/axis. It highlights that the coronal and sagittal planes are more challenging to locate in terms of rotation than the axial plane. Our approach reduces the inter-observer rotation error of the sagittal, axial and coronal planes. The translation error obtained is around 6 mm per plane/axis, higher than the inter-observer error. Fig. <ref> shows some qualitative examples and comparisons between the GT 2D planes and those estimated by the proposed method. Despite a translation error per plane of approximately 6 mm in patients 2 and 3, the estimated plane closely approximates the GT plane.
Thus, the proposed network correctly learns the plane angles, but its translation error is higher than the inter-observer error. This could be due to the high variability in defining the plane localization, as multiple slices could closely resemble each other <cit.>. However, it could also be that the most informative planes for the network are not the ones defined as GT planes, so that it sacrifices translation accuracy for rotation accuracy. Although the translation errors obtained with the proposed network are larger than the inter-observer ones, the mean translation error per plane is around 6 mm. Moreover, rotation can be regarded as more important than translation, as the aim is to standardize the facial axes. This allows the sonographer to examine the standardized fetal 3D facial US, where the fetal pose/US probe rotations are removed, facilitating the evaluation of the fetal face. Furthermore, the proposed network is able to reduce the rotation inter-observer variability. This can be highly beneficial for homogenizing the facial evaluation criteria and for reducing the reliance on clinician expertise, while reducing the time burden of manually locating the planes.
§ CONCLUSIONS
We propose a network that estimates the transformation necessary to obtain the standard sagittal, coronal and axial facial US axes taking as input three 2D orthogonal planes. Evaluation on 184 US volumes shows that the network correctly standardizes the US 3D axes while reducing the rotation variability across observers. The method has the potential to be applied easily in a clinical setting due to its low computational cost. The standardization of the facial 3D US aims to facilitate the analysis of the facial biometric measurements to assess the presence of craniofacial abnormalities.
§.§.§
This work was partly supported by grants PID2020-114083GB-I00 and PRE2021-097544 funded by MICIU/AEI/10.13039/501100011033/ and under the ICREA Academia programme.
§.§.§
None of the authors have any competing interests.
|